The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster, I reserve the right to refuse a contribution or an article if it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife, Lucienne.
In 2012 she lost her courageous battle against cancer!
I started this blog in 2011, because I was not allowed to give up my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the fascinating world of UFOs and UAPs: your source for revealing information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that carry out in-depth research, even if they are at times critical or skeptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the superb website www.ufowijzer.nl, run by Paul Harmans. The site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which allows us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Watch out for a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
13-10-2018
Spherical robot with 32 legs that 'moves like an amoeba' could be used to explore planets or in disaster response missions
Mochibot can control all of its legs independently to move swiftly
It allows for the most stable and controllable form of locomotion in a robot
It resembles the tensegrity robots that Nasa is building for planet exploration
It is based on a shape called a rhombic triacontahedron - a polyhedron with 32 vertices and 30 faces made of rhombuses (or rhombi)
Scientists have built an amoeba-like robot with 32 individually controlled legs in a bid to find the perfect combination of stability and control.
The thinking behind the unusual robot comes from previous robot-building experience, which found that a robot with more legs is often easier to control.
The robot is called Mochibot and is based on a shape called a rhombic triacontahedron - a polyhedron with 32 vertices and 30 faces made of rhombuses (or rhombi).
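As a quick sanity check on those counts (our own aside, not from the research team): any convex polyhedron must satisfy Euler's formula V - E + F = 2, and the rhombic triacontahedron's 32 vertices and 30 faces imply exactly 60 edges.

```python
# Sanity check of the rhombic triacontahedron's counts via Euler's formula.
V, F = 32, 30          # vertices and faces, as stated above
E = 30 * 4 // 2        # each rhombus has 4 edges, each edge shared by 2 faces: 60
assert V - E + F == 2  # Euler's formula for convex polyhedra
print("V - E + F =", V - E + F)
```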
The ideal shape for a robot that can travel in any direction at any moment is a sphere.
However, a sphere is flawed because it relies on a single point of contact with the floor, making the machine unstable.
Mochibot is based on a sphere but the team made some improvements to make it easier to control.
Its deformability allows it to adapt to the terrain and how much ground contact it has.
The innovative design moves by retracting the arms in the direction of motion and simultaneously extending the arms on the other side.
To stop, it flattens itself out parallel to the ground, as per IEEE Spectrum.
The deformability comes from the individually telescoping legs: each one is made of three sliding rails (which behave like linear actuators).
This allows them to extend to just over half a meter in length or contract to less than a quarter meter.
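To make the rolling idea concrete, here is a minimal sketch (our own illustration, not the team's published controller) of how per-leg lengths could be set from a desired travel direction: legs pointing the way you want to go retract, legs on the far side extend, with lengths bounded by the leg travel quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for Mochibot's 32 leg axes: unit vectors from the hub outward.
# (The real legs point at the vertices of a rhombic triacontahedron.)
legs = rng.normal(size=(32, 3))
legs /= np.linalg.norm(legs, axis=1, keepdims=True)

def leg_lengths(direction, base=0.36, gain=0.16):
    """Retract legs aligned with `direction`, extend opposing ones (meters)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # legs @ d is +1 for a leg pointing along travel, -1 for one pointing away
    return np.clip(base - gain * (legs @ d), 0.20, 0.52)

# Lean the body toward +x so the robot overbalances and rolls that way.
# (The real robot stops by flattening itself against the ground instead.)
print(leg_lengths([1.0, 0.0, 0.0]).round(2))
```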
In practice, these extremes are not used, and the maximum diameter of Mochibot is about a meter (40 inches).
It weighs 10 kilograms (22 pounds) including the batteries, and has plenty of room inside for a payload.
A variety of cameras, sensors, or sampling devices could be integrated into the arms.
It’s somewhat similar to one of the tensegrity robots that Nasa has been working on for a while but Mochibot isn’t squishy in the same way that a tensegrity robot is.
The Nasa robot can survive being thrown from a roof; Mochibot is not quite that durable.
Tensegrity robots locomote by flopping over themselves in a series of motions so complex that it’s taken machine-learning algorithms to figure out how to get them to move efficiently.
This makes them unfathomably complicated and very difficult to steer in a particular direction.
Mochibot’s big advantage over a tensegrity robot is that it can move smoothly and continuously in any direction you like by altering its shape.
Nasa said in a report on its tensegrity robots: 'Nasa desires lightweight, deployable and reliable devices.
'Thus, the long term goal of this work is to develop actively controlled tensegrity structures and devices, which can be deployed from a small volume and used in a variety of applications including limbs used for grappling and manipulating the environment or used as a stabilizing and balancing limb during extreme terrain access.'
The plethora of legs also gives the urchin-like robot an advantage over wheeled exploration robots, as every direction can be optimal for movement, rather than just forward and backward.
The designers also suggest that Mochibot is better at dealing with deformable terrain like sand or loose rock, because its method of rolling locomotion is much less traction dependent.
For applications like planetary exploration or disaster response, Mochibot has the potential to be a versatile platform.
It’s highly mobile, very redundant, and looks like it could manage a hefty science payload.
The next step is to experiment with different kinds of terrain, making sure that Mochibot can roll itself up and down slopes and over rocks and gullies without any problems.
Robots are an integral part of life, and they continue to seize the attention of the public with the ever-widening guises and uses they come in.
Boston Dynamics' robo-dog went viral when a video surfaced of the machine climbing stairs and opening doors.
Since then the company has developed a humanoid robot called Atlas.
WHAT IS BOSTON DYNAMICS' ATLAS HUMANOID ROBOT?
Atlas is the most human-like robot in Boston Dynamics' line-up.
It was first unveiled to the public on 11 July 2013.
According to the company, Atlas is a 'high mobility, humanoid robot designed to negotiate outdoor, rough terrain'.
Atlas measures 1.5m (4.9ft) tall and weighs 75kg (11.8st).
The humanoid walks on two legs, leaving its arms free to lift, carry, and manipulate objects in its environment.
Stereo vision, range sensing and other sensors allow Atlas to walk across rough terrain and keep its balance.
'In extremely challenging terrain, Atlas is strong and coordinated enough to climb using hands and feet, to pick its way through congested spaces,' Boston Dynamics claims.
Atlas is able to hold its balance when it is jostled or pushed.
If the humanoid robot should fall over, it can get up on its own.
Atlas is designed to help emergency services in search and rescue operations.
The robot could be used to shut off valves, open doors and operate powered equipment in environments where human rescuers could not survive.
The US Department of Defence said it has no interest in using Atlas in warfare.
The aviation industry, only born a little over a century ago, is on the brink of an enormous change. While electric cars and electric scooters are already dotting city streets, planes are preparing to join the emissions-free club.
On October 1, HES Energy Systems announced plans to craft the first regional hydrogen-electric passenger plane in the world. The company aims for the four-passenger aircraft, named Element One, to take to the skies in 2025.
“We are looking at innovative business models and exploring collaboration with companies such as Wingly,” Taras Wankewycz, CEO of HES Energy Systems, told Inverse in an email. Wingly, a flight-sharing startup, sees a perfect pairing between Element One and France’s underused airfields.
The zero-emission plane boasts a range of 500 km to 5,000 km in service, thanks to its lightweight hydrogen fuel cell technology.
What Makes Today the Moment to Go Electric?
The title of first electric plane to take flight actually goes to Heino Brditschka, an airplane manufacturer who flew one 300 meters in the air for about 10 minutes in 1973. But the electric aircraft industry only took off in earnest over the past nine years, spurred on mostly by start-ups and new players in aviation, according to consulting firm Roland Berger in a Financial Times report. That’s helped drive more innovation: companies like Siemens, with its record-breaking 200-plus mile per hour electric 330LE, as well as the teams working on electric versions of the Boeing 737, are pursuing similar initiatives.
Aside from competition, the recent push to electric flight is chiefly motivated by environmental concerns. Aviation makes up 3 percent of global carbon emissions, according to the EU’s Clean Sky 2 initiative. And with air travel projected to increase threefold by 2050, the industry is trying to avoid contributing to the problem of climate change any more than it already is.
See also: NASA Is Developing A Supersonic Plane That Is (Hopefully) Super Quiet
In the context of rising emissions, this makes a plane like Element One — designed to produce zero emissions — absolutely transformative. The aircraft would use ultra-light hydrogen fuel cells (with hydrogen stored either as a gas or a liquid) to tackle the industry-wide challenge of battery energy density not matching traditional fuel density (in other words, the weight of batteries needed to power an aircraft could be overwhelming). The Element One also takes only 10 minutes to refuel, and may eventually use solar or wind energy to recharge mid-flight. Although the prototype fits four passengers, the aircraft could scale up to 10-20 passengers or more, according to Wankewycz. Innovations like these allow Element One to outperform other battery-electric airplanes, reaching a range of 500 km (a little longer than the Grand Canyon) to 5,000 km (a little over the distance from L.A. to New York).
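A back-of-envelope comparison shows why hydrogen is attractive here. The figures below are rounded textbook values of our own choosing, not numbers from HES, and they ignore tank and fuel-cell stack mass, which eats into the advantage in a real system:

```python
# Rough specific-energy comparison: hydrogen fuel cell vs. battery pack.
H2_LHV_KWH_PER_KG = 33.3    # lower heating value of hydrogen (~33.3 kWh/kg)
FUEL_CELL_EFFICIENCY = 0.5  # a typical round figure for a PEM fuel cell
LI_ION_KWH_PER_KG = 0.25    # a good lithium-ion pack (~250 Wh/kg)

usable = H2_LHV_KWH_PER_KG * FUEL_CELL_EFFICIENCY
print(f"usable energy per kg of hydrogen: ~{usable:.1f} kWh")
print(f"ratio over a Li-ion pack: ~{usable / LI_ION_KWH_PER_KG:.0f}x")
```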
Current challenges include the certification and testing that face every new aircraft, but Wankewycz is confident in preparing the Element One for success.
Hydrogen fuel cells could be swapped in as little as 10 minutes, and may eventually be recharged using other forms of renewable energy.
And with the expanded range of Element One, new promising opportunities for regional travel open. Wingly, a French flight-sharing startup collaborating with HES Energy Systems, found the perfect opportunity in France’s unused airfields.
“We analyzed the millions of destination searches made by the community of 200,000 pilots and passengers on our platform and confirm there is a tremendous need for inter-regional transport between secondary cities,” says Emeric de Waziers, CEO of Wingly in a press release. “By combining autonomous emission free aircraft such as Element One, digital community-based platforms like Wingly and the existing high density network of airfields, we can change the paradigm. France alone offers a network of more than 450 airfields but only 10% of these are connected by regular airlines. We will simply connect the remaining 90%.”
In today’s paradigm, small, short-distance flights like the ones de Waziers describes are a luxury of the terrifically rich. But at the intersection of hydrogen-electric technology and forward thinkers of startups like Wingly, passengers from diverse economic backgrounds may soon have a quieter, greener (and sleeker) reason to clap at the end of their flight.
“Star Wars,” “Her,” and “I, Robot.” What do all these movies have in common? The artificial intelligence (AI) depicted in them is crazy-sophisticated. These robots can think creatively, continue learning over time, and maybe even pass for conscious.
Real-life artificial intelligence experts have a name for AI that can do this — it’s Artificial General Intelligence (AGI). For decades, scientists have tried all sorts of approaches to create AGI, using techniques such as reinforcement learning and machine learning. No approach has proven to be much better than any other, at least not yet.
Indeed, there’s a catch here: despite all the excitement, we have no idea how to build AGI.
Either way, most experts think it’s coming — sooner rather than later. In a poll of conference attendees, AI research companies GoodAI and SingularityNet found that 37 percent of respondents think people will create human-level AI (HLAI) within 10 years. Another 28 percent think it will take 20 years. Just two percent think HLAI will never exist.
Almost every expert who had an opinion hedged their bets — most responses to the question were peppered with caveats and “maybes.” There’s a lot that we don’t know about the path towards HLAI, such as questions over who will pay for it or how we’re going to combine our algorithms that can think or reason but can’t do both.
Futurism caught up with a number of AI researchers, investors, and policymakers to get their perspective on when HLAI will happen. The following responses come from panels and presentations from the conference and exclusive interviews.
Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics, UNICRI, United Nations
At the moment, there is absolutely no indication that we are anywhere near AGI. And no one can say with any kind of authority or conviction that this would happen within a certain time frame. Or even worse, no one can say this can even happen period. We may never have AGI, so we need to take that into account when we are discussing anything.
Seán Ó hÉigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk
There’s still a lot of work to be done; there are still many things we don’t understand. Given we have this understanding, maybe it’s possible that it happens within 50 years.
I think we should enjoy the technology while it advances. We should be looking out for where to go in the future. But on the other hand, it’s not like we have human-level AI right now and I don’t think it’s going to happen very quickly. I think that if I’m lucky it’ll happen in my lifetime.
A worm’s level of intelligence is actually pretty doable. If you try to look at vision and planning, this is kind of narrowly doable. The integration of planning and learning, planning as its own thing is pretty well solved. But planning in a way which works with [machine learning] is not very well solved.
I think we are almost there. I am not predicting we will have general AI in three years, 30 years. But I am confident it can happen any day.
Ben Goertzel, CEO at SingularityNET and Chief Scientist at Hanson Robotics
I don’t think we need fundamentally new algorithms. I think we do need to connect our algorithms in different ways than we do now. If I’m right, then we already have the core algorithms that we need… I believe we are less than ten years from creating human-level AI.
I don’t think we’re almost there in the technology for General AI. I think general AI is almost a branding for a very general idea. Lifelong learning is an example of that — it’s a very particular type of AI. We know the theoretical foundation of that already, we know how nature does it, and it’s very well defined. There is a very clear direction, there is a metric. I think we can reach it in a close time.
On the last day of the conference, a number of attendees took part in a lightning round of sorts. Almost entirely for fun, these experts were encouraged to throw out a date by which they expected us to figure out how to make HLAI. The following answers, some of which were given by the same people who already answered the question, should be taken with an entire shaker of salt — some were meant as jokes and others are total guesses.
John Langford
Maybe 20 [years]?
Marek Rosa
I really have no idea which year, but if I have to say one year I’d say ten years in the future. The reason is it’s kind of vague, you know, like anything can happen in ten years.
A sluggish, yet precise robot designed by Japanese engineers demonstrates what construction sites might look like in the future.
The prototype developed at Japan’s National Institute of Advanced Industrial Science and Technology was recently featured in a video picking up a piece of plasterboard and screwing it into a wall.
The robot, called HRP-5P, is much less productive than a human worker. However, its motions are very precise, meaning that this prototype could evolve into a rugged model fit for real-life applications in demanding fields such as construction.
While most manufacturing fields are being disrupted by automation, with robots doing most of the work in microchip plants or car assembly lines, supervised by human personnel, the same can’t be said about construction. This field is way too dynamic — with every project being unique — and filled with all sorts of obstacles that are too challenging for today’s robots to navigate. HRP-5P, however, suggests that automation could one day become feasible in construction works as well.
For Japan, construction bots are not meant to put people out of jobs, but rather to supplement a dwindling workforce. There is a great shortage of manual labor in the island nation, which is suffering from declining birthrates and an aging population.
Previously, a New York-based company demonstrated a mason-bot capable of laying down 3,000 bricks a day — six times faster than the average human, and cheaper too. Elsewhere, such as at MIT, researchers are experimenting with 3-D printing entire homes in one go.
The Center for Process Innovation, a British technology research company, thinks they’ve got the next big step in aviation transportation figured out. They want to remove the windows from passenger planes and replace them with OLED touch-screens that extend along the plane’s entire length and display the view from outside through cameras mounted on the plane’s exterior.
According to them, windows are one of the greatest sources of unnecessary weight in passenger planes. Solid walls are stronger and allow the fuselage to be built thinner as well. The OLED screens that replace the windows would display the view outside and let passengers select entertainment and call for cabin service.
The technology does have its detractors, however – some are concerned about light pollution inside the cabin, and the panoramic view probably won’t do much to help those who are afraid of flying.
16-09-2018
Scientists are pushing the limits of 3D printing with these shape-shifting materials
The tech could be used to create magnetically controlled implants or "soft robots."
by Tom Metcalfe
Three-dimensional printing has been used to create all sorts of things, from car parts and experimental rocket engines to entire houses. Now scientists at MIT have found a way to 3D print objects that can change shape almost instantaneously in response to magnetic fields.
So far the researchers have created a few demonstration objects with the new technology, which uses plastic “3D ink” infused with tiny iron particles and an electromagnet-equipped printing nozzle. These include a plastic sheet that folds and unfolds, a star-shaped object that can catch a ball and a six-pointed structure that scrunches up and crawls like a spider.
But the researchers see broad applications for small shape-changing devices — what some call "soft robots" — especially in medicine.
“You can imagine this technology being used in minimally invasive surgeries,” said Xuanhe Zhao, a professor of engineering at MIT and a member of the team that developed the 3D-printed shape-shifting technology. “A self-steering catheter inside a blood vessel, for example — now you can use external magnetic fields to accurately steer the catheter.”
Other uses could include magnetically controlled implants to control the flow of blood, and devices that could be guided by magnet through the body — to take pictures, clear a blockage or deliver drugs to a specific location, Zhao said.
The technology might one day make it possible to 3D print entire soft robots, Zhao said. These could have information stored as magnetic data directly inside their structural materials, instead of needing additional electronics.
“The MIT soft robotics development is very cool ... It's an important step in terms of being able to control materials,” said Jim McGuffin-Cawley, an engineering and materials science professor at Case Western Reserve University in Cleveland, who was not involved with the MIT project. He noted that the technology allows researchers to make precise changes to the shape-shifting materials by using magnetic fields to control very small moving parts inside the materials themselves.
MIT is releasing free software and a recipe for its magnetic ink so that other scientists around the world can use the technology and print their own shape-shifting materials, Zhao said.
“With these three components they can design their own untethered, fast-transforming soft robots,” he said. “We hope this method can find very important applications in the fields of soft robotics [and] materials.”
14-09-2018
Thousands are embedding microchips under their skin to 'make their daily lives easier'
Thousands of Swedish people are having microchips embedded under their skin in place of ID cards and key cards, and even to pay for train tickets.
3,000 Swedes Have Microchips Installed
An estimated 3,000 people in Sweden have had a microchip installed; each chip is smaller than a single grain of rice. The number of people having chips installed under their skin has been rising over the last three years. The microchip technology was first used in Sweden in 2015.
Ulrika Celsing said that the microchip in her hand has replaced her need to carry around many daily necessities, including her office key card and gym card. When the 28-year-old arrives at work, she simply waves her hand close to a small box, enters a code, and the door opens.
Rail Line Scans Passengers' Hands to Take Fares
SJ Rail Line, which is owned by the state, began scanning the hands of passengers with microchips to take their fares once they were on board the train. It has been said that the chips could also be used to make purchases in much the same way as a contactless credit card, but so far no one has tested this.
The procedure to insert the microchip is much the same as having a piercing: the chip is injected by syringe into the person's hand. Celsing said that she had her chip installed during a work event, and all she felt during the procedure was a slight sting. Ben Libberton, a microbiologist at the MAX IV laboratory in Sweden, warned that microchip implants might cause a reaction or infection in the body's immune system.
Group Micro-chipping Is Becoming the In-thing
There is also the rise of biohacking, the modification of bodies using technology, which is said to be increasing as more people use devices such as Fitbits and Apple Watches. Bionyfiken, a biohacking group from Sweden, began organizing implant parties, where groups of people had chips inserted en masse, in the US, UK, Germany, France, and Mexico.
At a vending machine company in Wisconsin, 50 employees had microchips inserted into their hands, which they could use to purchase snacks, log into their computers and use other office equipment.
Sweden, a country of 10 million, has proven particularly willing to share personal details, which are recorded by the country's social-security system and are readily available; people can even find out each other's salaries by calling the public tax authority. Many people in Sweden do not believe the technology is at risk of being hacked. A microbiologist said that the data collected and shared is so limited that there should be little fear of hacking.
The Body Might be the Next Big Platform for Technology
It has been estimated that the human body will become the next big platform for technology, and that all of today's wearable technology will be implanted in the body within the next 5 to 10 years. No one will want to carry around a clumsy smartphone or smartwatch when they can have the same technology installed in their body.
When you return to school after summer break, it may feel like you forgot everything you learned the year before. But if you learned like an AI system does, you actually would have — as you sat down for your first day of class, your brain would take that as a cue to wipe the slate clean and start from scratch.
An AI system's tendency to forget what it previously learned when it takes on new information is called catastrophic forgetting.
That’s a big problem. See, cutting-edge algorithms learn, so to speak, after analyzing countless examples of what they’re expected to do. A facial recognition AI system, for instance, will analyze thousands of photos of people’s faces, likely photos that have been manually annotated, so that it will be able to detect a face when it pops up in a video feed. But because these AI systems don’t actually comprehend the underlying logic of what they do, teaching them to do anything else, even something pretty similar — like, say, recognizing specific emotions — means training them all over again from scratch. Once an algorithm is trained, it’s done; we can’t update it anymore.
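The effect is easy to reproduce on a toy scale. The sketch below (our own hypothetical demo, not code from any system mentioned here) trains a tiny logistic-regression "network" on task A, then trains the same weights on task B; accuracy on task A collapses toward chance because the new gradients overwrite the old solution:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis):
    """Toy binary task: label 2-D points by the sign of one coordinate."""
    X = rng.normal(size=(500, 2))
    return X, (X[:, axis] > 0).astype(float)

def train(w, X, y, epochs=500, lr=0.5):
    """Plain gradient descent on logistic loss, starting from weights w."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))  # sigmoid
        w -= lr * X.T @ (p - y) / len(y)                    # loss gradient
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

Xa, ya = make_task(axis=0)   # task A: sign of the x-coordinate
Xb, yb = make_task(axis=1)   # task B: sign of the y-coordinate

w = train(np.zeros(2), Xa, ya)
print("task A accuracy after training on A:", accuracy(w, Xa, ya))  # ~1.0

w = train(w, Xb, yb)         # continue training on task B only
print("task A accuracy after training on B:", accuracy(w, Xa, ya))  # ~0.5
```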
For years, scientists have been trying to figure out how to work around the problem. If they succeed, AI systems would be able to learn from a new set of training data without overwriting most of what they already knew in the process. Basically, if the robots should someday rise up, our new overlords would be able to conquer all life on Earth and chew bubblegum at the same time.
But still, catastrophic forgetting is one of the major hurdles preventing scientists from building an artificial general intelligence (AGI) — AI that’s all-encompassing, empathetic, and imaginative, like the ones we see in TV and movies.
In fact, a number of AI experts who attended The Joint Multi-Conference on Human-Level Artificial Intelligence last week in Prague said, in private interviews with Futurism or during panels and presentations, that the problem of catastrophic forgetting is one of the top reasons they don’t expect to see AGI or human-level AI anytime soon.
Catastrophic forgetting is one of the top reasons experts don’t expect to see human-level AI anytime soon.
But Irina Higgins, a senior research scientist at Google DeepMind, used her presentation during the conference to announce that her team had begun to crack the code.
She had developed an AI agent — sort of like a video game character controlled by an AI algorithm — that could think more creatively than a typical algorithm. It could “imagine” what the things it encountered in one virtual environment might look like elsewhere. In other words, the neural net was able to disentangle certain objects that it encountered in a simulated environment from the environment itself.
This isn’t the same as a human’s imagination, where we can come up with new mental images altogether (think of a bird — you can probably conjure up an image of what a fictional spherical, red bird might look like in your mind’s eye.) The AI system isn’t that sophisticated, but it can imagine objects that it’s already seen in new configurations or locations.
“We want a machine to learn safe common sense in its exploration so it’s not damaging itself,” said Higgins in her speech at the conference, which had been organized by GoodAI. She had published her paper on the preprint server arXiv earlier that week and also penned an accompanying blog post.
Let’s say you’re walking through the desert (as one does) and you come across a cactus. One of those big, two-armed ones you see in all the cartoons. You can recognize that this is a cactus because you have probably encountered one before. Maybe your office bought some succulents to liven up the place. But even if your office is cactus-free, you could probably imagine what this desert cactus would look like in a big clay pot, maybe next to Brenda from accounting’s desk.
Now Higgins’ AI system can do pretty much the same thing. With just five examples of how a given object looks from various angles, the AI agent learns what it is, how it relates to the environment, and also how it might look from other angles it hasn’t seen or in different lighting. The paper highlights how the algorithm was trained to spot a white suitcase or an armchair. After its training, the algorithm can then imagine how that object would look in an entirely new virtual world and recognize the object when it encounters it there.
“We run the exact setup that I used to motivate this model, and then we present an image from one environment and ask the model to imagine what it would look like in a different environment,” Higgins said. Again and again, her new algorithm excelled at the task compared to AI systems with entangled representations, which could predict fewer qualities and characteristics of the objects.
In short, the algorithm is able to note differences between what it encounters and what it has seen in the past. Like most people but unlike most other algorithms, the new system Higgins built for Google can understand that it hasn’t come across a brand new object just because it’s seeing something from a new angle. It can then use some spare computational power to take in that new information; the AI system updates what it knows about the world without needing to be retrained and re-learn everything all over again. Basically, the system is able to transfer and apply its existing knowledge to the new environment. The end result is a sort of spectrum or continuum showing how it understands various qualities of an object.
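A toy example (ours, vastly simpler than DeepMind's models) shows why disentanglement helps transfer. If each latent dimension tracks exactly one factor, a downstream rule that reads only the "object" dimension keeps working when the other factors change; if the factors are smeared across dimensions, the same simple rule breaks:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
obj = rng.integers(0, 2, size=n)     # factor 1: suitcase (0) vs. armchair (1)
light = rng.uniform(-1, 1, size=n)   # factor 2: scene lighting

disentangled = np.stack([obj.astype(float), light], axis=1)  # one factor per dim
entangled = disentangled @ rng.normal(size=(2, 2))           # factors mixed together

# A downstream "classifier" that only thresholds dimension 0:
acc_dis = ((disentangled[:, 0] > 0.5) == (obj == 1)).mean()
acc_ent = ((entangled[:, 0] > entangled[:, 0].mean()) == (obj == 1)).mean()
print("disentangled code:", acc_dis)  # 1.0 -- lighting never leaks in
print("entangled code:  ", acc_ent)   # unreliable; depends on the mixing
```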
Higgins’ model alone won’t get us to AGI, of course. But it marks an important first step towards AI algorithms that can continuously update as they go, learning new things about the world without losing what they already had.
“I think it’s very crucial to reach anything close to artificial general intelligence,” Higgins said.
“I think it’s very crucial to reach anything close to artificial general intelligence.”
And this work is all still in its early stages. These algorithms, like many other object recognition AI tools, excel at a rather narrow task with a constrained set of rules, such as looking at a photo and picking out a face among many things that are not faces. But Higgins’ new AI system is doing a narrow task in a way that more closely resembles creativity and some digital simulation of an imagination.
And even though Higgins’ research didn’t immediately bring about the era of artificial general intelligence, her new algorithm already has the ability to improve the existing AI systems we use all the time. For instance, Higgins tried her new AI system on a major set of data used to train facial recognition software. After analyzing the thousands and thousands of headshots found in the dataset, the algorithm could create a spectrum of any quality with which those photos have been labeled. As an example, Higgins presented the spectrum of faces ranked by skin tone.
Higgins then revealed that her algorithm was able to do the same for the subjective qualities that also find their way into these datasets, ultimately teaching human biases to facial recognition AI. Higgins showed how images that people had labeled as “attractive” created a spectrum that pointed straight towards the photos of young, pale women. That means any AI system trained with these photos — and there are many of them out there — now holds the same racist views as the people who labeled the photos in the first place: that white people are more attractive.
This creative new algorithm is already better than we are when it comes to finding new ways to detect human biases in other algorithms so engineers can go in and remove them.
So while it can’t replace artists quite yet, Higgins’ team’s work is a pretty big step towards getting AI to imagine more like a human and less like an algorithm.
Here's a recipe for freaking out Twitter: Borrow a video of a realistic humanoid robot strolling up a driveway. Post it on Twitter. Wait for world famous mentalist Derren Brown to retweet it. Gather nearly 5 million video views. Enjoy the comment fallout as people question whether it's real.
Brown's retweet of a short robot video on Saturday helped spread the footage across Twitter, whose users described it as "scary," "creepy," "terrifying" and "my worst nightmare." It helped that Brown wrote, "WE ARE ALL GOING TO DIE" in his tweet caption.
Director Neill Blomkamp, whose Oats Studios works feature the CGI robot Adam, didn't have anything to do with the driveway video, though he chimed in on Brown's Twitter thread, writing, "Props to the artist who implemented Adam into live action footage."
The artist behind the video appears to be 3D artist Maxim Sullivan, who originally posted the video to Twitter on Aug. 12 with the message, "Glitchy test of Adam from the @oatsstudios Unity film, going for a walk."
Sullivan answered some questions in his own Twitter thread, saying the creation is indeed CGI. While Sullivan's original tweeted video has almost 13,000 views, the version tweeted out of context by another Twitter user (and retweeted by Brown) has almost 5 million.
While Adam isn't real, there are plenty of actual robots out there that can give you the willies. Boston Dynamics' running robot Atlas is a top candidate that should slot nicely into your robo-fear nightmares.
There are fears that tend to come up when people talk about futuristic artificial intelligence — say, one that could teach itself to learn and become more advanced than anything we humans might be able to comprehend. In the wrong hands, perhaps even on its own, such an advanced algorithm might dominate the world’s governments and militaries, impart Orwellian levels of surveillance, manipulation, and social control over societies, and perhaps even control entire battlefields of autonomous lethal weapons such as military drones.
But some artificial intelligence experts don’t think those fears are well-founded. In fact, highly-advanced artificial intelligence could be better at managing the world than humans have been. These fears themselves are the real danger, because they may hold us back from making that potential a reality.
“Maybe not achieving AI is the danger for humanity.”
As a species, explained Tomas Mikolov, an AI researcher at Facebook, humans are pretty terrible at making choices that are good for us in the long term. People have carved away rainforests and other ecosystems to harvest raw materials, unaware of (or uninterested in) how they were contributing to the slow, maybe-irreversible degradation of the planet overall.
But a sophisticated artificial intelligence system might be able to protect humanity from its own shortsightedness.
“We as humans are very bad at making predictions of what will happen in some distant timeline, maybe 20 to 30 years from now,” Mikolov added. “Maybe making AI that is much smarter than our own, in some sort of symbiotic relationship, can help us avoid some future disasters.”
Granted, Mikolov may be in the minority in thinking a superior AI entity would be benevolent. Throughout the conference, many other speakers expressed these common fears, mostly about AI used for dangerous purposes or misused by malicious human actors. And we shouldn’t laugh off or downplay those concerns.
We don’t know for sure whether it will ever be possible to create artificial general intelligence, often considered the holy grail of sophisticated AI that’s capable of doing pretty much any cognitive task humans can, maybe even doing it better.
The future of advanced artificial intelligence is promising, but it comes with a lot of ethical questions. We probably don’t know all the questions we’ll have to answer yet.
But most of the panelists at the HLAI conference agreed that we still need to decide on the rules before we need them. The time to create international agreements, ethics boards, and regulatory bodies across governments, private companies, and academia? It’s now. Putting these institutions and protocols in place would reduce the odds that a hostile government, unwitting researcher, or even a cackling mad scientist would unleash a malicious AI system or otherwise weaponize advanced algorithms. And if something nasty did get out there, then these systems would ensure we’d have ways to handle it.
With these rules and safeguards in place, we will be much more likely to usher in a future in which advanced AI systems live harmoniously with us, or perhaps even save us from ourselves.
28-08-2018
Should Evil AI Research Be Published? Five Experts Weigh In.
Testing the Russian humanoid rescue robot Fyodor, created by the Russian Foundation for Advanced Research Projects by order of the Russian Emergency Situations Ministry, at the Android Technics Scientific Production Association in Magnitogorsk, Russia, on December 8, 2016. The robot can be remotely controlled by a person in a special suit or work autonomously, performing voice commands. Donat Sorokin/TASS
A rhetorical question for you. Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.
That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.
So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?
Yes, this is a rhetorical question — for now. But some top names in the world of AI are already thinking about their answers. On Friday, speakers at the “AI Race and Societal Impacts” panel of The Joint Multi-Conference on Human-Level Artificial Intelligence in Prague gave their best responses after the question was posed by an audience member.
Here’s how five panelists, all experts on the future of AI, responded.
Hava Siegelmann, a program manager at DARPA, urged the hypothetical scientist to publish their work immediately. Siegelmann had earlier told Futurism that she believes there is no evil technology, but there are people who would misuse it. If that AGI algorithm were shared with the world, people might be able to find ways to use it for good.
But after Siegelmann answered, the audience member who posed the hypothetical question clarified that, for the purposes of the thought experiment, we should assume that no good could ever possibly come from the AGI.
Irakli Beridze, Senior Strategy and Policy Advisor at UNICRI, United Nations
Easy one: “Don’t publish it!”
Beridze otherwise stayed out of the fray for this specific question, but throughout the conference he highlighted the importance of setting up strong ethical benchmarks on how to develop and deploy AGI. Apparently, deliberately releasing an evil super-intelligent entity into the world would go against those standards.
The futurist Alexey Turchin believes there are responsible ways to handle such an AI system. Think about a grenade, he said — one should not hand it to a small child, but maybe a trained soldier could be trusted with it.
But Turchin’s example is more revealing than it may initially appear. A hand grenade is a weapon created explicitly to cause death and destruction no matter who pulls the pin, so it’s difficult to imagine a so-called responsible way to use one. It’s not clear whether Turchin intended his example to be interpreted this way, but he urged the AI community to make sure dangerous algorithms were left only in the most trustworthy hands.
Tak Lo, a partner at Zeroth.ai, an accelerator that invests in AI startups
Lo said the hypothetical computer scientist should sell the evil AGI to him. That way, the scientist wouldn't have to carry the ethical burden of such a powerful and scary AI — they could just pass it to Lo, and he would take it from there. Lo was likely (at least half-)kidding, and the audience laughed. Earlier that day, Lo had said that private capital and investors should be used to push AI forward, and he may have been poking fun at his own obviously capitalistic stance. Still, someone out there would absolutely try to buy such an AGI system, should it arrive.
But what Lo suggests, in jest or not, is one of the most likely results, should this actually come to pass. While hobbyists can develop truly valuable and innovative algorithms, much of the top talent in the AI field is scooped up by large companies who then own the products of their labor. The other likely scenario is that the scientist would publish their paper on an open-access preprint server like arXiv to help promote transparency.
Seán Ó hÉigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk
Ó hÉigeartaigh agreed with Beridze: you shouldn’t publish it. “You don’t just share that with the world! You have to think about the kind of impact you will have,” he said.
And with that, the panel ended. Everyone went on their merry way, content that this evil AGI was safe in the realm of the hypothetical.
In the “real world,” though, ethics often end up taking a back seat to more earthly concerns like money and prestige. Companies like Facebook, Google, and Amazon regularly publish facial recognition or other surveillance systems, often selling them to police or the military, which use them to monitor everyday people. Academic scientists are trapped in the “publish or perish” cycle — publish a study, or risk losing your position. So ethical concerns are often relegated to a paper’s conclusion, as a factor for someone else to sort out at some vague point in the future.
For now, though, it’s unlikely that anyone will come up with AGI — much less evil AGI — anytime soon. But the panelists’ wide-ranging answers mean that we are still far from sorting out what should be done with unethical, dangerous science.
If you met this lab-created critter over your beach vacation, you'd swear you saw a baby ray. In fact, the tiny, flexible swimmer is the product of a team of diverse scientists. They have built the most successful artificial animal yet. This disruptive technology opens the door much wider for lifelike robots and artificial intelligence.
Like most disruption, it started with a simple idea. Kevin Kit Parker, PhD, a Harvard professor researching how to build a human heart, saw his daughter entranced by watching stingrays at the New England Aquarium in Boston. He wondered if he could engineer a muscle that could move in the same sinuous, undulating fashion. The quest for a material led to the creation of an artificial ray with a 3-D-printed rubber body at the School of Engineering and Applied Sciences at Harvard. Scientists from the University of Illinois at Urbana-Champaign, the University of Michigan, and Stanford University's Medical Center joined the team.
They reinforced the soft rubber body with a 3-D-printed gold skeleton so thin it functions like cartilage. Geneticists adapted rat heart cells so they could respond to light by contracting. Then the cells were grown in a carefully arranged pattern on the rubber and around the gold skeleton.
The muscular circuitry is one of the most interesting parts of the research.
The birth of biohybrid beings
The new engineered animal responds to light so well that scientists were able to guide it through an obstacle course 15 times its length using strong and weak light pulses.
The study authors write, "Our ray outperformed existing locomotive biohybrid systems in terms of speed, distance traveled, and durability (six days), demonstrating the potential of self-propelled, phototactically activated tissue-engineered robots."
What biohybrids mean for robots and artificial intelligence
Science of this type is fundamental for engineering special-purpose creations such as artificial worms that sniff out and eat cancer, or bionic body parts for those who have suffered accidents or disease. Imagine having little swimmers in your system that rush to the site of a medical emergency such as a stroke. The promise of sensor-rich soft tissue frees robots to move more easily and yet not be cut off from needed input. Sensitized robot soft tissue could perform without the energy-sucking heaviness of metal or the artificial barrier of hard-plastic exoskeletons.
Thanks to disruptive, cross-disciplinary applied science like this, entrepreneurs in the next few years will be able to play on the border of what life is, what alive means, and what life can be. Expect to see companies use biohybrid beings to commercialize applications that solve some of the largest, and most lucrative, challenges we face today.
When Microsoft co-founder Paul Allen’s Stratolaunch aircraft finally leaves the ground, it’ll become the aircraft with the largest wingspan in history to do so. That maiden voyage could take place in the next few months, before the end of 2018. And now we know what the massive craft will carry once it’s fully operational.
On Monday, Stratolaunch Systems announced some details about its four launch vehicles that will (if everything goes according to plan) carry satellites into orbit. The Stratolaunch aircraft will ferry these vehicles high into the sky and then drop them, at which point they’ll launch and shoot into orbit to deliver their payloads.
This method of launching a rocket in mid-air, known as air launch to orbit, means you can launch something into space from pretty much anywhere, isn’t as dependent on the weather, and requires less rocket fuel (read: cheaper) than a standard rocket launch. However, the craft that carry today’s launch vehicles to altitude aren’t large enough to handle payloads as heavy as those the Stratolaunch will be able to support.
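The fuel saving can be sketched with the Tsiolkovsky rocket equation. The numbers below are illustrative round figures we chose for the sketch, not Stratolaunch's own: even a modest delta-v discount from launching at altitude and speed compounds exponentially into a smaller rocket.

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky: required initial/final mass ratio m0/mf = exp(dv/ve)."""
    return math.exp(delta_v / exhaust_velocity)

VE = 3300.0         # m/s, typical kerosene/LOX effective exhaust velocity
DV_GROUND = 9400.0  # m/s to low Earth orbit from the ground, incl. losses
DV_AIR = 8900.0     # m/s, a rough figure after carrier altitude/speed help

print(f"ground launch m0/mf: {mass_ratio(DV_GROUND, VE):.1f}")  # ~17.3
print(f"air launch m0/mf:    {mass_ratio(DV_AIR, VE):.1f}")     # ~14.8
```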
The Stratolaunch aircraft won’t carry any of these launch vehicles on its maiden voyage later this year, but the company has already started developing them for use in the coming years. Here’s what we know about each so far:
Pegasus: This one can only carry a payload of about 815 lbs (370 kg). However, it is dependable and proven; the vehicle has already completed 35 successful launches using other carrier craft. The first launch via Stratolaunch should take place in 2020.
Medium Launch Vehicle (MLV): This one will have a much heavier payload capability than Pegasus: 7,500 lbs (3,400 kg). Its first flight should take place in 2022.
Medium Launch Vehicle – Heavy: This is a three-core version of the MLV, and it’ll be capable of delivering payloads of up to 13,200 lbs (6,000 kg). No word on when it’ll be ready for launch, as it’s still in the early development stage.
Space Plane: Stratolaunch Systems also plans to develop a fully reusable space plane. Though still in the design stage, this one should be able to handle cargo launch, cargo return, and the transportation of crew.
EASY ACCESS.
Stratolaunch Systems believes these vehicles could help democratize access to space by making it easier for others to send satellites into orbit.
“We are excited to share for the first time some details about the development of our own, proprietary Stratolaunch launch vehicles, with which we will offer a flexible launch capability unlike any other,” said Stratolaunch CEO Jean Floyd in a statement. “Whatever the payload, whatever the orbit, getting your satellite into space will soon be as easy as booking an airline flight.”
12-08-2018
Engineers in the U.K. unveil the world’s first graphene-skinned airplane
Graphene, the versatile miracle material that can be used for everything from creating better speakers for hearing aids to body armor that’s stronger than diamond, has another application to add to its résumé. In the U.K., engineers at the University of Central Lancashire (UCLan) recently unveiled the world’s first graphene-skinned plane at the “Futures Day” event at Farnborough Air Show 2018. Called Juno, the 11.5-foot-wide unmanned airplane also boasts graphene batteries and 3D-printed parts. The combination adds up to a pretty darn impressive whole.
“This project is a genuine world’s first,” Billy Beggs, UCLan’s Engineering Innovation manager, told Digital Trends. “It represents the latest stage of an ongoing collaborative program between academia and industry to build on innovative research, and exploit graphene applications in aerospace. We are establishing a lead in the industrial application of graphene.”
While the 3D-printed elements and graphene batteries are certainly exciting, the graphene-skinned wings are the most promising part of the project. Specifically, it is hoped that the use of graphene can help reduce the overall weight of the aircraft to increase its range and potential payload. This is made possible because the graphene carbon used in Juno is around 17 percent lighter than standard carbon fiber. Other properties of the graphene can help it counter the effects of potentially dangerous lightning strikes, due to its extreme conductivity, and protect the aircraft against ice buildup during flight.
Working with UCLan on the project are the Sheffield Advanced Manufacturing Research Center, the University of Manchester’s National Graphene Institute, Haydale Graphene Industries, and assorted other businesses and research institutes.
“The U.K. Industrial Strategy highlights graphene as an example of a scientific discovery that needs to translate into industrial applications,” Peter Thomas, head of Innovation and Partnership at UCLan, told us. “Post-Brexit, the U.K. needs to continue to develop competitive advantage in aerospace through innovation.”
With Juno having made its stunning public demonstration, the next phase of the operation will include further tests to be carried out over the next two months. Should all go according to plan, airplanes such as this may well turn out to be a particularly promising line of inquiry for graphene-related initiatives.
11-08-2018
Scientists Want To Put These Spider-Like Microbots Under Your Skin
If anyone suggested having micro spider-like robots living under the skin, people might think them insane. However, people may actually come to want them there, as the robots might be able to help fix the body when it begins to stop working as it should.
The robots in question are spider-like microbots that researchers are developing, which might one day be able to crawl around the human body. The robots are soft, squishy and flexible, and they look a lot like spiders. While they are not ready to go around mending human bodies just yet, future versions might be able to undertake tasks that humans would not be able to achieve.
A team of roboticists working at Harvard University's Wyss Institute for Biologically Inspired Engineering, Boston University, and the Harvard John A. Paulson School of Engineering and Applied Sciences has created such robots.
New Fabrication Process Means Robots Can Be Just Millimeters in Size
The team got together to create microrobots based on a new fabrication process that allows them to make machines millimeters in scale with micrometer-scale features. This is not the first time robots of this size have been created; however, past robots of equal size have not been as dynamic. To show off the process, the team made a transparent spider bot based on the Australian peacock spider.
Tommaso Ranzani, an assistant professor at Boston University, said:
"The idea of designing along with fabricating a soft robot inspired by the peacock spider comes from the fact that this small insect embodies a large number of unsolved challenges in soft robotics.” He went on to say, “Indeed it is less than a centimeter wide, has features down to the micron scale, a well-defined three-dimensional structure, and a large number of independently controllable degrees of freedom in only a couple of centimeters width. In addition, it is characterized by beautiful color patterns. We saw here an opportunity to advance the manufacturing capabilities in small-scale soft robotics and to demonstrate the capabilities of our process."
The team's fabrication approach is called "MORPH," short for Microfluidic Origami for Reconfigurable Pneumatic/Hydraulic. To make the robot, the researchers combined 12 layers of elastic silicone to form the legs, abdomen, and torso of the spider. They then used processes including laser micromachining to ensure the measurements were precise.
The Process Leads to Robots That Mimic Real-Life Spiders
The resulting micro-spider can flex its joints and move its legs; it can even raise its abdomen just as the real peacock spider does. It works by injecting microfluids into hollow channels that run from the spider's abdomen down into its legs.
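To make that actuation idea concrete, here is a minimal toy model. All names and values are hypothetical, and this is not the Wyss/BU control code: the sketch simply assumes each joint bends in proportion to the pressure injected into its channel, so a leg pose becomes a vector of channel pressures.

```python
from dataclasses import dataclass

# Toy model of MORPH-style fluidic actuation (hypothetical values, not the
# Wyss/BU code): pressurizing a channel bends the joint it runs through.
@dataclass
class FluidicJoint:
    gain_deg_per_kpa: float   # assumed linear pressure-to-bend response
    max_angle_deg: float      # assumed elastomer limit for the channel

    def angle(self, pressure_kpa: float) -> float:
        return min(pressure_kpa * self.gain_deg_per_kpa, self.max_angle_deg)

# One spider leg modeled as two joints fed by separate channels.
hip = FluidicJoint(gain_deg_per_kpa=0.8, max_angle_deg=60.0)
knee = FluidicJoint(gain_deg_per_kpa=1.2, max_angle_deg=90.0)

for p_kpa in (0, 25, 50, 100):   # injected pressures
    print(f"{p_kpa:3d} kPa -> hip {hip.angle(p_kpa):5.1f} deg, "
          f"knee {knee.angle(p_kpa):5.1f} deg")
```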
The team believes the manufacturing process behind the robotic spiders could one day lead to microbots with soft, dynamic bodies able to perform extremely delicate medical tasks inside the human body. The robots might also take on search-and-rescue missions that are too dangerous for humans.
Transportation is about to get a technology-driven reboot. Recently, Akka Technologies, an innovative engineering and consulting company based in France, unveiled its mind-blowing Link & Fly aircraft design.
The new vehicle is part train, part plane: a passenger pod with detachable wings that can take to the air and can also travel on the ground via tracks. Akka's Link & Fly concept craft will be 33.8 meters long and 8.2 meters high, with a 48.8-meter wingspan.
Akka’s chief executive officer, Maurice Ricci, said, “After cars go electric and autonomous, the next big disruption will be in airplanes.”
With Akka's futuristic concept, when you need to fly, you will board a tube-shaped passenger train that brings you straight to the airport. There, the passenger pod rolls onto the runway and attaches to the wings, which sit waiting with the engines mounted on top.
Upon landing, the plane detaches from its wings and turns back into a train, which rolls on tracks to local train stations.
The craft is planned to have a maximum cruise altitude of 39,800 feet, a range of 2,200 kilometers, and a cruise speed of Mach 0.78 (around 600 miles per hour).
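As a quick sanity check on that quoted speed (a simple standard-atmosphere calculation, not an Akka figure): Mach 0.78 corresponds to roughly 515 mph in the cold air at the stated cruise altitude, while the "around 600 miles per hour" conversion matches Mach 0.78 in sea-level air.

```python
import math

GAMMA, R_AIR = 1.4, 287.05   # ratio of specific heats; gas constant, J/(kg*K)

def mach_to_mph(mach: float, temp_k: float) -> float:
    """True airspeed in mph for a given Mach number and air temperature."""
    speed_of_sound_ms = math.sqrt(GAMMA * R_AIR * temp_k)
    return mach * speed_of_sound_ms * 2.23694   # m/s -> mph

# ISA temperature is ~216.65 K above the tropopause, which includes 39,800 ft.
print(f"Mach 0.78 at cruise altitude: {mach_to_mph(0.78, 216.65):.0f} mph")
print(f"Mach 0.78 at sea level (15 C): {mach_to_mph(0.78, 288.15):.0f} mph")
```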
26-07-2018
By 2050, We Will Attend Our Own Funerals... As Robots
Robotics and virtual reality are said to be the way of the future.
This is good news for anyone who has ever wondered how many people would come to their funeral: it has been claimed that by 2050, humans will be able to attend their own funerals in robot bodies.
"If you're under 40 reading this article, you're probably not going to die unless you get a nasty disease"
Dr. Ian Pearson made the astonishing claim that by 2050 humans will be able to have their minds stored in a computer, making physical bodies redundant. When the body dies, the mind would live on inside a robot.
People Might Live Forever in Realistic Robotic Bodies
In the future, a person's mind could be copied and placed inside a robot with a realistic-looking face. Pearson also believes that the brain will eventually be connected to a computer, so that rather than being a mere copy, the digital mind would be an extension of the person's own.
Pearson says that when the human body dies, the brain stops working, but about 99 percent of the mind would be fine and quite happy to continue running in the cloud, just as data does now. If a person has saved enough and prepared well, that mind could be connected to an android, with the robot becoming the person's new body.
This essentially means that a person never dies; only the body does. It also means the person could attend his or her own funeral, say goodbye to the old body, and then continue life as before in the robotic one.
Super-realistic silicone faces and bodies, some with robotics inside, are already sold in the sex-doll market. Pearson said the person would be the same inside, with the same thoughts, feelings, and personality, but the body would be upgraded and would never get old. He said, "Still you, just with a younger, highly upgraded body."
Humans Might Live in the Real World in Robotic Bodies or in Virtual Reality
This is not the first time Dr. Pearson has made such an outlandish claim. In the past, he has proposed numerous ways of beating death, including renewing body parts through genetic engineering, putting human minds into computers and robots, and transferring a human brain into a computer so the person can live in a virtual world.
Pearson did say that not everyone would be lucky enough to live forever inside a robot: "Some people may need to wait until 2060 or later until the android price falls enough for them to afford one." This means that, at first, only the super-rich would be able to live on as robots.
Companies Might Turn Robotic Humans into Slaves
There is also the issue of where the minds are stored: the servers would be in the hands of companies such as Facebook, Apple, or Google. Pearson said, "The small print may give them some rights over replication, ownership, and license to your idea, who knows what?
"So although future electronic immortality has the advantage of offering a pretty attractive version of immortality at first glance, a closer reading of the 100 page T&Cs may well reveal some nasties.
"You may in fact no longer own your mind. Oh dear!"
There are worries that companies could copy people's minds and then sell them. A person's mind might also be put to work as a "worker mind," essentially turning the person into a slave, so it is not all good news.
Yet another great science fiction plot is about to become reality. A renowned futurologist has laid out how robots will acquire human minds and then attend the funerals of those minds' former bodies... by 2050. That means we're a mere three decades away from immortality through robotics... well, those who can afford it are. Should you wait to see how the novel ends before saving your money?
In a post on his Futurizon blog, Dr. Ian Pearson – futurologist, author, lecturer, and creator of 1,800+ inventions including text messaging and the active contact lens – gives his own somewhat dystopian view of the inevitable road to human androidization and beyond, which he sees occurring by 2050. He begins by predicting that we will arrive at the point where "99% of your mind is running on external IT rather than in the meat-ware in your head." With your mind not in the clouds but in the 'cloud', the next step will then be easy:
“Assuming you saved enough and prepared well, you connect to an android to use as your body from now on, attend your funeral, and then carry on as before, still you, just with a younger, highly upgraded body.”
To brace readers for the big prediction/warning/plot twist, Pearson first warns that this will be expensive – while servers are cheap, robots are not.
“Some people may need to wait until 2060 or later until android price falls enough for them to afford one.”
Wait? Humans? Really? There must be a way to get our robotic containers for our human minds right away and at a reasonable price, right Dr. Pearson?
“Maybe your continued existence is paid for as part of an extended company medical plan.”
That’s one alternative Pearson proposes, assuming medical insurance is still around in 2050. However (there’s always a ‘however’), letting someone else own or lease your robotic body may allow them (read the fine print) to own, lease or at least access your mind, using it for good (allowing it to continue to write funny stuff for your readers) or bad (cloning an evil mind into millions of copies or adding it to a collective that will someday take over the world).
It gets worse. What if the medical plan goes bankrupt paying for replacement android bodies for millions of its immortal or vain clients? Perhaps you saw this coming and set up a fund for your children, grandchildren and beyond to pay for your upgrades and keep your robotic body in good working order. If they didn’t always listen to you when you were human, will they listen to your android self?
“After all, they know you know they have kids or grand-kids in school that need paid for, and homes don’t come cheap, and they really need that new kitchen.”
Grandpa, would you deprive your great-grandchildren of a new iPhone 500 UltraPlus just so you can have a titanium cranium?
But I “need” a titanium cranium!
OK, maybe letting a corporation specifically designed for the purpose of managing and upgrading cyborgs in return for using enough of your mind to pay for it isn’t such a bad idea. After all, businesses always have your best interests in mind, right?
“[Then] you could stay immortal, unable to die, stuck forever as just a corporate asset, a mere slave.”
Is this inevitable? Dr. Pearson claims to have an 85 percent accuracy record when looking 10 to 15 years into the future. Should we trust his predictions/warnings for 30?
Do we already know how this novel will end? Will it be worth attending your own funeral?
20-07-2018
Why this blind, catlike robot could transform search and rescue
No vision, no problem.
by Sarah Cahlan
MIT's Cheetah 3 robot can climb stairs and step over obstacles without the help of cameras or visual sensors.
Image: Massachusetts Institute of Technology
Scientists at MIT have created a four-legged robot that can climb debris-ridden stairs and leap almost three feet into the air, but the ominous-looking catlike bot — dubbed "Cheetah 3" — is intended not to hasten the robot apocalypse but to help bring about a new generation of first-responder robots.
As seen in a video released by the university, the 90-pound, retriever-sized robot navigates with touch sensors rather than cameras — a bit like the way humans feel their way when it's too dark to see.
“Cheetah 3 is designed to do versatile tasks such as power plant inspection, which involves various terrain conditions including stairs, curbs and obstacles on the ground,” Sangbae Kim, an associate professor of mechanical engineering at MIT and one of the robot's developers, said in a statement.
Kim plans to give Cheetah sight, but for early tests he wanted to keep the robot in the dark. "In order to be as agile as animals, including humans, we need to have a great blind controller first before relying on vision," he told NBC News MACH in an email.
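A "blind controller" has to decide from touch alone whether a foot has landed. The sketch below is a simplified illustration of that idea, blending what the gait schedule expects with what the leg actually feels; the weights and thresholds are made up for illustration and this is not MIT's actual controller.

```python
# Simplified illustration of blind contact detection (assumed weights and
# thresholds, not MIT's controller): fuse where the gait schedule *expects*
# the foot to be with what the leg proprioceptively *feels*.
def contact_probability(gait_phase: float,
                        joint_torque_nm: float,
                        foot_accel_spike_g: float) -> float:
    """Blend schedule expectation with touch evidence; returns 0..1."""
    expected = 1.0 if gait_phase >= 0.5 else 0.0        # stance half of cycle
    felt = min(joint_torque_nm / 30.0, 1.0)             # load-bearing torque
    impact = min(foot_accel_spike_g / 5.0, 1.0)         # touchdown jolt
    return 0.4 * expected + 0.4 * felt + 0.2 * impact   # assumed weights

# A swing-phase foot unexpectedly hits a stair edge: torque and impact
# override the schedule, so the controller treats the foot as grounded early.
p = contact_probability(gait_phase=0.3, joint_torque_nm=28.0,
                        foot_accel_spike_g=6.0)
print(f"contact probability: {p:.2f}")   # ~0.57 despite the schedule saying swing
```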
Robin Murphy, a professor of computer science and engineering at Texas A&M University, sees big potential for search-and-rescue robots that maneuver with touch technology. Such bots could navigate in areas shrouded in darkness or obscured by airborne dust, said Murphy, who is not involved in the Cheetah project.
“It would be so great when that technology that they're showing matures and could be added to the robots that are the size of a shoebox,” she said of the MIT researchers' work. Small bots, of course, are able to get inside nooks and crannies too confined for humans — and relay information that human rescuers can then use to extricate victims of building collapses, for example.
“If you just start excavating, you could possibly trigger a secondary collapse that would kill the survivor or another survivor that you haven't found yet,” she said.
Search-and-rescue robots aren't new, but Cheetah 3 is one of many new bots now in development. Last fall, Honda unveiled a five-foot-tall robot that can rotate its torso 180 degrees in order to climb steep stairs. Last February, the Italian Institute of Technology released a video showing its WALK-MAN humanoid bot wielding a fire extinguisher.
Next year, Kim and his team plan to equip Cheetah 3 with robotic arms that can be controlled by a human operator. They aim to have a commercial version of their bot ready in five years.
Cancer survival rates could be greatly improved if scientists are successful in developing microscopic medical weapons that obliterate cancerous cells.
Nanomachines may be tiny – 50,000 of them would fit across the diameter of a human hair – but they have the potential to pack a mighty punch in the fight against cancer.
A graphic showing the tiny nanomachine
Image: Tour Group/Rice University
Researchers at Durham University in the UK have used nanobots to drill into cancer cells, killing them in just 60 seconds.
They are now experimenting on micro-organisms and small fish before moving on to rodents. Clinical trials in humans are expected to follow, and it is hoped the results could save millions of lives.
The number of cancer cases is predicted to rise by 2035
Image: World Health Organization (WHO) GloboCan, BBC
The mechanics of nanobots
These minute molecules have components that enable them to identify and attach themselves to a cancer cell.
When activated by light, the nanobots' rotor-like chain of atoms begins to spin at an incredible rate – around two to three million rotations per second. This causes the nanobot to drill into the cancer cell, blasting it open.
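To put that spin rate in more familiar units, here is the simple arithmetic on the figures quoted above:

```python
# Converting the quoted spin rate (figures from the article) into RPM.
for rev_per_s in (2e6, 3e6):
    print(f"{rev_per_s:.0e} rotations/s = {rev_per_s * 60:,.0f} RPM "
          f"(one rotation every {1e9 / rev_per_s:.0f} nanoseconds)")
```

At 120 to 180 million RPM, each rotation takes only a few hundred nanoseconds, which is what lets the rotor act like a drill rather than a stirrer.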
The study is still in its early stages, but researchers are optimistic it has the potential to lead to new types of cancer treatment.
Dr Robert Pal, of Durham University, said: “Once developed, this approach could provide a potential step change in noninvasive cancer treatment and greatly improve survival rates and patient welfare globally.”
The spinning nanobots burrow into cancer cells to destroy them
Image: Tour Group/Rice University
Nanobots in our veins
The destructive properties of the nanobots make them perfect for killing cancer cells. But the technology can also be used to repair damaged or diseased tissues at a molecular level.
In the future, these nanomachines could essentially patrol the circulatory system of the human body. They could be used to detect specific chemicals or toxins and give early warnings of organ failure or tissue rejection.
Another potential function may involve taking biometric measurements to monitor a person’s general health.
A computer-generated image of a nanobot
Image: Tour Group/Rice University
Searching for oil
The medicinal advantages of nanobots are clear to see, but industry might also benefit from the technology.
Oil and gas is one example. The idea is that nanobots could be injected into geologic formations thousands of feet underground; changes to the chemical makeup of the machines would then point to the location of reservoirs.