The purpose of this blog is the creation of an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find the articles of a colleague by selecting his category. Each author remains responsible for the content of his articles. As blogmaster I reserve the right to refuse an addition or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife, Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
An interesting address?
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority on UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also check out the Belgisch UFO-meldpunt and Caelestia, two organisations that conduct in-depth research, even though they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the excellent website www.ufowijzer.nl, run by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, the MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter is not only the former president of BUFON but also the former national director of MUFON in Flanders and the Netherlands. This creates a strong partnership with the French MUFON network, Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Please Note: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Up to Date!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, just like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then don't hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
13-08-2016
Biohybrid Robots Built From Living Tissue Start to Take Shape
Think of a traditional robot and you probably imagine something made from metal and plastic. Such "nuts-and-bolts" robots are made of hard materials. As robots take on more roles beyond the lab, such rigid systems can present safety risks to the people they interact with. For example, if an industrial robot swings into a person, there is the risk of bruises or bone damage.
Researchers are increasingly looking for solutions to make robots softer or more compliant — less like rigid machines, more like animals. With traditional actuators — such as motors — this can mean using air muscles or adding springs in parallel with motors. For example, on a Whegs robot, having a spring between a motor and the wheel leg (Wheg) means that if the robot runs into something (like a person), the spring absorbs some of the energy so the person isn't hurt. The bumper on a Roomba vacuuming robot is another example; it's spring-loaded so the Roomba doesn't damage the things it bumps into.
But there's a growing area of research that's taking a different approach. By combining robotics with tissue engineering, we're starting to build robots powered by living muscle tissue or cells. These devices can be stimulated electrically or with light to make the cells contract to bend their skeletons, causing the robot to swim or crawl. The resulting biobots can move around and are soft like animals. They're safer around people and typically less harmful to the environment they work in than a traditional robot might be. And since, like animals, they need nutrients to power their muscles, not batteries, biohybrid robots tend to be lighter too.
Tissue-engineered biobots on titanium molds.
Credit: Karaghen Hudson and Sung-Jin Park, CC BY-ND
Building a biobot
Researchers fabricate biobots by growing living cells, usually from heart or skeletal muscle of rats or chickens, on scaffolds that are nontoxic to the cells. If the substrate is a polymer, the device created is a biohybrid robot — a hybrid between natural and human-made materials.
If you just place cells on a molded skeleton without any guidance, they wind up in random orientations. That means when researchers apply electricity to make them move, the cells' contraction forces will be applied in all directions, making the device inefficient at best.
So to better harness the cells' power, researchers turn to micropatterning. We stamp or print microscale lines on the skeleton made of substances that the cells prefer to attach to. These lines guide the cells so that as they grow, they align along the printed pattern. With the cells all lined up, researchers can direct how their contraction force is applied to the substrate. So rather than just a mess of firing cells, they can all work in unison to move a leg or fin of the device.
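To get a feel for why alignment matters, here is a minimal numerical sketch (an illustration with made-up numbers, not from the original study): it compares the net force of many cells contracting in random directions with the same cells contracting along a single micropatterned axis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000          # number of contracting cells (illustrative)
f_cell = 1.0            # force per cell, arbitrary units

# Random orientations: each cell pulls in an arbitrary direction in the plane.
angles = rng.uniform(0.0, 2.0 * np.pi, n_cells)
net_random = np.hypot(np.sum(f_cell * np.cos(angles)),
                      np.sum(f_cell * np.sin(angles)))

# Micropatterned: all cells pull along the same axis, so forces simply add.
net_aligned = n_cells * f_cell

print(f"net force, random orientations : {net_random:7.1f}")
print(f"net force, aligned cells       : {net_aligned:7.1f}")
# Randomly oriented cells largely cancel out (net force grows like sqrt(n)),
# while aligned cells add coherently (net force grows like n).
```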
Tissue-engineered soft robotic ray that's controlled with light.
Credit: Karaghen Hudson and Michael Rosnach, CC BY-ND
Biohybrid robots inspired by animals
Beyond a wide array of biohybrid robots, researchers have even created some completely organic robots using natural materials, like the collagen in skin, rather than polymers for the body of the device. Some can crawl or swim when stimulated by an electric field. Some take inspiration from medical tissue engineering techniques and use long rectangular arms (or cantilevers) to pull themselves forward.
Others have taken their cues from nature, creating biologically inspired biohybrids. For example, a group led by researchers at California Institute of Technology developed a biohybrid robot inspired by jellyfish. This device, which they call a medusoid, has arms arranged in a circle. Each arm is micropatterned with protein lines so that cells grow in patterns similar to the muscles in a living jellyfish. When the cells contract, the arms bend inwards, propelling the biohybrid robot forward in nutrient-rich liquid.
More recently, researchers have demonstrated how to steer their biohybrid creations. A group at Harvard used genetically modified heart cells to make a biologically inspired manta ray-shaped robot swim. The heart cells were altered to contract in response to specific frequencies of light — one side of the ray had cells that would respond to one frequency, the other side's cells responded to another.
When the researchers shone light on the front of the robot, the cells there contracted and sent electrical signals to the cells further along the manta ray's body. The contraction would propagate down the robot's body, moving the device forward. The researchers could make the robot turn to the right or left by varying the frequency of the light they used. If they shone more light of the frequency the cells on one side would respond to, the contractions on that side of the manta ray would be stronger, allowing the researchers to steer the robot's movement.
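As a rough way to see how differential stimulation produces steering, here is a toy kinematic sketch (assumed gains and light levels, not the Harvard team's actual control scheme): each side's thrust scales with the light it receives, the sum drives the robot forward, and the difference turns it.

```python
import math

def step(x, y, heading, left_light, right_light, dt=0.1,
         gain_fwd=1.0, gain_turn=0.5):
    """Toy kinematics: each side's thrust scales with the light it receives.
    The sum drives the ray forward; the difference turns it."""
    thrust_left = gain_fwd * left_light
    thrust_right = gain_fwd * right_light
    v = 0.5 * (thrust_left + thrust_right)            # forward speed
    omega = gain_turn * (thrust_right - thrust_left)  # turn rate
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

# More light on the right side -> stronger right-side contractions -> curve to the left.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, heading = step(x, y, heading, left_light=0.4, right_light=1.0)
print(f"position=({x:.2f}, {y:.2f}), heading={math.degrees(heading):.1f} deg")
```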
Toughening up the biobots
While exciting developments have been made in the field of biohybrid robotics, there's still significant work to be done to get the devices out of the lab. Devices currently have limited lifespans and low force outputs, limiting their speed and ability to complete tasks. Robots made from mammalian or avian cells are very picky about their environmental conditions. For example, the ambient temperature must be near biological body temperature and the cells require regular feeding with nutrient-rich liquid. One possible remedy is to package the devices so that the muscle is protected from the external environment and constantly bathed in nutrients.
Another option is to use more robust cells as actuators. Here at Case Western Reserve University, we've recently begun to investigate this possibility by turning to the hardy marine sea slug Aplysia californica. Since A. californica lives in the intertidal region, it can experience big changes in temperature and environmental salinity over the course of a day. When the tide goes out, the sea slugs can get trapped in tide pools. As the sun beats down, water can evaporate and the temperature will rise. Conversely in the event of rain, the saltiness of the surrounding water can decrease. When the tide eventually comes in, the sea slugs are freed from the tidal pools. Sea slugs have evolved very hardy cells to endure this changeable habitat.
Sea turtle-inspired biohybrid robot, powered by muscle from the sea slug.
We've been able to use Aplysia tissue to actuate a biohybrid robot, suggesting that we can manufacture tougher biobots using these resilient tissues. The devices are large enough to carry a small payload — approximately 1.5 inches long and one inch wide.
A further challenge in developing biobots is that currently the devices lack any sort of on-board control system. Instead, engineers control them via external electrical fields or light. In order to develop completely autonomous biohybrid devices, we'll need controllers that interface directly with the muscle and provide sensory inputs to the biohybrid robot itself. One possibility is to use neurons or clusters of neurons called ganglia as organic controllers.
That's another reason we're excited about using Aplysia in our lab. This sea slug has been a model system for neurobiology research for decades. A great deal is already known about the relationships between its neural system and its muscles — opening the possibility that we could use its neurons as organic controllers that could tell the robot which way to move and help it perform tasks, such as finding toxins or following a light.
While the field is still in its infancy, researchers envision many intriguing applications for biohybrid robots. For example, our tiny devices using slug tissue could be released as swarms into water supplies or the ocean to seek out toxins or leaking pipes. Due to the biocompatibility of the devices, if they break down or are eaten by wildlife these environmental sensors theoretically wouldn't pose the same threat to the environment traditional nuts-and-bolts robots would.
One day, devices could be fabricated from human cells and used for medical applications. Biobots could provide targeted drug delivery, clean up clots or serve as compliant actuatable stents. By using organic substrates rather than polymers, such stents could be used to strengthen weak blood vessels to prevent aneurysms — and over time the device would be remodeled and integrated into the body. Beyond the small-scale biohybrid robots currently being developed, ongoing research in tissue engineering, such as attempts to grow vascular systems, may open the possibility of growing large-scale robots actuated by muscle.
This article was originally published on The Conversation. Read the original article. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google+. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
04-08-2016
New Tech Lets You Watch 3D Movies Without the Funky Glasses
By Charles Q. Choi, Live Science Contributor
A new prototype display could enable people to watch 3D movies from any seat in the theater, without having to wear 3D glasses.
Credit: Christine Daniloff/MIT
Someday, moviegoers may be able to watch 3D films from any seat in a theater without having to wear 3D glasses, thanks to a new kind of movie screen.
The new technology, named Cinema 3D, overcomes some of the barriers to implementing glasses-free 3D viewing on a larger scale, but it's not commercially viable yet, the researchers said when describing their findings.
Although 3D movies can offer unique perspectives and experiences, one major drawback is the cumbersome eyewear that moviegoers typically have to wear. Although glasses-free 3D strategies already exist, these technologies currently cannot be scaled up to movie theaters. [10 Technologies That Will Transform Your Life]
For example, glasses-free 3D methods for TV sets often use a series of slits known as a parallax barrier that is placed in front of the screen. These slits allow each eye to see a different set of pixels, creating the illusion of depth.
However, for parallax barriers to work, they must be placed at a set distance from viewers. This makes parallax barriers difficult to implement in larger spaces such as theaters, where people can sit at a variety of distances and angles from the screen.
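The constraint can be made concrete with the standard parallax-barrier geometry (illustrative numbers, not the Cinema 3D design): given the pixel pitch, the viewer's eye separation and the seating distance, similar triangles fix both the gap between barrier and screen and the slit pitch.

```python
# Minimal sketch of standard parallax-barrier geometry (illustrative values,
# not the Cinema 3D design): a slit must send adjacent left/right pixels to
# the viewer's two eyes, which fixes the barrier gap and the slit pitch.

pixel_pitch = 0.25e-3   # metres, spacing between a left-eye and a right-eye pixel
eye_sep     = 65e-3     # metres, average interocular distance
view_dist   = 0.60      # metres, assumed distance from barrier to the viewer

# Similar triangles through one slit: pixel_pitch / gap = eye_sep / view_dist
gap = pixel_pitch * view_dist / eye_sep

# Adjacent slits serve pixel pairs 2*pixel_pitch apart, scaled toward the viewer.
slit_pitch = 2 * pixel_pitch * view_dist / (view_dist + gap)

print(f"barrier gap : {gap*1e3:.2f} mm")
print(f"slit pitch  : {slit_pitch*1e3:.3f} mm")
# Both quantities depend on view_dist, which is why a fixed barrier only works
# well at one seating distance -- the problem Cinema 3D tries to solve.
```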
In addition, glasses-free 3D displays have to account for the different positions from which people are watching. This means that they have to divide up the limited number of pixels they project so that each viewer sees an image from wherever he or she is located, the researchers said.
"Existing approaches to glasses-free 3D require screens whose resolution requirements are so enormous that they are completely impractical," study co-author Wojciech Matusik, an associate professor of electrical engineering and computer science at MIT, said in a statement.
But in the new method, the researchers used a series of mirrors and lenses to essentially give viewers a parallax barrier tailored to each of their positions.
"By careful design of optical elements, we can achieve very-good-quality 3D content without using glasses," study co-author Piotr Didyk, a researcher at the Max Planck Institute for Informatics and Saarland University, both in Germany, told Live Science.
"This is the first technical approach that allows for glasses-free 3D on a large scale," Matusik said in a statement.
In addition, the scientists reasoned that instead of displaying images to every position in a theater, they would need to display images only to a relatively tiny set of viewing positions at each theater seat.
"In our solution, we exploit the layout of the audience in a cinema," Didyk said.
The scientists developed a simple Cinema 3D prototype that could support a 200-pixel image. In experiments, volunteers could see 3D versions of pixelated figures from a number of different seats in a small theater.
The scientists cautioned that Cinema 3D is currently impractical to implement commercially. For instance, their prototype requires 50 sets of mirrors and lenses, but the screen is just barely larger than a pad of paper. The researchers hope to build a larger version of their display and further boost the image resolution.
"It remains to be seen whether the approach is financially feasible enough to scale up to a full-blown theater," Matusik said in a statement. "But we are optimistic that this is an important next step in developing glasses-free 3D for large spaces like movie theaters and auditoriums."
The scientists detailed their findings July 26 at the SIGGRAPH computer graphics conference in Anaheim, California.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
23-07-2016
This Robot Is Part Sea Slug
By Greg Uyeno, Staff Writer
A tiny robot made of sea slug muscles and 3D-printed parts sits in a lab dish
Credit: Victoria Webster
We usually think of cyborgs as part human, part machine, but roboticists don't limit themselves that way. Researchers have developed a hybrid robot built with body parts from a novel source: sea slugs.
The new robot combines a Y-shaped muscle from the mouth of a California sea hare (Aplysia californica) with a 3D-printed skeleton.
Researchers surgically removed the so-called "I2" muscles from the mouths of sea slugs and glued them to flexible, 3D-printed plastic frames. When the muscles were subjected to an external electric field, the resulting contractions produced a deliberate clawing motion that was able to move the tiny robot up to 0.2 inches (0.5 centimeters) per minute. [The 6 Strangest Robots Ever Created]
The robot was modeled after the way sea turtles crawl, because the researchers wanted to create something that could move with only one Y-shaped muscle, study lead author Victoria Webster, a graduate student at Case Western Reserve University in Cleveland, told Live Science in an email. But, it should be possible to apply similar techniques to create more complex robots with different movement styles, such as the inchworm-inspired version that the team is working on now, she added.
With a few more developments, the scientists said, teams of robots could be deployed for tasks such as searching for toxic underwater leaks or finding an airplane's "black box" flight data recorder after it has crashed into the ocean.
And one day, the designers would also like to make entirely biological robots by replacing the plastic parts of the new hybrid bot with organic material.
"We're building a living machine — a biohybrid robot that's not completely organic — yet," Webster said in a statement.
Sea slugs live in a wide range of temperatures and conditions, so their muscles can function in myriad environments. This natural versatility is key to developing biological machines that are capable of operating in different environments.
"By using the sea hare as our material source, we have obtained materials which are more robust than the cells which have been used in the past," Webster said.
The team is now experimenting with including the ganglia, or nervous tissue, that controls the I2 muscle. "They respond to direct chemical stimulation or to stimulation of the sensory system nerves," Webster said. "By stimulating the nerves, we may be able to steer the robot in the future."
The scientists also developed a method to mold collagen gel from the sea slugs' skin into "scaffolding" for completely organic machines. These nonhybrid robots would be inexpensive, nonpolluting and biodegradable, the scientists said, enabling them to release many robots without having to worry if some of them are lost.
"Our hope is to continue developing these devices to include organic controllers, sensors and skeletons," Webster said.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
21-07-2016
Scientists Make A Microscopic Storage System That’s Only 100 Nanometers Wide
Devin Coldewey
IN BRIEF
Scientists from the Netherlands have developed a system in which a disk stores data atom by atom. The technology is not yet ready for public use, but it opens up an array of possibilities.
MICROSCOPIC HARD DRIVE?
Researchers from the Netherlands were able to create a microscopic storage system that encodes every bit with a single atom. In other words, a kilobyte can be stored in a patch roughly 100 nanometers across.
Hard drives you can buy today need hundreds or thousands of atoms to store a single bit of data, which works out to roughly 1 terabit per square inch. The new system, by contrast, can store about 500 terabits per square inch.
In a press release, Sander Otte, lead scientist at the Delft University of Technology, stated that “In theory, this storage density would allow all books ever created by humans to be written on a single postage stamp.” How cool is that?
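A quick back-of-the-envelope check of the quoted figure, assuming roughly one kilobyte stored in a 100 nanometer by 100 nanometer patch:

```python
# Back-of-the-envelope check of the quoted density, assuming roughly
# 1 kilobyte stored in a 100 nm x 100 nm patch (figures from the article).

bits = 8 * 1024                     # ~1 kB expressed in bits
area_m2 = (100e-9) ** 2             # 100 nm x 100 nm in square metres
inch_m2 = 0.0254 ** 2               # one square inch in square metres

bits_per_in2 = bits / area_m2 * inch_m2
print(f"{bits_per_in2:.2e} bits per square inch")   # ~5e14, i.e. ~500 terabits/in^2
```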
AS COOL AS LIQUID NITROGEN
“Every bit consists of two positions on a surface of copper atoms, and one chlorine atom that we can slide back and forth between these two positions,” explained Otte. Because chlorine on copper forms into a perfectly square grid, it’s easy (relatively, anyway) to position and read them. If the chlorine atom is up top, that’s a 1; if it’s at the bottom, that’s a 0. Put 8 chlorine atoms in a row and they form a byte.
Then there are a few special marks that indicate things like the end of a line or file, or that the next space should be ignored (in case of damage, for instance). Altogether the system is efficient enough that they were able to store hundreds of letters into a 96×128 nanometer space (12 rows and 12 columns, each cell holding 8 bytes). And it’s easy enough to do these manipulations that the process can be automated.
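As a toy illustration of that encoding scheme (not the Delft team's actual software), the sketch below maps each character to a row of eight two-position "atoms", with '^' standing for the top position (1) and 'v' for the bottom position (0):

```python
# Toy model of the encoding idea described above: each bit is a chlorine atom
# that sits in the "top" (1) or "bottom" (0) of a two-site cell, and eight
# cells in a row make one byte. This is an illustration, not the Delft code.

TOP, BOTTOM = "^", "v"   # hypothetical markers for the two atom positions

def byte_to_row(ch: str) -> str:
    bits = format(ord(ch), "08b")
    return " ".join(TOP if b == "1" else BOTTOM for b in bits)

def row_to_byte(row: str) -> str:
    bits = "".join("1" if mark == TOP else "0" for mark in row.split())
    return chr(int(bits, 2))

for ch in "Feynman":
    row = byte_to_row(ch)
    assert row_to_byte(row) == ch
    print(f"{ch!r}: {row}")
```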
The data the researchers chose to demonstrate this was a fragment of Feynman's lecture "There's Plenty of Room at the Bottom" — fittingly, a talk about storing data at extremely small scales.
This is strictly lab-bound, though, at least for now. The chlorine-copper array is only stable in a clean vacuum and at 77 kelvin — about the temperature of liquid nitrogen. Anything past that and heat will disrupt the organization of the atoms.
It’s early-stage research, but still promising. The idea of using individual atoms as bit storage is something many scientists have dreamed of, and the applications of such dense storage are, of course, innumerable. The study is published in the journal Nature Nanotechnology.
The AeroMobil 3.0 is the latest version of the vehicle that will allow us to drive on the road and fly through the air.
AeroMobil unveiled its latest version of its (rather remarkably) futuristic vehicle, which is designed to be both driven on the road and flown through the air. Yes, that’s right—a flying car.
The AeroMobil 3.0 is one of several new experimental prototypes that the company hopes to launch by the end of the year. The current prototypes are all two-seater vehicles, but the design could undergo changes and may soon accommodate more passengers, as the AeroMobil 3.0 is only the “first product in a series of innovative vehicles,” according to its CEO, Juraj Vaculik.
Image source: Inhabitat
The AeroMobil team says its vehicle is not a frivolous or unrealistic fabrication, and asserts that it can do far more than cut commuters’ travel time. Most notably, it could be used by first responders and law enforcement in areas with poor road infrastructure.
The company plans to commercialize the AeroMobil 3.0 car by 2017.
Researchers at the Suzumori Endo Robotics Laboratory at the Tokyo Institute of Technology have a different take on robotics in the form of a musculoskeletal robot that moves like a human.
Essentially, a fake skeleton covered in a bunch of cables, the musculoskeletal robot is powered by artificial multifilament muscles that function like real human muscles when electrical current flows through them.
Basically, the fake muscles can contract and expand similar to a real human’s movements thanks to the electrical current, even enabling the skeleton’s head to move around realistically.
At the moment, the Suzumori Endo humanoid can’t support itself, but the robot’s legs do contain the exact same number of muscles that a real human being’s legs use to walk.
The researchers hope that as technology advances and the musculoskeletal robot progresses, it will eventually be able to walk on its own and self-balance, similar to the ATLAS robot.
Food Ink, the world's first 3D-printing restaurant, will open in London at the end of July to serve futuristic meals, 3D style.
3D PRINTED FOOD
The world’s first 3D printing restaurant opens in London at the end of July – but only for three days.
Food Ink promises a food revolution on the 25th, 26th, and 27th of July with a pop-up restaurant serving 3D printed culinary creations. The restaurant promises the evening to be a “one-of-a-kind gourmet experience…where fine cuisine meets art, philosophy and tomorrow’s technologies.”
The 3D dining experience will set you back £250 or about $330 for nine courses. The whole experience will be live streamed online so people at home can watch diners chow down on 3D printed dishes.
Not only will the food all be 3D printed but the cutlery, tables, and chairs will be similarly created.
Image source: Food Ink
FOODIE FUTURISM
Top chefs Antony Dobrzensky and Marcio Barradas from the famous La Boscana restaurant will be in charge of creating the menu, and they hinted that the first course will be paired with virtual reality headsets to provide “an immersive and thrilling glimpse of the future.”
Convinced you have to take part in this futuristic dining experience? Well, you have to hurry. There will be only 10 tickets each evening, and they become available on Friday 15 July.
The pop-up restaurant will be at 8 Dray Walk, E1 6NJ. Dinner will be served from 7:30pm each evening.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
11-07-2016
New Robo-Salamander Can Really Move
By Charles Q. Choi, Live Science Contributor
The Pleurobot is a salamander-inspired robot that can walk and swim like an amphibian.
Credit: Hillary Sanctuary/EPFL
With the help of X-ray videos, scientists have developed a new robot that mimics the way salamanders walk and swim.
The amphibious machine could shed light on the evolutionary leap that vertebrates made from the water to the land, the researchers said. The salamander-inspired robot could also one day be used for search-and-rescue missions or inspection operations, the scientists added.
In general, scientists investigate animal locomotion for insights that could, among other things, help people recover from devastating losses of mobility, said study senior author Auke Ijspeert, a bioroboticist at the Swiss Federal Institute of Technology in Lausanne. [The 6 Strangest Robots Ever Created]
Increasingly, scientists are creating robot copies of animals to perform such investigations of animal locomotion. One of the benefits of using robots is that the machines' actions are relatively easy to repeat, Ijspeert and his colleagues said. In addition, researchers can tinker with robot shapes in a methodical way, and the bots can perform movements that are unnatural or dangerous for animals, the scientists added.
The researchers focused on salamanders to shed light on the evolution of animal locomotion. "Salamanders have a body structure that is very close to the fossils of the first terrestrial vertebrates — that is, the first animals that switched from swimming to walking," Ijspeert told Live Science.
To create robo-salamanders, the researchers began by studying Pleurodeles waltl, a salamander about 7 inches (18 centimeters) long that moves both on land and in the water. The scientists took X-ray videos of two P. waltl specimens from the top and sides, tracking up to 64 points along the skeletons of the salamanders as they performed a variety of motions, such as walking on the ground, crawling underwater and even swimming.
The scientists then used a 3D printer to manufacture the skeleton of the robot. Onto this machine, they added 27 motors and a waterproof dry suit that was tailor-made to keep the robots' electronics from getting wet.
The so-called Pleurobot has fewer bones and joints than real-life salamanders. For instance, whereas the real amphibian has 40 vertebrae, the robot has only 11 segments along its spine mimicking vertebrae. [Super-Intelligent Machines: 7 Robotic Futures]
Still, the researchers said Pleurobot could imitate many salamander movements, especially at the limbs. This is because during the design of Pleurobot, the research team's computer models identified the minimum number of motorized segments needed to copy salamander motions, as well as the optimal placement of these parts along the robot's body.
The researchers have built salamander robots before. However, "what excites me most about Pleurobot is that for the first time we can test behaviors with a physical body that has the ability to move like the real animal, as never before," Ijspeert said.
"The robot can serve as a scientific tool to investigate how a newer mode of locomotion, walking with limbs, can be added to an older mode of locomotion, swimming," Ijspeert said. "Like the real salamander, the robot is able to perform both modes of locomotion. Both involve body undulations, but with different properties. During swimming, the undulations travel along the body like in lampreys and eels, with limbs folded backwards, while during walking, they stay in place and are well-coordinated with the limb movements in order to optimize forward speed."
In addition to providing insights on the evolution of animal locomotion, Pleurobot may also show how robots can move well in disorderly environments, Ijspeert said. "With improved control and sturdier mechanics, I hope to see Pleurobot helping in search-and-rescue scenarios in the near future," he said.
The scientists detailed their findings online June 29 in the journal Interface.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
30-06-2016
'3Doodler' Pen Lets You Draw 3D-Printed Creations in Midair
By Jaclyn Jansen, Live Science Contributor
The 3Doodler is a standard 3D printer that has been transformed into a pen.
Credit: 3Doodler
It wasn't long ago that the idea of printing something in three dimensions sounded like science fiction. But over the past decade, 3D printers have become widespread and are now used to create everything from decorative baubles to robot parts to medical devices.
Still, using a 3D printer isn't always simple: The machine is frequently housed within a box the size of a microwave, and it requires technical software and, in some cases, a detailed knowledge of design. But now, a company called 3Doodler has transformed the standard 3D printer into a pen, allowing people to draw 3D creations freely in the air — without the need for a computer or any software.
In 2012, Maxwell Bogue and Peter Dilworth, co-founders of 3Doodler along with Daniel Cowen, were trying to come up with the next great kids' toy. They said they frequently used 3D printers to craft prototypes of their designs, and one night, they spent 14 hours printing a dinosaur leg, only to find that the printer had missed a section, leaving a gap in the model. [Best Educational Toys & Games for Kids]
The two wished they "could just take the nozzle off the 3D printer and fill in the missing gap," Bogue, now CEO of the company, told Live Science. So, the inventors set out to design a product that could do just that.
Bogue and Dilworth took apart a 3D printer and added a computer chip to the nozzle so that they could control the device. When that rudimentary model worked as a proof of concept, the team set out to streamline the design to create a more user-friendly pen, they said.
The first prototypes came straight from a standard 3D printer. "We printed the shells and the casings and everything that's held together," Bogue said.
The inventors of the 3Doodler originally set out to make the next great kids' toy.
Credit: 3Doodler
When it was done, they pulled the hot nozzle off the printer and used it in their pen. Over about eight months, they refined the design, finally producing the first version of the product, Bogue said.
In a lot of ways, the 3Doodler works like a sophisticated hot-glue gun: A heating element melts plastic, and it is extruded out through a nozzle. But glue guns use a hand pump to push the plastic out of the tip, which can make it clump. The challenge with the 3Doodler was to find a way to make the plastic flow steadily and smoothly, so the inventors designed the pen with a motor to propel the plastic filament, they said.
The heater inside the 3Doodler runs about 355 degrees to 460 degrees Fahrenheit (180 to 240 degrees Celsius) to effectively melt the most common plastic filaments (known as PLA and ABS). But at that temperature, the plastic would take a long time to cool, making it impossible to draw in the air, Bogue said. As a result, Bogue and Dilworth added a cooling fan to the 3Doodler, which brings the temperature of the plastic down to about 280 degrees to 300 degrees F (140 to 150 degrees C) when it leaves the pen, and the plastic hardens within seconds, Bogue said. [The 10 Weirdest Things Created By 3D Printing]
The inventors ran a wildly successful Kickstarter campaign to raise money for the project, collecting more than $2.3 million from more than 26,000 backers. The pen is now in its third version, known as the 3Doodler Create, and it has been used for a variety of creations, including artwork, clothing and wallets.
But despite its early success, the initial iterations of the 3Doodler still didn't satisfy Bogue's original mission. "This would be an awesome kids' toy, but it's too hot," Bogue said.
The 3Doodler Create far exceeds the 127-degree F (53 degrees C) maximum temperature allowed for children's products, as set by the EU Toys Safety Directive. So the company teamed up with materials scientists to develop an entirely new type of plastic, and after three years, they created a biodegradable, food-safe plastic that melts at between 113 degrees and 122 degrees Fahrenheit (45 to 50 degrees C). This means that it is safe for kids and can even be used to draw directly on the skin without causing burns, according to the company.
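For reference, the temperature figures quoted above can be checked with the usual Fahrenheit-to-Celsius conversion, C = (F - 32) x 5/9:

```python
def f_to_c(f):
    return (f - 32) * 5 / 9

# Temperatures quoted in the article, converted for a quick sanity check.
for label, f in [("Create nozzle, low", 355),
                 ("Create nozzle, high", 460),
                 ("exit temperature, low", 280),
                 ("exit temperature, high", 300),
                 ("EU toy limit", 127),
                 ("Start filament, low", 113),
                 ("Start filament, high", 122)]:
    print(f"{label:22s}: {f:3d} F = {f_to_c(f):5.1f} C")
```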
The new pen, known as the 3Doodler Start, is designed for kids ages 8 and older. The rechargeable battery and 16 different colors of filaments make the pen ideal for not just recreational use but also classroom use, the inventors said. In particular, the company is hoping that the new pen will significantly enhance STEM education, Bogue added.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
21-06-2016
Star Trek Tricorder: Medical Device Can Scan DNA, Diagnose Illness
A newly developed handheld device can scan for disease, which is a feat previously featured only in science fiction.
After nearly a decade of development, a real life "Tricorder" has been developed to scan for sickness and disease.
(Photo : Getty Images)
The "Tricorder," a medical device used on the famed sci-fi space show "Star Trek," has become a real working device, with the ability to scan DNA and detect multiple diseases within the human body, according to Daily Mail.
Jonathan O'Halloran, a 39-year-old inventor who has been working on the device for eight years, believes that the device can conduct complex lab analysis quickly and easily, saving time and money. O'Halloran is the co-founder of QuantuMDx, a company that is working on bringing simple handheld solutions to doctors all over the world.
One of its headlining inventions, the QPOC, mirrors the abilities of the famous Tricorder. The device can take samples of blood, check for specific diseases and deliver pinpointed analysis in about 10 to 15 minutes.
"Trying to explain a handheld DNA test in the background of everything that's out there at the moment is very tricky," said O'Halloran. "We're trying to get to a point where we can do in-field diagnosis."
Right now, the current models of the QPOC are being handmade in Singapore for around $88, according to The Kernel.
The company is working to improve the device and make it more cost-effective.
Arizona startup Local Motors has just unveiled its electric bus that's 3D printed, and it can talk to its passengers using IBM Watson Internet of Things (IoT) for Automotive.
ALREADY THERE
Self-driving vehicles are the Holy Grail of the transportation technology of the next age. It’s the stuff of science fiction, but it’s coming to life. Big companies like Google, Tesla, General Motors, and even the US Government are accelerating research into the field. But it seems like a small startup beat them to the punch.
The electric vehicle, which can carry up to 12 people, is equipped with IBM Watson Internet of Things (IoT) for Automotive, which is IBM’s car-focused cognitive learning platform.
IBM
The company, unlike others that mix traditional automotive making and self-driving tech, builds the vehicles from the ground up, and produces most components with 3D printers.
The company is able to print a vehicle in about 10 hours and assemble it in another hour. It is also looking at “micro-factories” that could open-source the building of future designs. In fact, the use of 3D printing allows designs based on what individual customers want, and avoids the large infrastructure costs of traditional automakers.
BRAINS OF THE OPERATION
Olli is the first vehicle to utilize the cloud-based cognitive computing capability of IBM Watson IoT to analyze and learn from high volumes of transportation data, which is produced by more than 30 sensors embedded throughout the vehicle.
In fact, IBM Watson will not only allow automated driving, but also interaction with passengers. Passengers will be able to interact conversationally with Olli while traveling from point A to point B, discussing topics about how the vehicle works, where they are going, and why Olli is making specific driving decisions.
“By having authentic conversations with riders about their journey the more they will be connected to technology itself, making them part of the experience rather than an observer. This user experience is the key to making self-driving vehicles a real part of our lives rather than a tech vision of the future”, says Harriet Green, General Manager of IBM Watson Internet of Things in a press release.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
16-06-2016
An Ex-NASA Chief is Making Chips That Use The Same Biological Principles As The Brain
Above: KnuEdge founder and former NASA head Daniel Goldin.
Image Credit: KnuEdge
IN BRIEF
Ex-NASA Chief Dan Goldin has revealed his company KnuEdge, which develops neural chips that follow the principles of the human brain and produces voice recognition and authentication technology.
OUT OF THE SHADOWS
After almost 10 years of working incognito, former National Aeronautics and Space Administration head Daniel Goldin is finally ready to formally present KnuEdge to the world.
KnuEdge is a “neural technology innovation company,” an outfit that builds hardware and software based on neural technology, with a main focus on human-machine interaction. While newly revealed publicly, it has been in stealth mode for a decade now, and has already raised $100 million in funding to build its neural chips.
The company has revealed its two primary products: KnuVerse, which is a voice authentication technology, and KnuPath, its state-of-the-art neural chip. It has also unveiled Knurld.io, a software development kit with a cloud-based voice recognition and authentication service.
PRODUCTS
Foremost of these offerings is KnuPath. Its inspiration comes from the inner workings of the brain, much like several products of IBM. Specifically, the chip is built on the same biological principles that the brain uses to get a lot of computing work done with a small amount of power, something called “sparse matrix heterogeneous machine learning algorithms.”
Say that three times fast.
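The article only names the technique, so as a generic illustration of why sparse matrix workloads save computation (and therefore power), here is a small sketch using SciPy; the sizes and density are made up and have nothing to do with KnuEdge's hardware:

```python
import numpy as np
from scipy import sparse

# With only a small fraction of non-zero weights, a sparse matrix-vector
# product touches far fewer numbers than a dense one.

n = 4096
density = 0.01                                    # 1% of weights are non-zero (assumed)
W = sparse.random(n, n, density=density, random_state=0, format="csr")
x = np.random.default_rng(0).standard_normal(n)

y = W @ x                                         # sparse matrix-vector product

dense_ops = n * n
sparse_ops = W.nnz                                # one multiply-add per stored value
print(f"non-zeros: {W.nnz}  (~{100 * density:.0f}% of a dense matrix)")
print(f"work ratio dense/sparse: {dense_ops / sparse_ops:.0f}x")
```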
The KnuPath neural chip. Credit: KnuEdge
KnuPath has something called LambdaFabric computing. The chip, KnuEdge’s first model, has 256 cores. Each of the cores can run a different algorithm, and all can run simultaneously, since the LambdaFabric makes it possible to instantly connect these cores to each other. The LambdaFabric is designed to connect up to 512,000 devices, with an interaction latency of only 400 nanoseconds.
KnuVerse comes from military-grade voice recognition and authentication technology, and hopes to develop the potential of voice interfaces in next-generation computing. It primarily focuses on the biometrics side, the use of the human voice to authenticate computers, mobile/web apps and IoT (Internet-of-Things) devices. Its technology eliminates noise, allowing for use even in extremely loud environments.
Part of the development of voice technologies is Knurld.io, which allows businesses and other parties to tap into the technology of KnuVerse. It delivers speaker authentication interfaces for developers and businesses, so that the voice recognition and authentication service can be integrated into apps and other UI’s.
Above: KnuEdge’s KnuPath chip.
Image Credit: KnuEdge
And to think—the idea for the company and its cutting edge technology arose over a perceived need to supply faster and more efficient computing, able to crunch tens of millions of lines of computer code, for a potential Mars mission.
“It all started over a mission to Mars,” says Goldin, with immense and justifiable pride in the new endeavor.
Developments and improvements in the process and the materials in the field of 3D printing have taken up applications in different sectors. 3D printing industry applications range from aerospace design to health care. However, one application that will put a smile on your face is helping injured animals.
A team of doctors in Sao Paulo has come together because of their common love for science and animals. The team, known as ‘Animal Avengers,’ has successfully reconstructed artificial beaks for three toucans, a parrot, and a goose. They have designed the first-ever titanium prosthetic beak for a macaw and built a brand-new plastic-based replacement shell for a traumatized tortoise.
A tortoise whose shell was completely burnt in a bush fire was fighting for its life when the Animal Avengers came to its rescue. The reptile, named Freddy, is the world’s first tortoise with a 3D-printed prosthetic shell.
The team explained that this task was very difficult. Graphic designer Cicero Moraes explained the process: “It took about 40 photos [to build a model and reconstruct the shell]. We took a healthy animal, took the same 40 photos, reconstructed that animal in 3D and put it into the computer.”
To create the complete shell, the design was printed out with the help of a desktop printer in four individual pieces using a low-cost corn-derived material. These four pieces were later slotted together like a jigsaw puzzle. It took 50 hours to print a single piece.
Image courtesy sciencealert.com
Nidhi Goyal
Nidhi is a gold medalist Post Graduate in Atmospheric and Oceanic Sciences. You can also find Nidhi on Google+.
We may have a new combat exoskeleton prototype in 2018. It will have body armor that makes use of a liquid that solidifies in milliseconds, and a tiny, powerful engine for recharging the suit's systems.
We’ve all seen what Iron Man can do in the movies, and it’s rather impressive—what with all the flying and absorbing bullets.
Ultimately, all of his abilities come from the incredible technology behind his suit. And it’s to this inspiration that Special Operations Command (SOCOM) is turning for their future combat exoskeleton prototypes, which are to be ready in 2018.
An initial design of the TALOS exoskeleton. Credit: Army
LIQUID ARMOR
In 2013, SOCOM expanded their development of such a suit, which they call the Tactical Assault Light Operator Suit (TALOS). Navy SEALs or Special Forces would use these suits for special operations.
However, unlike Iron Man's metallic, clunky suit, these suits must let military operators move with great mobility; therefore, they will be made with a “liquid body armor” that turns solid within milliseconds when a magnetic field or an electric current is applied to the material.
The technology is being developed by scientists at the Massachusetts Institute of Technology. A Polish company, Moratex, is working on a similar kind of liquid body armor, using a non-Newtonian liquid called Shear-Thickening Fluid (STF).
Essentially, what’s being designed is a suit of armor that remains soft and malleable during normal operations, but hardens instantaneously at the point of contact—deflecting and dispersing the immense destructive energy produced by a hit from an enemy round or shrapnel.
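One common way to describe shear-thickening behaviour is a power-law fluid, where the apparent viscosity grows with shear rate. The sketch below uses assumed constants purely for illustration; they are not Moratex or MIT figures:

```python
# Illustrative power-law model of a shear-thickening fluid (assumed constants):
# apparent viscosity eta = K * (shear_rate)**(n - 1).
# With a flow index n > 1, viscosity climbs steeply as the shear rate rises,
# which is the behaviour the "liquid armor" exploits on impact.

K = 0.5    # consistency index, Pa*s^n (assumed)
n = 2.0    # flow behaviour index > 1 => shear thickening (assumed)

for shear_rate in (0.1, 1.0, 100.0, 10000.0):   # 1/s: slow handling -> ballistic impact
    eta = K * shear_rate ** (n - 1)
    print(f"shear rate {shear_rate:8.1f} 1/s -> apparent viscosity {eta:10.1f} Pa*s")
```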
POWERED BY A SMALL ENGINE
TALOS will reportedly include a physiological subsystem that uses various sensors on the skin to monitor the wearer’s vital signs in great detail. More than that, the suit will provide vastly increased strength.
And all of this added capability needs a great deal of power.
General Atomics intends to provide the TALOS suits with a tiny combustion engine that can nevertheless run at 10,000 RPM. This technology will rely on Liquid Piston’s ‘X’ engine, which employs the High Efficiency Hybrid Cycle. According to the company, this engine has a theoretical efficiency of 75 percent and can be very quiet, since it consists of only two moving parts: a shaft and a rotor.
The purpose of the engine in the exoskeletal suit would be to recharge batteries, which in turn supply energy to all of the power-thirsty components which will be integrated with the TALOS suit—including computer and sensor systems, as well as robotic strength augmentation.
It’s a fascinating glimpse into the future of warfighting technology—and it brings a whole new meaning to the Army’s old recruiting slogan “Be All You Can Be.”
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
03-06-2016
Artificial Intelligence: What We Have to Look Forward to and What We Have to Fear
Getty
IN BRIEF
"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial."
Max Tegmark
WHAT IS AI?
From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI).
While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
WHY RESEARCH AI SAFETY?
In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid.
The creation of strong AI might be the biggest event in human history…it might also be the last.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind.
By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.
HOW CAN AI BE DANGEROUS?
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:
The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.
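A toy way to see the "literally what you asked for" failure is to score a few made-up airport trips first on travel time alone and then on travel time plus the comfort term the request left unstated; the optimizer's choice flips. All numbers below are invented for illustration.

```python
# Toy illustration of objective misspecification: candidate plans for getting
# to the airport, scored first on speed alone and then on speed plus the
# passenger comfort the request never mentioned.

plans = {
    "drive normally":         {"minutes": 45, "discomfort": 0},
    "run every red light":    {"minutes": 25, "discomfort": 80},
    "max speed, hard brakes": {"minutes": 20, "discomfort": 100},
}

def best(weight_comfort):
    score = lambda p: p["minutes"] + weight_comfort * p["discomfort"]
    return min(plans, key=lambda name: score(plans[name]))

print("objective = time only      ->", best(weight_comfort=0.0))
print("objective = time + comfort ->", best(weight_comfort=1.0))
```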
You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
WHY THE RECENT INTEREST IN AI SAFETY
Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?
The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime.
Since it may take decades to complete the required safety research, it is prudent to start it now.
While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.
Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?
FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI’s position is that the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.
If you love to spend your leisure time looking at the water or always dream about living in the water, Italian mini-yacht manufacturer Jet Capsule has a surprise for you to fulfill your dreams.
The company has come up with a creative concept for a saucer-shaped UFO, or ‘Unidentified Floating Object’, which offers a completely off-grid existence floating on the sea.
As per the company’s co-founders, Pierpaolo Lazzarini and Luca Solla, the floating UFO home is intended for “living in a floating house and moving slowly around the world.”
Image courtesy Jet Capsule
Here are some of its special features:
Energy is generated from 40 square meters (430 square feet) of solar panels, which are located in the closeable space at the top of the roof for protection during a storm. Moreover, wind and water turbines could also be added, which would generate enough power for household operations and the motor on cloudy days.
As per Jet Capsule, this UFO is “unsinkable.” Furthermore, “The main structure of the floating object can be aligned with a compass, keeping the position angle oriented on the desired cardinal direction, even in rough sea conditions.”
Drinking water is made available by solar-powered desalination and a system that purifies rainwater.
Food comes from the vegetable garden that encircles the structure and measures 12.5 m (41 ft) in diameter.
The lower level of the UFO is submerged in the water and has a viewing window to enjoy the underwater world.
If you are planning to buy this floating UFO home, then you have to wait as it is still a concept. At present, the company is seeking investors to build the first working prototype, at an estimated cost of $800,000 USD. However, the company is hopeful that the UFO’s price will eventually come down to $200,000 USD.
Image courtesy Jet Capsule
Nidhi Goyal
Nidhi is a gold medalist Post Graduate in Atmospheric and Oceanic Sciences. You can also find Nidhi on Google+.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
28-05-2016
A first: the world's first robot baby born in Amsterdam!
A first: the world's first robot baby born in Amsterdam!
Caroline Kraaijvanger
Time to hand out the traditional Dutch birth treats at a laboratory in Amsterdam: for the first time, two robots have reproduced and brought a genuine robot baby into the world.
Professor Guszti Eiben and his research team will present the robot baby to the general public this afternoon during the Campus Party at the Jaarbeurs in Utrecht. It marks the start of a new era: industrial evolution. Robots that can reproduce can evolve, adapting their 'brain', 'body' and behavior as needed.
Here you see one of the robot parents. With a bit of imagination, the robot has the shape of a gecko. Its 'head' houses the CPU, battery and light sensor.
The act Reproducing robots. It sounds like science fiction, but it no longer is. So what exactly should we picture? "It is not as exciting as most people think," Eiben tells Scientias.nl matter-of-factly. "Before humans have sex, there is a negotiation phase; we call it 'dating' or 'courtship'. The robots do that too. They first have to approach each other to within a certain distance, and then they evaluate one another. If the evaluation is positive, they reproduce." The robots then send their digital DNA to a 3D printer. "That DNA is really nothing more than the robot's build plan, the code that describes the robot, both its software and its hardware. Both parents send that build plan to the 3D printer, which combines the two plans and prints a large share of the required parts from the result. That happens completely at random, so even if two robots were to have several children, every child would look different." And that is robotic reproduction: with mutual consent, without physical contact, and assisted by a 3D printer.
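To make the 'build plan' idea concrete, here is a minimal sketch of how two digital genomes could be recombined at random into a child design. The gene names and the uniform-crossover scheme are illustrative assumptions, not the encoding actually used in Eiben's lab.

```python
import random

# Toy 'digital DNA': a build plan represented as a dictionary of design genes.
# The gene names are hypothetical, chosen only for illustration.
parent_gecko = {"body": "gecko", "legs": 4, "sensor": "light", "gait": "crawl"}
parent_spider = {"body": "spider", "legs": 6, "sensor": "light", "gait": "scuttle"}

def recombine(parent_a, parent_b):
    """Uniform crossover: each gene is copied at random from one of the parents."""
    return {gene: random.choice([parent_a[gene], parent_b[gene]])
            for gene in parent_a}

child = recombine(parent_gecko, parent_spider)
print(child)  # a different mix on every run, so siblings would all look different
```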
Here you see the other parent of the first robot baby. This one is shaped more like a spider, with the CPU, battery and light sensor in the middle. As you can see, this parent has a very different form from the other one, so it is always a surprise what the offspring of these two robots will look like.
The nursery days That 'intercourse' produces a 'newborn' robot, which spends some time in a kind of 'nursery' until it is 'mature'. "Depending on the application, we define a set of criteria it has to meet before it counts as an adult. Perhaps the robot first has to complete a charging cycle, or reach a minimum speed." If the robot fails any of the criteria, 'nature' is merciless: the robot is recycled. If it meets them all, it may leave the nursery and enter the 'arena', where it meets other robots and can look for a partner. The story then essentially starts all over again.
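A minimal sketch of such a maturity gate might look like this; the two example criteria come from the article, while the specific threshold value is an illustrative assumption.

```python
# Maturity gate for the nursery, based on the criteria Eiben mentions:
# the newborn must, for example, complete a charge cycle and reach a minimum speed.
MIN_SPEED_M_S = 0.05   # assumption: illustrative threshold, not a figure from the lab

def ready_for_arena(completed_charge_cycle, top_speed_m_s):
    """Return True if the newborn robot may leave the nursery for the arena."""
    return completed_charge_cycle and top_speed_m_s >= MIN_SPEED_M_S

print(ready_for_arena(True, 0.08))   # True: the robot enters the arena
print(ready_for_arena(True, 0.02))   # False: the robot is recycled
```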
HOPE FOR THE SINGLE ROBOT
A date is no guarantee of success, not for robots either. A robot that performs well may still fail to find a partner willing to 'mate' with it. "Introducing asexual reproduction could then be an option," Eiben suggests. "If a robot is very good but cannot find a partner, we can allow it to send only its own code to the birth clinic (the 3D printer, ed.)." The result is a clone of that robot. "That is the beauty of this. We can build anything. We are only bound by the laws of physics; the laws of biology we write ourselves."
The date That is robot reproduction in a nutshell. But it starts, quite humanly in fact, with an encounter: a date. Just as we humans judge a potential partner during a date, so do the robots. What does a robot actually look for? That depends somewhat on the task description, Eiben explains, and can be set by humans. Suppose you are dealing with robots that mine ore at sea. You naturally want those robots to bring up as much ore as possible, so you can program them to reproduce only when they perform well, that is, when they collect a lot of ore. "One robot can then ask the other, for example, how many kilos of ore it has already brought to the surface." Only if both robots have hauled up an impressive amount of ore does reproduction take place, hopefully yielding a new robot that performs just as well or even better.
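The performance check in the ore-mining example could be as simple as the sketch below; the idea of comparing reported hauls comes straight from the article, while the threshold value is an illustrative assumption.

```python
# Mate-selection rule for the hypothetical ore-mining scenario:
# reproduce only if both candidates report a sufficiently large haul.
ORE_THRESHOLD_KG = 100.0  # assumption: minimum haul required to qualify for mating

def willing_to_mate(own_haul_kg, partner_haul_kg, threshold=ORE_THRESHOLD_KG):
    """Both robots must exceed the threshold before reproduction is allowed."""
    return own_haul_kg >= threshold and partner_haul_kg >= threshold

print(willing_to_mate(140.0, 85.0))   # False: the partner underperforms
print(willing_to_mate(140.0, 120.0))  # True: both qualify, so they reproduce
```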
Natural selection That, after all, is the beauty of robots that reproduce: they evolve and gradually get better at the task assigned to them. Although the 3D printer combines the DNA of father and mother entirely at random, natural selection still takes place. "You get it for free, and you could not even switch it off if you wanted to," Eiben says. The robots that come out of the 3D printer have to survive, find a partner and produce children. Robots that, for whatever reason, cannot manage that disappear from the stage together with their DNA, and only the better robots and their DNA thrive.
And here it is: the very first robot baby, created from the 'DNA' of the two robots shown above.
Safety Perhaps it gives you the creeps: robots that reproduce and evolve. Where is this heading? Should it keep you awake at night? Eiben can be brief about that. "No. We deliberately chose a central facility from which the children come." As soon as Eiben closes that facility, reproduction stops too; control remains with humans. "Robots that assemble their own children, get pregnant or lay eggs: we are simply not going to do that in my lab, because then robots could multiply and develop anywhere, uncontrolled." That sounds fairly reassuring. But Eiben's robots do evolve. Could they not become so smart and capable through evolution that they no longer need the facility, so that we lose control after all? "I have racked my brain over that for quite a while: can the robots work around it? But I do not see how they could."
"ROBOTS THAT ASSEMBLE THEIR OWN CHILDREN, GET PREGNANT OR LAY EGGS: WE ARE SIMPLY NOT GOING TO DO THAT IN MY LAB"
Industrial evolution For now, then, we need not lose sleep over robots that reproduce and evolve. But Eiben's work is only the beginning, a first small step towards what he calls 'industrial evolution'. "At the moment it is all still rather rickety," he says. "The 3D printer needs about 20 hours to print the parts, and then we have to dig through boxes of CPUs and wiring and assemble the robots by hand. We are getting better at that, but it still takes one to two hours." A robot 'birth' therefore takes almost a full day. That is going to change: within three to five years, 3D printers are expected to be able to print electronics and moving parts. "Once that happens, it really comes within reach," Eiben predicts. "And then you can bet that this will start happening in many more places." Will everyone in those places be as careful as Eiben? That is very much the question. "This technology vastly expands what robots can do, but misuse is almost impossible to prevent," Eiben believes.
This is what a habitat in which robots can reproduce might look like. It includes, among other things, the Birth Clinic, the Nursery and the Arena where robots can meet one another.
Applications But suppose that does not happen, and self-reproducing robots are eventually deployed at scale for legitimate purposes. Where might we encounter them? "I see three scenarios. The first is very down to earth: the robots are used as a research instrument, giving us more insight into artificial intelligence and evolution. A second option is to deploy robots in places on Earth, or on other planets, that we do not yet know well. For example: robots working as forest rangers with a single task, monitoring. What should such robots look like? Should they have legs, or are wheels more practical? Should they be large or small? To find out, we have to breed the robots, just as farmers breed cows for more milk. You then combine natural selection with human selection: a person picks the best robots for reproduction." Eventually an optimized robot rolls out that can be cloned and released into the forest. No reproduction takes place in the forest itself; that happens only under human supervision, at a 'robot breeding farm'. The third option Eiben envisions goes a step further. "A third scenario is that we give the robots the birth clinic to take along. That could be the case, for example, when robots are deployed to make other planets habitable." The robots can then optimize themselves autonomously for their assigned task.
It is all still a distant prospect, one that may belong to the high point of industrial evolution. We are not there yet; the birth of this robot baby is only a first step. "No, I did not burst into tears," Eiben says when asked what the birth meant to him personally. But not everyone is as level-headed as Eiben. "Recently I gave an internal trial demonstration for a professor of artificial intelligence, a no-nonsense computer scientist and an evolutionary biologist. I showed them the parents, placed the robot baby in their hands and told them they were holding the first robot baby in the world. They later admitted it had given them goosebumps."
Dubai has installed the world's first 3D-printed office space—a sleek, futuristic-looking building with energy-efficient features that took only 17 days to print and two days to install.
DUBAI’S NEWEST EXPERIMENT
The city of Dubai, the largest and most populous in the United Arab Emirates (UAE), has already proven itself to be in the vanguard of innovation.
It’s really an incubator of innovative technological and architectural ideas—an unrestricted canvas for the human imagination to have free play in designing the city of the future.
“We implement what we plan, and we pursue actions not theories. The rapidly changing world requires us to accelerate our pace of development, for history does not recognize our plans but our achievements,” said His Highness Sheikh Mohamed bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE, and hereditary Ruler of Dubai.
The comments, which strike a suitably optimistic note, were made at the opening of the “Office of the Future”—the world’s first 3D-printed office building. It’s Dubai’s newest offering, and it has the potential to reshape the way we think about the spaces in which we work and live (pun intended).
Dubai’s new 3D-printed office. Credit: Ahmed Jadallah/Reuters
THE ARCHITECTURE OF THE FUTURE
The new, 3D-printed structure is located on the grounds of the Emirates Towers, and will temporarily house the Dubai Future Foundation and Mostaqbal.
And it looks pretty futuristic, too: no harsh lines or blocky regularity, just rounded edges and organic curves. Even the material is innovative, a special cement mixture made in the UAE and the United States.
It was printed using a giant 3D printer measuring 36.5 meters long, 12 meters wide, and 6 meters high (120 ft x 40 ft x 20 ft). The machine used a multi-axis robotic arm to print in 3 dimensions. It took only 17 days to print the structure, and two days to install it on-site.
Sheikh Mohammed bin Rashid Al Maktoum, Ruler of Dubai, stands before his city’s newest technological and architectural wonder. Credit: Ahmed Jadallah/Reuters
And the savings in labor costs were enormous—up to 50%. That’s because only one worker was required to supervise the printing process; during the actual installation, seven workers were needed to provide finishing touches and 10 electricians and mechanics were needed to manage specialized connections and equipment.
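As a rough illustration, the sketch below totals the labor-hours implied by those figures; the eight-hour shift and the reading of the 50% saving as a halving of labor-hours are assumptions, not numbers from the project.

```python
# Labor-hours implied by the reported figures for the 3D-printed office.
HOURS_PER_SHIFT = 8                                 # assumption: one 8-hour shift per worker per day

printing_hours = 1 * 17 * HOURS_PER_SHIFT           # 1 supervisor for the 17 printing days
install_hours = (7 + 10) * 2 * HOURS_PER_SHIFT      # 17 workers for the 2 installation days

total_hours = printing_hours + install_hours
print(f"Printed office: {total_hours} labor-hours")                 # 408
# If the quoted ~50% saving refers to labor-hours, a conventional build
# of the same office would need roughly twice as many:
print(f"Implied conventional build: ~{2 * total_hours} labor-hours")
```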
We’ll see where this new technology takes us; it has the potential to revolutionize housing, making it low-cost and affordable. But at the same time, it means big changes in the labor economy—and the possibility of ushering in widespread unemployment in the housing sector.
Category: SF-snufjes, Robotics and A.I. Artificiel Intelligence (E, F en NL)
25-05-2016
Light Rider Unveils The World’s First 3D Printed Electric Motorcycle
Light Rider Unveils The World’s First 3D Printed Electric Motorcycle
APWorks
IN BRIEF
APWorks, a subsidiary of Airbus that focuses on cutting-edge engineering, has just announced its new product: the Light Rider, a metal 3D-printed motorcycle claimed to be the first of its kind.
LIGHTER RIDES
Additive layer manufacturing (ALM), more commonly known as 3D printing, is increasingly used in engineering today. Just a few months ago we saw the world's first 3D-printed car roll out, and now APWorks has unveiled the Light Rider, the world's first 3D-printed electric motorcycle.
The company is a wholly owned subsidiary of the aircraft manufacturer Airbus and was established to focus on state-of-the-art technologies such as ALM and the creation of advanced materials. The Light Rider is one of the larger testaments to that work, with a complex design produced through metallic 3D printing.
APWorks claims that the vehicle is 30% lighter than conventional e-motorcycles, with a remarkable weight of just 35 kg (77 lbs). Light as it is, it is powered by a 6 kW electric motor that takes it up to 80 km/h (about 50 mph).
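For context, here is a quick sketch of the power-to-weight ratio implied by those specifications; the 75 kg rider mass is an assumption added purely for illustration.

```python
# Power-to-weight implied by the published Light Rider specifications.
MOTOR_POWER_W = 6_000   # 6 kW, as stated by APWorks
BIKE_MASS_KG = 35.0     # as stated by APWorks
RIDER_MASS_KG = 75.0    # assumption: average rider, not an APWorks figure

bare = MOTOR_POWER_W / BIKE_MASS_KG
laden = MOTOR_POWER_W / (BIKE_MASS_KG + RIDER_MASS_KG)
print(f"Bike alone : {bare:.0f} W/kg")    # ~171 W/kg
print(f"With rider : {laden:.0f} W/kg")   # ~55 W/kg
```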
BIONIC DESIGN
In a press release, APWorks emphasizes that 3D printing has transformed its design and manufacturing process in ways that go beyond aesthetics. The company used an algorithm inspired by natural patterns and structures to generate the Light Rider's highly optimized design.
The result? A frame that looks more like an "organic exoskeleton." But that was what they were aiming for: the complex hollow structure makes for a sturdy build with a lot of space saved. APWorks even used its own material in the product, Scalmalloy, an aluminum alloy as strong as titanium.
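APWorks has not published its algorithm, but the intuition behind hollow, bionically optimized members can be illustrated with a toy random search: minimize the mass of a hollow aluminum tube while keeping its bending stiffness above a target. Every material property and target below is an illustrative assumption, not an APWorks figure.

```python
import math
import random

# Toy 'bionic' sizing: find the lightest hollow aluminium tube (per metre)
# whose bending stiffness E*I still meets a target. Illustrative values only.
E_ALU = 70e9            # Pa, Young's modulus of aluminium (approx.)
RHO_ALU = 2700.0        # kg/m^3, density of aluminium (approx.)
EI_TARGET = 2.0e3       # N*m^2, assumed stiffness requirement for one frame member

def tube_properties(outer_d, wall_t):
    """Mass per metre and bending stiffness of a hollow circular tube."""
    inner_d = outer_d - 2 * wall_t
    area = math.pi / 4 * (outer_d**2 - inner_d**2)        # cross-section, m^2
    inertia = math.pi / 64 * (outer_d**4 - inner_d**4)    # second moment of area, m^4
    return area * RHO_ALU, E_ALU * inertia                # kg/m, N*m^2

best = None
random.seed(0)
for _ in range(20_000):                      # crude random search over candidate tubes
    outer_d = random.uniform(0.01, 0.08)     # 10-80 mm outer diameter
    wall_t = random.uniform(0.0005, 0.005)   # 0.5-5 mm wall thickness
    if 2 * wall_t >= outer_d:
        continue                             # not a valid hollow section
    mass, stiffness = tube_properties(outer_d, wall_t)
    if stiffness >= EI_TARGET and (best is None or mass < best[0]):
        best = (mass, outer_d, wall_t)

mass, outer_d, wall_t = best
print(f"Lightest feasible tube: {mass:.3f} kg/m, "
      f"D = {outer_d * 1000:.1f} mm, t = {wall_t * 1000:.2f} mm")
```

The search settles on a large-diameter, thin-walled section: a solid rod with the same stiffness would be several times heavier, which is why optimizers of this kind gravitate toward hollow, shell-like members.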
While the technology goes well beyond that of a conventional bike, the Light Rider's look does not depart entirely from the familiar motorcycle. APWorks asserts that its ALM-based production will not be limited to products like this, but will extend to robotics and even aerospace applications.
{ http://futurism.com/ }