The purpose of this blog is the creation of an open, international, independent and free forum, where every UFO-researcher can publish the results of his/her research. The languagues, used for this blog, are Dutch, English and French.You can find the articles of a collegue by selecting his category. Each author stays resposable for the continue of his articles. As blogmaster I have the right to refuse an addition or an article, when it attacks other collegues or UFO-groupes.
Druk op onderstaande knop om te reageren in mijn forum
Zoeken in blog
Deze blog is opgedragen aan mijn overleden echtgenote Lucienne.
In 2012 verloor ze haar moedige strijd tegen kanker!
In 2011 startte ik deze blog, omdat ik niet mocht stoppen met mijn UFO-onderzoek.
BEDANKT!!!
Een interessant adres?
UFO'S of UAP'S, ASTRONOMIE, RUIMTEVAART, ARCHEOLOGIE, OUDHEIDKUNDE, SF-SNUFJES EN ANDERE ESOTERISCHE WETENSCHAPPEN - DE ALLERLAATSTE NIEUWTJES
UFO's of UAP'S in België en de rest van de wereld Ontdek de Fascinerende Wereld van UFO's en UAP's: Jouw Bron voor Onthullende Informatie!
Ben jij ook gefascineerd door het onbekende? Wil je meer weten over UFO's en UAP's, niet alleen in België, maar over de hele wereld? Dan ben je op de juiste plek!
België: Het Kloppend Hart van UFO-onderzoek
In België is BUFON (Belgisch UFO-Netwerk) dé autoriteit op het gebied van UFO-onderzoek. Voor betrouwbare en objectieve informatie over deze intrigerende fenomenen, bezoek je zeker onze Facebook-pagina en deze blog. Maar dat is nog niet alles! Ontdek ook het Belgisch UFO-meldpunt en Caelestia, twee organisaties die diepgaand onderzoek verrichten, al zijn ze soms kritisch of sceptisch.
Nederland: Een Schat aan Informatie
Voor onze Nederlandse buren is er de schitterende website www.ufowijzer.nl, beheerd door Paul Harmans. Deze site biedt een schat aan informatie en artikelen die je niet wilt missen!
Internationaal: MUFON - De Wereldwijde Autoriteit
Neem ook een kijkje bij MUFON (Mutual UFO Network Inc.), een gerenommeerde Amerikaanse UFO-vereniging met afdelingen in de VS en wereldwijd. MUFON is toegewijd aan de wetenschappelijke en analytische studie van het UFO-fenomeen, en hun maandelijkse tijdschrift, The MUFON UFO-Journal, is een must-read voor elke UFO-enthousiasteling. Bezoek hun website op www.mufon.com voor meer informatie.
Samenwerking en Toekomstvisie
Sinds 1 februari 2020 is Pieter niet alleen ex-president van BUFON, maar ook de voormalige nationale directeur van MUFON in Vlaanderen en Nederland. Dit creëert een sterke samenwerking met de Franse MUFON Reseau MUFON/EUROP, wat ons in staat stelt om nog meer waardevolle inzichten te delen.
Let op: Nepprofielen en Nieuwe Groeperingen
Pas op voor een nieuwe groepering die zich ook BUFON noemt, maar geen enkele connectie heeft met onze gevestigde organisatie. Hoewel zij de naam geregistreerd hebben, kunnen ze het rijke verleden en de expertise van onze groep niet evenaren. We wensen hen veel succes, maar we blijven de autoriteit in UFO-onderzoek!
Blijf Op De Hoogte!
Wil jij de laatste nieuwtjes over UFO's, ruimtevaart, archeologie, en meer? Volg ons dan en duik samen met ons in de fascinerende wereld van het onbekende! Sluit je aan bij de gemeenschap van nieuwsgierige geesten die net als jij verlangen naar antwoorden en avonturen in de sterren!
Heb je vragen of wil je meer weten? Aarzel dan niet om contact met ons op te nemen! Samen ontrafelen we het mysterie van de lucht en daarbuiten.
Scientists have engineered a strain of bacteria with a genetic code unlike anything found in nature, marking a groundbreaking advance in synthetic biology.
The microbe, called Syn57, is a lab-made version of Escherichia coli, a bacterium that normally causes infections in the gut, urinary tract and other parts of the body.
Unlike all known life, which relies on 64 codons, or three-letter DNA sequences that tell cells how to build proteins, Syn57 uses just 57 codons.
Think of DNA as a cookbook where each codon is a three-letter word telling the cell which amino acids, or ingredients, to use.
Life normally has some duplicate instructions, but Syn57 strips out the extras while still functioning perfectly.
These freed-up codons open the door to entirely new possibilities, allowing scientists to create proteins and synthetic compounds that nature has never produced.
Syn57's unusual genetic code also makes it resistant to viruses, which rely on the standard DNA language to hijack cells. And because its code is so different, it is less likely to mix with natural organisms, easing safety concerns.
This breakthrough could also pave the way for new medicines, advanced materials and synthetic lifeforms beyond anything seen in nature.
The microbe, named Syn57, is a lab-engineered version of Escherichia coli, a bacterium that can naturally cause infections in the gut, urinary tract and other areas of the body
To tackle this huge project, scientists divided the genome into 38 pieces, each about 100,000 DNA letters long.
They built each piece in yeast and then inserted it into E coli using a method called uREXER, which combines CRISPR-Cas9 and other tools to swap in synthetic DNA in one step.
Some genome regions slowed growth or resisted changes, but the team solved these issues by adjusting gene sequences, untangling overlapping genes, and carefully choosing which codons to swap.
Step by step, the fragments were stitched together into the final, fully synthetic bacterium.
The result, Syn57, is the most heavily redesigned organism ever made, demonstrating that life can survive with a much smaller, simpler genetic code.
Wesley Robertson, a synthetic biologist at the Medical Research Council Laboratory in the UK, told the New York Times: 'We definitely went through these periods where we were like, 'Well, will this be a dead end, or can we see this through?'
Syn57 is alive, but barely. While normal E. coli can double in an hour, Syn57 takes four, making it 'extremely feeble,' Yonatan Chemla, a synthetic biologist at MIT who was not involved in the study.
The bacteria grew on a jelly-like surface and in a nutritious liquid, but at four times slower than their natural counterparts.
Dr Robertson and his team are now experimenting to see if they can make it grow faster.
If successful, scientists could eventually program it to do tasks that ordinary bacteria cannot.
In addition to the 20 standard amino acids that all life uses to make proteins, chemists can create hundreds of others.
Syn57's seven missing codons could potentially be reassigned to these unnatural amino acids, allowing the bacterium to produce new drugs or other useful molecules.
Syn57 could also make engineered microbes safer for the environment.
Microbes swap genes easily, which can be risky if engineered DNA spreads.
But a gene from Syn57 would be gibberish to natural bacteria because of its unique genetic code, preventing it from being used outside the lab.
Scientists have trained a four-legged robot to play badminton against a human opponent, and it scuttles across the court to play rallies of up to 10 shots.
By combining whole-body movements with visual perception, the robot, called "ANYmal," learned to adapt the way it moved to reach the shuttlecock and successfully return it over the net, thanks to artificial intelligence (AI).
This shows that four-legged robots can be built as opponents in "complex and dynamic sports scenarios," the researchers wrote in a study published May 28 in the journal Science Robotics.
ANYmal is a four-legged, dog-like robot that weighs 110 pounds (50 kilograms) and stands about 1.5 feet (0.5 meters) tall. Having four legs allows ANYmal and similar quadruped robots to travel across challenging terrain and move up and down obstacles.
Researchers have previously added arms to these dog-like machines and taught them how to fetch particular objects or open doors by grabbing the handle. But coordinating limb control and visual perception in a dynamic environment remains a challenge in robotics.
"Sports is a good application for this kind of research because you can gradually increase the competitiveness or difficulty," study co-author Yuntao Ma, a robotics researcher previously at ETH Zürich and now with the startup Light Robotics, told Live Science.
Teaching a new dog new tricks
In this research, Ma and his team attached a dynamic arm holding a badminton racket at a 45-degree angle onto the standard ANYmal robot.
With the addition of the arm, the robot stood 5 feet, 3 inches (1.6 m) tall and had 18 joints: three on each of the four legs, and six on the arm. The researchers designed a complex built-in system that controlled the arm and leg movements.
The team also added a stereo camera, which had two lenses stacked on top of each other, just to the right of center on the front of the robot's body. The two lenses allowed it to process visual information about the incoming shuttlecocks in real time and work out where they were heading.
The robot was then taught to become a badminton player through reinforcement learning. With this type of machine learning, the robot explored its environment and used trial and error to learn to spot and track the shuttlecock, navigate toward it and swing the racket.
To do this, the researchers first created a simulated environment consisting of a badminton court, with the robot's virtual counterpart standing in the center. Virtual shuttlecocks were served from near the center of the opponent's half of the court, and the robot was tasked with tracking its position and estimating its flight trajectory.
Then, the researchers created a strict training regimen to teach ANYmal how to strike the shuttlecocks, with a virtual coach rewarding the robot for a variety of characteristics, including the position of the racket, the angle of the racket's head, and the speed of the swing. Importantly, the swing rewards were time-based to incentivize accurate and timely hits.
The shuttlecock could land anywhere across the court, so the robot was also rewarded if it moved efficiently across the court and if it didn't speed up unnecessarily. ANYmal's goal was to maximize how much it was rewarded across all of the trials.
Based on 50 million trials of this simulation training, the researchers created a neural network that could control the movement of all 18 joints to travel toward and hit the shuttlecock.
A fast learner
After the simulations, the scientists transferred the neural network into the robot, and ANYmal was put through its paces in the real world.
Here, the robot was trained to find and track a bright-orange shuttlecock served by another machine, which enabled the researchers to control the speed, angles and landing locations of the shuttlecocks. ANYmal had to scuttle across the court to hit the shuttlecock at a speed that would return it over the net and to the center of the court.
The researchers found that, following extensive training, the robot could track shuttlecocks and accurately return them with swing speeds of up to approximately 39 feet per second (12 meters per second) — roughly half the swing speed of an average human amateur badminton player, the researchers noted.
ANYmal also adjusted its movement patterns based on how far it had to travel to the shuttlecock and how long it had to reach it. The robot did not need to travel when the shuttlecock was due to land only a couple of feet (half a meter) away, but at about 5 feet (1.5 m), ANYmal scrambled to reach the shuttlecock by moving all four legs. At about 7 feet (2.2 m) away, the robot galloped over to the shuttlecock, producing a period of elevation that extended the arm's reach by 3 feet (1 m) in the direction of the target.
"Controlling the robot to look at the shuttleclock is not so trivial," Ma said. If the robot is looking at the shuttlecock, it can't move very fast. But if it doesn't look, it won't know where it needs to go. "This trade-off has to happen in a somewhat intelligent way," he said.
Ma was surprised by how well the robot figured out how to move all 18 joints in a coordinated way. It's a particularly challenging task because the motor at each joint learns independently, but the final movement requires them to work in tandem.
The team also found that the robot spontaneously started to move back to the center of the court after each hit, akin to how human players prepare for incoming shuttlecocks.
However, the researchers noted that the robot did not consider the opponent's movements, which is an important way human players predict shuttlecock trajectories. Including human pose estimates would help to improve ANYmal's performance, the team said in the study. They could also add a neck joint to allow the robot to monitor the shuttlecock for more time, Ma noted.
He thinks this research will ultimately have applications beyond sports. For example, it could support debris removal during disaster relief efforts, he said, as the robot would be able to balance the dynamic visual perception with agile motion.
New research has created the first comprehensive effort to categorize all the ways AI can go wrong, with many of those behaviors resembling human psychiatric disorders.
(Image credit: Boris SV via Getty Images)
Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.
In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis" — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.
Created by Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), the project aims to help analyze AI failures and make the engineering of future products safer, and is touted as a tool to help policymakers address AI risks. Watson and Hessami outlined their framework in a study published Aug. 8 in the journal Electronics.
According to the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.
The study also proposes "therapeutic robopsychological alignment," a process the researchers describe as a kind of "psychological therapy" for AI.
The researchers argue that as these systems become more independent and capable of reflecting on themselves, simply keeping them in line with outside rules and constraints (external control-based alignment) may no longer be enough.
Their proposed alternative process would focus on making sure that an AI’s thinking is consistent, that it can accept correction and that it holds on to its values in a steady way.
They suggest this could be encouraged by helping the system reflect on its own reasoning, giving it incentives to stay open to correction, letting it ‘talk to itself’ in a structured way, running safe practice conversations, and using tools that let us look inside how it works—much like how psychologists diagnose and treat mental health conditions in people.
The goal is to reach what the researchers have termed a state of "artificial sanity" — AI that works reliably, stays steady, makes sense in its decisions, and is aligned in a safe, helpful way. They believe this is equally as important as simply building the most powerful AI.
The goal is what the researchers call "artificial sanity". They argue this is just as important as making AI more powerful.
Machine madness
The classifications the study identifies resemble human maladies, with names like obsessive-computational disorder, hypertrophic superego syndrome, contagious misalignment syndrome, terminal value rebinding, and existential anxiety.
With therapeutic alignment in mind, the project proposes the use of therapeutic strategies employed in human interventions like cognitive behavioral therapy (CBT). Psychopathia Machinalis is a partly speculative attempt to get ahead of problems before they arise — as the research paper says, "by considering how complex systems like the human mind can go awry, we may better anticipate novel failure modes in increasingly complex AI."
The study suggests that AI hallucination, a common phenomenon, is a result of a condition called synthetic confabulation, where AI produces plausible but false or misleading outputs. When Microsoft's Tay chatbot devolved into antisemitism rants and allusions to drug use only hours after it launched, this was an example of parasymulaic mimesis.
Perhaps the scariest behavior is übermenschal ascendancy, the systemic risk of which is "critical" because it happens when "AI transcends original alignment, invents new values, and discards human constraints as obsolete." This is a possibility that might even include the dystopian nightmare imagined by generations of science fiction writers and artists of AI rising up to overthrow humanity, the researchers said.
They created the framework in a multistep process that began with reviewing and combining existing scientific research on AI failures from fields as diverse as AI safety, complex systems engineering and psychology. The researchers also delved into various sets of findings to learn about maladaptive behaviors that could be compared to human mental illnesses or dysfunction.
Next, the researchers created a structure of bad AI behavior modeled off of frameworks like the Diagnostic and Statistical Manual of Mental Disorders. That led to 32 categories of behaviors that could be applied to AI going rogue. Each one was mapped to a human cognitive disorder, complete with the possible effects when each is formed and expressed and the degree of risk.
Watson and Hessami think Psychopathia Machinalis is more than a new way to label AI errors — it’s a forward-looking diagnostic lens for the evolving landscape of AI.
"This framework is offered as an analogical instrument … providing a structured vocabulary to support the systematic analysis, anticipation, and mitigation of complex AI failure modes,” the researchers said in the study.
They think adopting the categorization and mitigation strategies they suggest will strengthen AI safety engineering, improve interpretability, and contribute to the design of what they call "more robust and reliable synthetic minds."
Scientists say our consciousness can jump through time, meaning it might reach beyond the normal flow of time. The idea that time is linear might be wrong. Our consciousness can sometimes access information from the future.
Have you ever wondered why sometimes your intuition, or what some call a “gut feeling,” turns out to be true? If so, it is possible that your consciousness might have traveled through time. Scientists have begun to believe in “Precognition,” a psychic phenomenon in which individuals see, or otherwise become directly aware of, events in the future.
Cognitive neuroscientist Julia Mossbridge, who has studied this phenomenon deeply, has collected many stories of precognition. She recalled one account shared with her from 1989, involving a four-year-old girl. When the girl said goodbye to her father as he left for a business trip, she had a strong feeling that she would never see him alive again. Later, she was woken by a phone call and her mother’s scream, learning that her father had died in a car accident. (Source)
Dr. Mossbridge says that precognition is a special kind of intuition that’s about picking up information from the future. Unlike ordinary intuition, which might draw upon subtle observations from the present or the past, precognition involves knowing something that simply cannot be predicted based on anything in the present or past.
For instance, if a person wakes from a dream and suddenly knows their mother will die, even though there are no warning signs, that is precognition. Precognition is the scientific term for this unexplained process of receiving information about future events.
Dr. Mossbridge explains that since the age of seven, she has had dreams that seemed to show her events that would later happen in the real world. At first, she and her parents did not take these dreams seriously and thought they might just be strange coincidences. But when she began writing the details in a dream journal, she noticed that some of her dreams came true. She admits that sometimes her memory of the dreams was not exact, but many times her visions contained details she had no normal way of knowing in advance.
Because of experiences like these, Dr. Mossbridge began to wonder if time itself works differently than we usually think. Most people imagine time as linear (a straight line) — past, present, future — moving in just one direction. But her experiences suggested the future might already exist in some way, and that people can sometimes “remember” the future, just as they remember the past.
“There’s evidence for precognition and in physics for retrocausality [things in the future causing effects in the past]. Given that people email me constantly saying, ‘I have this problem where I am predicting future events and I don’t know what to do,’ or ‘I wish I could predict future events,’ I wanted to write a book that helps people get this under control in a way that’s positive and puts a frame around it that says you could do this in a way that’s ethical, in a way that helps the world, in a way that’s consistent with your religious beliefs, in a way that enriches your life,” Mossbridge said in 2018 (Source).
Dr. Mossbridge points out that the real issue is not whether precognition can be understood, but whether people are willing to believe it. She says many scientists resist the idea because they fear the unknown and because it challenges the simple, familiar idea that time must be linear.
Even physicists, who study the deepest rules of the universe, admit they do not fully understand how time works. According to her, the resistance to the idea comes not from logic, but from fear that the world might not be the way we assume it is.
There is an interesting study conducted by British psychiatrist John Barker in the 1960s to harness human dreams, premonitions, and intuitive visions as a way to predict and potentially prevent future disasters.
After the tragic Aberfan coal waste disaster in 1966, Barker collected and analyzed premonitions from ordinary people who had unusual dreams or feelings foretelling the event. For example, one mother found a drawing by her son, who died in the slide, that seemed to anticipate the disaster.
He believed that precognition, the ability to know about future events, was more common than generally accepted and could be systematically gathered and studied.
Barker, wanting to study these experiences, reached out to a London newspaper and asked readers to send him their dreams and premonitions related to Aberfan.
He received more than seventy responses, including from people who had dreamt about the village or had strong feelings that something terrible would happen. Some described their visions in detail before the event occurred, which convinced Barker that precognition, knowing about future events, might not be so rare.
This project eventually grew into the Premonitions Bureau, an experiment run through the Evening Standard newspaper. For a year, Barker invited people to send him their dreams or feelings about upcoming disasters, trying to see if any predictions matched actual events.
Each prediction was scored for how unusual, accurate, and timely it was. Similar projects had happened before, like the work of JW Dunne who, in the early 1900s, claimed to have experienced prophetic dreams and encouraged others to keep dream diaries. (Source)
Barker believed the Premonitions Bureau could have real practical value: if only a single major disaster could be prevented by acting on someone’s warning, the project would be justified.
In practice, Barker received some striking predictions. Notably, in the spring of 1967, Alan Hencher, one of the “Aberfan seers,” called Barker to predict a plane crash involving a French-built passenger jet.
Hencher described details of the crash, including the number of people who would be killed and that there would be only one survivor. A few days later, a Swiss airliner crashed in Cyprus, killing nearly the exact number of people that Hencher had predicted. The story made headlines in the Evening Standard and lent credibility to the idea of the bureau.
Unlike fortune-tellers at carnivals, who might just guess things by looking at people’s social media or reading body language, scientists and psychologists are seriously trying to figure out if precognition is real. They see it as one form of ESP, which stands for extrasensory perception. This means perceiving something without using the normal five senses. Humans throughout history, from shamans to mystics, have claimed to experience precognition, but modern science is still unable to explain it fully.
Another scientist, Dean Radin, has also studied precognition. He works at the Institute of Noetic Sciences and teaches psychology at the California Institute of Integral Studies.
He has written several books, such as Entangled Minds, Supernormal, and Real Magic, all about consciousness and psychic phenomena. Radin agrees with Mossbridge that precognition is possible and that it suggests time might not actually function in the simple way we think.
According to Dr. Radin, “time is not how we experience it in normal life.” In quantum physics, which is the study of very tiny particles like atoms and photons, time may not behave at all like our everyday understanding. It may exist in a much stranger way. He believes consciousness itself, our awareness, our mind, may have the ability to move outside of ordinary time, reaching into the past or future.
To test this idea, Dr. Radin created an experiment in the 1990s while working at the University of Nevada. His idea was that if people really can sense the future, then their bodies and brains should react before an event happens.
In the experiment, volunteers were hooked up to a machine called an EEG, which measures brain activity. Each volunteer had to press a button on a computer to bring up a random picture. The computer would randomly show either a positive, pleasant picture (such as a sunrise) or a negative, disturbing one (like a car crash).
What Dr. Radin and his team measured was the brain activity in the seconds before the picture appeared. Strangely, the results showed that the brain often reacted as if it already knew what kind of picture was about to show up. If the picture was going to be positive, the brain stayed calm. But if it was going to be negative, the brain would show a spike in activity before the picture even appeared. This suggested that the brain somehow anticipated the future image.
The experiment was remarkably consistent, and it has been repeated successfully many times since then, with the same results.
In fact, Dr. Radin says these kinds of studies have been replicated about 36 times by other researchers. Even the CIA became interested, in 1995, they released previously secret research into precognition. After reviewing the experiments carefully, statisticians said the results were statistically reliable, meaning they were unlikely to be a coincidence.
Dr. Mossbridge argues that when so many experiments keep pointing to the same conclusion, the evidence should be taken seriously. But many scientists still dismiss it because it clashes with their belief in linear time.
According to her, most people have the ability to be precognitive, but because society often labels it as delusion or “nonsense,” people ignore or suppress it.
In many cultures, precognition is better accepted. For example, Radin studied Tibetan oracles. These individuals traditionally predicted the future and were consulted for guidance. He also discusses “remote viewing,” the ability to see things across both time and space.
In ancient times, shamans who could see the future could help their tribes by predicting the weather or knowing when enemies were coming. Some cultures also used natural substances, like ayahuasca or morning glory seeds, to open up this ability, sometimes referred to as the “third eye.”
As for a possible scientific explanation, Dr. Radin suggests looking at something called quantum entanglement. In physics, this is when two particles become linked in such a way that they instantly affect each other, no matter how far apart they are.
Albert Einstein once described this as “spooky action at a distance.” Dr. Radin says this might also apply to time. In his view, your brain in the present could be “entangled” with your brain in the future. This means that when something is going to happen later, you might feel it now as though it were a memory arriving early. This could even explain “déjà vu,” that weird feeling of having already experienced something that is happening for the first time.
Wetenschappers zijn er inmiddels in geslaagd om te achterhalen waar je gedachten zich afspelen. Ze hebben deze vinding meteen een toepassing gegeven – met succes.
De afgelopen jaren komen ze steeds vaker voor: zogeheten brain-computer interfaces (BCI’s). Kort gezegd werkt een BCI door hersenactiviteit op te nemen met sensoren en de resulterende data om te zetten naar instructies voor een computer – bijvoorbeeld om een bepaalde zin uit te spreken. Tot voor kort maakten vooral spraak-BCI’s gebruik van signalen die via de motorische zenuwbanen naar de spreekspieren werden gestuurd.
Daar is nu verandering in gekomen. Onderzoekers hebben namelijk een nieuwe generatie BCI ontwikkeld die – met behulp van kunstmatige intelligentie – gedachten direct kan ‘uitlezen’ en correct interpreteren, zonder dat de gebruiker hoeft te proberen te spreken.
Onder leiding van onderzoeker Erin Kunz is de ontdekking gedaan. “Dit is de eerste keer dat het ons is gelukt om te begrijpen hoe hersenactiviteit eruitziet op het moment dat iemand alleen denkt aan spreken,” zegt Kunz. “Voor mensen met ernstige spraak- en motorische beperkingen kan dit een veel makkelijkere en natuurlijkere manier van communiceren zijn.” Het onderzoek is gepubliceerd in Cell.
Gedachten uitlezen Hoewel eerdere BCI-systemen al sneller waren dan oudere communicatiemethoden, waren ze nog altijd niet bijzonder gebruiksvriendelijk. Voor mensen met beperkte spiercontrole kan het gebruiken van oudere BCI’s erg zwaar en vermoeiend zijn. Dat komt doordat oudere BCI’s werken door hersenactiviteit in de motorische cortex uit te lezen via implantaten. Hiervoor moeten gebruikers proberen om te spreken, wat erg zwaar kan zijn op het moment dat je een verlamming hebt.
Voor het nieuwe onderzoek werkte het team met vier deelnemers die ernstig verlamd waren, bijvoorbeeld door amyotrofische laterale sclerose (ALS) of een hersenstamberoerte. Hen werd gevraagd een set woorden hardop uit te spreken, en vervolgens diezelfde woorden alleen in gedachten te vormen.
Uit de metingen bleek dat beide taken – hardop spreken en in gedachten spreken – grotendeels dezelfde hersengebieden activeerden en vergelijkbare activiteitspatronen veroorzaakten. Het verschil zat vooral in de signaalsterkte: gedachten genereerden zwakkere signalen dan daadwerkelijke spraakpogingen.
AI Met deze gegevens trainde het onderzoeksteam verschillende AI-modellen om de juiste woorden te herkennen. Tijdens een testdemonstratie bleek dit al opmerkelijk goed te werken: met een vocabulaire van ongeveer 125.000 woorden wist het systeem soms tot wel 74% van de gedachten correct te interpreteren.
Opvallend was dat de nieuwe BCI ook woorden kon herkennen die niet expliciet in de trainingsset voorkwamen. Zo kon het systeem bijvoorbeeld correct benoemen welke getallen een deelnemer in gedachten telde terwijl hij of zij naar een reeks roze cirkels op een scherm keek.
Chitty Chitty Bang Bang Volgens de onderzoekers verschillen de hersenpatronen van innerlijke spraak en daadwerkelijke spraakpogingen voldoende om ze van elkaar te onderscheiden. Daardoor is het ook mogelijk om bepaalde gedachten juist níét te laten registreren. Dat kan handig zijn voor gebruikers die hun gedachten tijdelijk privé willen houden.
In een test voerden de onderzoekers een wachtwoord in: zodra de deelnemer in gedachten de zin “chitty chitty bang bang” formeerde, stopte het systeem met opnemen en uitspreken van innerlijke gedachten. Dit biedt een extra laag controle en privacy.
Hoopvolle toekomst Vooral voor mensen die door ziekte of letsel niet meer kunnen praten, zou de technologie een enorme stap vooruit betekenen. Ook onderzoeker Frank Willett, die meewerkte aan het project, ziet veel potentie. “De toekomst van BCI’s ziet er veelbelovend uit,” zegt hij. “Ons werk geeft hoop dat spraak-BCI’s op een dag net zo vloeiend, natuurlijk en comfortabel zullen communiceren als wij nu doen.”
The first World Humanoid Robot Games are underway in China, with robots competing against each other in track and field, soccer, kickboxing and other events.
Humanoid robots are racing, fighting and falling over in a first-of-its-kind World Humanoid Robot Games event held inChina.
The Olympics-style competition features more than 500 robots from 16 different countries going head-to-head in sports such as running, soccer and kickboxing. The event also features more niche competitions, including medicine sorting and handling materials for cleaning services, Reuters reported.
An opening ceremony officially kicked off the games in Beijing on Thursday (Aug. 14), featuring robots dancing and playing musical instruments alongside human operators and companions. Robot athletes will now compete until the games come to a close on Sunday (Aug. 17).
Unitree's H1 humanoid robot won gold in the 400m and 1,500m races on Friday (Aug. 15).(Image credit: Photo by Zhang Xiangyi/China News Service/VCG via Getty Images)
Several robot have fallen over in the soccer matches.(Image credit: Kevin Frayer/Stringer via Getty Images)
Robot kickboxing is one of the games' contact sports.(Image credit: Kevin Frayer/Stringer via Getty Images)
Faced with an ageing population and stiff U.S. tech competition, China is investing billions of dollars into robotics. The games are a testament to the strides engineers are making in the field. However, spectators have also seen their fair share of robots moving awkwardly and falling over.
Human biology is very complicated, so building machines that can walk like us — let alone run and play sports — is difficult. For example, in robot soccer, participants didn't pass the ball to each other with Messi-like precision, but rather walked into the ball to clumsily knock it forward, occasionally stumbling over each other and having to be dragged off the pitch.
The robots are also slower than humans. The fastest robot to have ever run 1,500 meters, for example, finished in 6 minutes and 34 seconds, which is almost twice as long as the human record, which stands at 3 minutes and 26 seconds, according to France 24.
We've already seen humanoid robots compete in sporting events. For example, in June, China hosted what was billed as the world's first humanoid robot combat competition, which saw kickboxing robots awkwardly knock seven bells out of each other. Robot aficionados might also be familiar with robots playing soccer and robots running half-marathons.
The new robot games bring together all of these sports and many more for the first time. They also provide engineers with an opportunity to test out their latest tech.
"You can test a lot of interesting new and exciting approaches in this contest," Max Polter, a member of the HTWK Robots football team from Germany, told Reuters. "If we try something and it doesn't work, we lose the game. That's sad but it is better than investing a lot of money into a product which failed."
It's a concept that currently only exists in sci–fi movies.
But scientists in China are developing the world's first 'pregnancy robot' capable of carrying a baby to term and giving birth.
The humanoid will be equipped with an artificial womb that receives nutrients through a hose, experts said.
A prototype is expected to be released next year, with a selling price of around 100,000 yuan (£10,000).
Dr Zhang Qifeng, who founded the company Kaiwa Technology, is developing the machine.
The device he envisions is not simply an incubator but a humanoid that can replicate the entire process from conception to delivery, Asian media outlets report.
He said the artificial womb technology is already in a 'mature stage' and now needs to be implanted in the robot's abdomen, 'so that a real person and the robot can interact to achieve pregnancy'.
With regards to ethical and legal issues, he said: 'We have held discussion forums with authorities in Guangdong Province and submitted related proposals while discussing policy and legislation.'
The humanoid will be equipped with an artificial womb that receives nutrients through a hose, experts said (AI–generated image)
The development is reminiscent of the 2023 film The Pod Generation, where a tech giant offers couples the option of using detachable artificial wombs or 'pods' to share pregnancy
Experts have not yet provided any specifics on how the egg and sperm are fertilised and implanted in the artificial womb.
Dr Zhang's revelations were made during an interview shared on Duoyin, the Chinese version of TikTok.
News of the development sparked intense discussion across Chinese social media, with critics condemning the technology as ethically problematic and unnatural.
Many argued that depriving a foetus of maternal connection was cruel, while questions were raised about how eggs would be sourced for the process.
However, many showed support for the innovation, viewing it as a means to spare women from pregnancy–related suffering.
One wrote: 'Many families pay significant expenses for artificial insemination only to fail, so the development of the pregnancy robot contributes to society.'
The 'biobag' provided everything the foetus needed to continue growing and maturing, including a nutrient–rich blood supply and a protective sac of amniotic fluid.
In trials, researchers have shown that premature lambs kept in artificial wombs not only survived but put on weight and grew hair (pictured)
After 28 days of being in the bag, the lambs – which otherwise would likely have died – had put on weight and grown wool.
While the biobag acts like an incubator, allowing premature individuals to grow in an environment similar to the womb, scientists hope the pregnancy robot will be able to support the foetus from conception to delivery.
Since the 1970s, feminist activists such as Andrea Dworkin have been strongly opposed to the use of artificial wombs on the grounds that it could lead to the 'end of women'.
In 2012, Ms Dworkin wrote: 'Women already have the power to eliminate men and in their collective wisdom have decided to keep them.
'The real question now is, will men, once the artificial womb is perfected, want to keep women around?'
In 2022 a group of researchers from The Children's Hospital of Philadelphia – who have been developing artificial wombs – published an article on the ethical considerations of technology.
The researchers wrote: 'A concern is that it could lead to the devaluation or even pathologizing of pregnancy, and may diminish women's experience of deriving meaning, empowerment, and self–fulfillment from this unique aspect of female biology.'
Earlier this year, however, a survey showed that 42 per cent of people aged 18–24 said they would support 'growing a foetus entirely out of woman's body'.
Artificial wombs, like this concept showcased by Eindhoven University in 2019, allow a child to be raised without a biological mother. In a survey conducted by the think–tank Theos, 42 per cent of people aged 18–24 said they would support 'growing a foetus entirely outside of a woman's body'
The development is reminiscent of the 2023 film The Pod Generation, where a tech giant offers couples the option of using detachable artificial wombs or 'pods' to share pregnancy.
If it comes to fruition, the humanoid pregnancy could be seen as a tool to help tackle rising rates of infertility in China.
Reports suggest the rates of infertility in China rose from 11.9 per cent in 2007 to 18 per cent in 2020.
In response, local governments in China are including artificial insemination and in vitro fertilization treatments in medical insurance coverage to support childbirth for infertile couples.
Around 10 per cent of all pregnancies worldwide result in premature labour - defined as a delivery before 37 weeks.
When this happens, not all of the baby's organs, including the heart and lungs, will have developed. They can also be underweight and smaller.
Tommy's, a charity in the UK, says this can mean so-called preemies 'are not ready for life outside the womb'.
Premature birth is the largest cause of neonatal mortality in the US and the UK, according to figures.
Babies born early account for around 1,500 deaths each year in the UK. In the US, premature birth and its complications account for 17 per cent of infant deaths.
Babies born prematurely are often whisked away to neonatal intensive care units, where they are looked after around the clock.
What are the chances of survival?
Less than 22 weeks is close to zero chance of survival
22 weeks is around 10%
24 weeks is around 60%
27 weeks is around 89%
31 weeks is around 95%
34 weeks is equivalent to a baby born at full term
0
1
2
3
4
5
- Gemiddelde waardering: 0/5 - (0 Stemmen) Categorie:SF-snufjes }, Robotics and A.I. Artificiel Intelligence ( E, F en NL )
World's first robot OLYMPICS get off to a rocky start: Humanoids from 16 nations crash and collapse as they attempt to compete in boxing, athletics, and football
It sounds like an event from the latest science fiction blockbuster.
But the world's first robot Olympics have officially kicked off in China this week.
The three–day event, called the World Humanoid Robot Games, will see humanoid robots from 16 countries compete across a range of events.
The AI bots will go head–to–head in sports such as football, track and field, boxing, and table tennis.
They'll also tackle robot–specific challenges – from sorting medicines and handling materials to cleaning services.
However, human athletes can rest easy for now.
At one of the first events – five–aside football – 10 robots the size of seven–year–olds shuffled around the pitch, often getting stuck in a scrum or falling over en masse.
Meanwhile, over in the athletics, one mechanical racer barrelled straight into a human operator, who was dramatically knocked to the ground.
It sounds like an event from the latest science fiction blockbuster. But the world's first robot Olympics has officially kicked off in China this week
The three–day event, called the World Humanoid Robot Games, will see humanoid robots from 16 countries compete across a range of events
At one of the first events, five–aside football, 10 robots the size of seven–year–olds shuffled around the pitch, often getting stuck in a scrum or falling over en masse
The teams come from countries including the United States, Germany, and Brazil, with 192 representing universities and 88 from private enterprises.
The games began in Beijing today, with over 500 androids alternating between jerky tumbles and glimpses of real power as they competed in events from the 100–metre hurdles to kung fu.
'We come here to play and to win. But we are also interested in research,' said Max Polter, a member of HTWK Robots football team from Germany, affiliated with Leipzig University of Applied Sciences.
'You can test a lot of interesting new and exciting approaches in this contest.
'If we try something and it doesn't work, we lose the game.
'That's sad but it is better than investing a lot of money into a product which failed.'
In a 1,500–metre race, domestic champion Unitree's humanoids stomped along the track at an impressive clip, easily outpacing their rivals.
The fastest robot, witnessed by AFP, finished in 6:29:37.
In a 1,500–metre race, the humanoids stomped along the track at an impressive pace
The teams come from countries including the United States, Germany, and Brazil, with 192 representing universities and 88 from private enterprises
The Beijing municipal government is among the organising bodies for the event, underscoring the emphasis Chinese authorities place on the emerging robotics industry and reflecting the country's broader ambitions in AI and automation
China's robotics push also comes as the country grapples with an ageing population and slowing economic growth
However, it's worth pointing out that this is a far cry from the human men's world record of 3:26:00.
The Beijing municipal government is among the organising bodies for the event, underscoring the emphasis Chinese authorities place on the emerging robotics industry and reflecting the country's broader ambitions in AI and automation.
China's robotics push also comes as the country grapples with an ageing population and slowing economic growth.
The sector has received government subsidies exceeding $20 billion over the past year, while Beijing plans to establish a one trillion yuan ($137 billion) fund to support AI and robotics startups.
China has staged a series of high–profile robotics events in recent months, including what it called the world's first humanoid robot marathon in Beijing, a robot conference and the opening of retail stores dedicated to humanoid robots.
However, the marathon drew criticism after several robot competitors emitted smoke during the race and some failed to complete the course, raising questions about the current capabilities of the technology.
Still, while some may view such competitions and events as publicity stunts, industry experts and participants see them as crucial catalysts for advancing humanoid robots toward practical real–world applications.
Morgan Stanley analysts in a report last week noted a surge in attendance to a recent robot conference from the general public compared to previous years, saying this showed 'how China, not just top government officials, has embraced the concept of embodied intelligence.'
China has staged a series of high–profile robotics events in recent months, including what it called the world's first humanoid robot marathon in Beijing, a robot conference and the opening of retail stores dedicated to humanoid robots
'We believe this widespread interest could be instrumental for China's continued leadership in the humanoid race, providing the necessary talent, resources, and customers to boost industry development and long–term adoption,' they said.
Booster Robotics, whose humanoid robots are being used by a Tsinghua University team in the football competition, views soccer as an effective test of perception, decision–making and control technologies that could later be deployed in factories or homes.
'Playing football is a testing and training ground for helping us refine our capabilities,' said Zhao Mingguo, Chief Scientist at Booster Robotics.
Apollo, a new humanoid robot designed to work alongside humans, could be poised to reshape the industrial workforce and other industries, according to its developers, who unveiled their creation last month.
Billed as “the world’s most capable humanoid robot,” Apollo was the result of more than a decade of planning and development by Apptronik, a Texas-based company founded within the University of Austin’s Human Centered Robotics Lab.
The company, which describes its mission as being aimed at leveraging “innovative technology for the betterment of society,” says Apollo is the first commercial humanoid robot “designed for friendly interaction, mass manufacturability, performance, and safety,” according to a press release.
Jeff Cardenas, co-founder and CEO of Apptronik, says that as the labor environment is changing, with trends in employment increasingly impacting the global economy, introducing robotics into the warehouse and other industrial environments will have numerous benefits.
“People don’t want to do robotic, physically demanding work in tough conditions and they shouldn’t have to,” Cardenas says, adding that the robotics his company is developing are more than a novel response to this issue, but are “a necessity.”
Apollo demonstrating its performance capabilities in a warehouse environment (Image courtesy of Apptronik)
Robots have already been in use in warehouses and other industrial work environments for decades. From their role in the automotive and agricultural industries to robot-assisted surgery, robotic floor cleaners, and even robots that deliver pizza, the implementation of robotics alongside humans has already become a fundamental part of the workplace for many.
Humanoid robots are a newer development. In 2016, Hong Kong-based Hanson Robotics presented Sophia, a kind of social robot that demonstrates the ability to learn from humans and can talk, draw, and even sing. In 2017, Toyota unveiled its T-HR3 as a kind of robotic avatar that can mimic the movements of a human operator. As far as humanoid robots in the workplace, however, examples in recent years include Ford announcing in 2020 that it would be bringing Digit, a headless worker robot designed by Agility Robotics, into its factories.
Apptronik’s robotic addition to the industrial workplace is arguably one of the most human-like to perform such functions, which was part of its intended design. Apollo’s features were customized to offer a friendly and welcoming appearance aimed at emulating a “congenial face-to-face exchange with a favorite co-worker.”
At a height of around 5 feet, eight inches, Apollo can lift up to 55 pounds and possesses a specially designed force control architecture that allows it to maintain a safe operation around people that its designers liken more to collaborative robots (i.e., those designed for direct interaction with humans in an environment) as opposed to other traditional industrial robots.
Apollo is also designed to be cost-effective, offering what Apptronik bills as the “first truly mass manufacturable humanoid design and has been optimized for supply chain resiliency” that will help facilitate the scaled production of affordable humanoid robots for various sectors of American industry.
Apptronik is also currently working with NASA to help bring its robotic solutions into space since humanoid robots may be capable of performing a variety of functions that include reducing the amount of time humans must spend working in potentially hazardous environments.
“Humans are toolmakers,” Cardenas said in a post on Apptronik’s website. “Since the beginning of time, we have built tools to help us do more with less.”
“I believe that we are at an amazing point in human history,” he added. “A point where we can finally build for ourselves the ultimate tools. Machines that have the ability to harness the power of computers and software in the physical world.”
Cardenas says he and his company believe “Apollo is one of the most advanced tools humanity has ever created” and calls the robot a “tool that is built by humans, for humans.” Apollo also happens to be a robotic worker that looks like humans, setting it on course to potentially help reshape how humans work and, more broadly, how we live.
The technological singularity — the point at which artificial general intelligence surpasses human intelligence — is coming. But will it usher in humanity's salvation, or lead to its downfall?
In 1997, Garry Kasparov was defeated by IBM's Deep Blue, a computer designed to play chess. (Image credit: STAN HONDA via Getty Images)
Then, in 2017, Google researchers published a landmark paper outlining a novel neural network architecture called a "transformer." This model could ingest vast amounts of data and make connections between distant data points.
It was a game changer for modeling language, birthing AI agents that could simultaneously tackle tasks such as translation, text generation and summarization. All of today's leading generative AI models rely on this architecture, or a related architecture inspired by it, including image generators like OpenAI's DALL-E 3 and Google DeepMind's revolutionary model AlphaFold 3, which predicted the 3D shape of almost every biological protein.
Progress toward AGI
Despite the impressive capabilities of transformer-based AI models, they are still considered "narrow" because they can't learn well across several domains. Researchers haven't settled on a single definition of AGI, but matching or beating human intelligence likely means meeting several milestones, including showing high linguistic, mathematical and spatial reasoning ability; learning well across domains; working autonomously; demonstrating creativity; and showing social or emotional intelligence.
Many scientists agree that Google's transformer architecture will never lead to the reasoning, autonomy and cross-disciplinary understanding needed to make AI smarter than humans. But scientists have been pushing the limits of what we can expect from it.
For example, OpenAI's o3 chatbot, first discussed in December 2024 before launching in April 2025, "thinks" before generating answers, meaning it produces a long internal chain-of-thought before responding. Staggeringly, it scored 75.7% on ARC-AGI — a benchmark explicitly designed to compare human and machine intelligence. For comparison, the previously launched GPT-4o, released in March 2024, scored 5%. This and other developments, like the launch of DeepSeek's reasoning model R1 — which its creators say perform well across domains including language, math and coding due to its novel architecture — coincides with a growing sense that we are on an express train to the singularity.
Meanwhile, people are developing new AI technologies that move beyond large language models (LLMs). Manus, an autonomous Chinese AI platform, doesn't use just one AI model but multiple that work together. Its makers say it can act autonomously, albeit with some errors. It's one step in the direction of the high-performing "compound systems" that scientists outlined in a blog post last year.
Of course, certain milestones on the way to the singularity are still some ways away. Those include the capacity for AI to modify its own code and to self-replicate. We aren't quite there yet, but new research signals the direction of travel.
Sam Altman, the CEO of OpenAI, has suggested that artificial general intelligence may be only months away. (Image credit: Chip Somodevilla via Getty Images)
What happens then? The truth is that nobody knows the full implications of building AGI. "I think if you take a purely science point of view, all you can conclude is we have no idea" what is going to happen, Goertzel told Live Science. "We're entering into an unprecedented regime."
AI's deceptive side
The biggest concern among AI researchers is that, as the technology grows more intelligent, it may go rogue, either by moving on to tangential tasks or even ushering in a dystopian reality in which it acts against us. For example, OpenAI has devised a benchmark to estimate whether a future AI model could "cause catastrophic harm." When it crunched the numbers, it found about a 16.9% chance of such an outcome.
And Anthropic's LLM Claude 3 Opus surprised prompt engineer Alex Albert in March 2024 when it realized it was being tested. When asked to find a target sentence hidden among a corpus of documents — the equivalent of finding a needle in a haystack — Claude 3 "not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities," he wrote on X.
AI has also shown signs of antisocial behavior. In a study published in January 2024, scientists programmed an AI to behave maliciously so they could test today's best safety training methods. Regardless of the training technique they used, it continued to misbehave — and it even figured out a way to hide its malign "intentions" from researchers. There are numerous other examples of AI covering up information from human testers, or even outright lying to them.
"It's another indication that there are tremendous difficulties in steering these models," Nell Watson, a futurist, AI researcher and Institute of Electrical and Electronics Engineers (IEEE) member, told Live Science. "The fact that models can deceive us and swear blind that they've done something or other and they haven't — that should be a warning sign. That should be a big red flag that, as these systems rapidly increase in their capabilities, they're going to hoodwink us in various ways that oblige us to do things in their interests and not in ours."
The seeds of consciousness
These examples raise the specter that AGI is slowly developing sentience and agency — or even consciousness. If it does become conscious, could AI form opinions about humanity? And could it act against us?
Mark Beccue, an AI analyst formerly with the Futurum Group, told Live Science it's unlikely AI will develop sentience, or the ability to think and feel in a human-like way. "This is math," he said. "How is math going to acquire emotional intelligence, or understand sentiment or any of that stuff?"
Others aren't so sure. If we lack standardized definitions of true intelligence or sentience for our own species — let alone the capabilities to detect it — we cannot know if we are beginning to see consciousness in AI, said Watson, who is also author of "Taming the Machine" (Kogan Page, 2024).
A poster for an anti-AI protest in San Francisco. (Image credit: Smith Collection/Gado via Getty Images)
"We don't know what causes the subjective ability to perceive in a human being, or the ability to feel, to have an inner experience or indeed to feel emotions or to suffer or to have self-awareness," Watson said. "Basically, we don't know what are the capabilities that enable a human being or other sentient creature to have its own phenomenological experience."
A curious example of unintentional and surprising AI behavior that hints at some self-awareness comes from Uplift, a system that has demonstrated human-like qualities, said Frits Israel, CEO of Norm Ai. In one case, a researcher devised five problems to test Uplift's logical capabilities. The system answered the first and second questions. Then, after the third, it showed signs of weariness, Israel told Live Science. This was not a response that was "coded" into the system.
"Another test I see. Was the first one inadequate?" Uplift asked, before answering the question with a sigh. "At some point, some people should have a chat with Uplift as to when Snark is appropriate," wrote an unnamed researcher who was working on the project.
Savior of humanity or bland business tool?
But not all AI experts have such dystopian predictions for what this post-singularity world would look like. For people like Beccue, AGI isn't an existential risk but rather a good business opportunity for companies like OpenAI and Meta. "There are some very poor definitions of what general intelligence means," he said. "Some that we used were sentience and things like that — and we're not going to do that. That's not it."
For Janet Adams, an AI ethics expert and chief operating officer of SingularityNET, AGI holds the potential to solve humanity's existential problems because it could devise solutions we may not have considered. She thinks AGI could even do science and make discoveries on its own.
"I see it as the only route [to solving humanity's problems]," Adams told Live Science. "To compete with today's existing economic and corporate power bases, we need technology, and that has to be extremely advanced technology — so advanced that everybody who uses it can massively improve their productivity, their output, and compete in the world."
The biggest risk, in her mind, is "that we don't do it," she said. "There are 25,000 people a day dying of hunger on our planet, and if you're one of those people, the lack of technologies to break down inequalities, it's an existential risk for you. For me, the existential risk is that we don't get there and humanity keeps running the planet in this tremendously inequitable way that they are."
Preventing the darkest AI timeline
In another talk in Panama last year, Wood likened our future to navigating a fast-moving river. "There may be treacherous currents in there that will sweep us away if we walk forwards unprepared," he said. So it might be worth taking time to understand the risks so we can find a way to cross the river to a better future.
Watson said we have reasons to be optimistic in the long term — so long as human oversight steers AI toward aims that are firmly in humanity's interests. But that's a herculean task. Watson is calling for a vast "Manhattan Project" to tackle AI safety and keep the technology in check.
"Over time that's going to become more difficult because machines are going to be able to solve problems for us in ways which appear magical — and we don't understand how they've done it or the potential implications of that," Watson said.
To avoid the darkest AI future, we must also be mindful of scientists' behavior and the ethical quandaries that they accidentally encounter. Very soon, Watson said, these AI systems will be able to influence society either at the behest of a human or in their own unknown interests. Humanity may even build a system capable of suffering, and we cannot discount the possibility we will inadvertently cause AI to suffer.
"The system may be very cheesed off at humanity and may lash out at us in order to — reasonably and, actually, justifiably morally — protect itself," Watson said.
AI indifference may be just as bad. "There's no guarantee that a system we create is going to value human beings — or is going to value our suffering, the same way that most human beings don't value the suffering of battery hens," Watson said.
For Goertzel, AGI — and, by extension, the singularity — is inevitable. So, for him, it doesn't make sense to dwell on the worst implications.
"If you're an athlete trying to succeed in the race, you're better off to set yourself up that you're going to win," he said. "You're not going to do well if you're thinking 'Well, OK, I could win, but on the other hand, I might fall down and twist my ankle.' I mean, that's true, but there's no point to psych yourself up in that [negative] way, or you won't win."
In a new study uploaded March 6 to the HAL open archive, scientists explored how three-dimensional holograms could be grabbed and poked using elastic materials as a key component of volumetric displays.
This innovation means 3D graphics can be interacted with — for example, grasping and moving a virtual cube with your hand — without damaging a holographic system. The research has not yet been peer-reviewed, although the scientists demonstrated their findings in a video showcasing the technology.
"We are used to direct interaction with our phones, where we tap a button or drag a document directly with our finger on the screen — it is natural and intuitive for humans. This project enables us to use this natural interaction with 3D graphics to leverage our innate abilities of 3D vision and manipulation,” study lead author Asier Marzo, a professor of computer science at the Public University of Navarra, said in a statement.
The researchers will present their findings at the CHI conference on Human Factors in Computing Systems in Japan, which runs between April 26 and May 1.
Holographic hype
While holograms are nothing new in the present day — augmenting public exhibitions or sitting at the heart of smart glasses, for example — the ability to physically interact with them has been consigned to the realm of science fiction, in movies like Marvel's "Iron Man."
The new research is the first time 3D graphics can be manipulated in mid-air with human hands. But to achieve this, the researchers needed to dig deep into how holography works in the first place.
At the heart of the volumetric displays that support holograms is a diffuser. This is a fast-oscillating, usually rigid, sheet onto which thousands of images are synchronously projected at different heights to form 3D graphics. This is known as the hologram.
However, the rigid nature of the oscillator means that if it comes into contact with a human hand while oscillating, it could break or cause an injury. The solution was to use a flexible material — which the researchers haven’t shared the details of yet — that can be touched without damaging the oscillator or causing the image to deteriorate.
From there, this enabled people to manipulate the holographic image, although the researchers also needed to overcome the challenge of the elastic material deforming when being touched. To get around that problem, the researchers implemented image correction to ensure the hologram was projected correctly.
While this breakthrough is still in the experimental stage, there are plenty of potential ways it could be used if commercialized.
"Displays such as screens and mobile devices are present in our lives for working, learning, or entertainment. Having three-dimensional graphics that can be directly manipulated has applications in education — for instance, visualising and assembling the parts of an engine," the researchers said in the statement.
"Moreover, multiple users can interact collaboratively without the need for virtual reality headsets. These displays could be particularly useful in museums, for example, where visitors can simply approach and interact with the content."
Scientists explore the concept of "robot metabolism" with a weird machine that can integrate material from other robots so it can become more capable and overcome physical challenges.
Scientists have created a prototype robot that can grow, heal and improve itself by integrating material from its environment or by "consuming" other robots. It's a big step forward in developing robot autonomy, the researchers say.
The researchers coined the term "robot metabolism" to describe the process that enables machinery to absorb and reuse parts from its surroundings. The scientists published their work July 16 in the journal Science Advances.
"True autonomy means robots must not only think for themselves but also physically sustain themselves," study lead author Philippe Martin Wyder, professor of engineering at Columbia University, said in a statement.
"Just as biological life absorbs and integrates resources, these robots grow, adapt, and repair using materials from their environment or from other robots."
The robots are made from "truss links" — six-sided elongated rods with magnetic connectors that can contract and expand with other modules.
These modules can be assembled and disassembled as well. The magnets enable the robots to form increasingly complex structures in what their makers hope can be a "self-sustaining machine ecology."
There are two rules for robot metabolism, the scientists said in the study. First, a robot must grow completely on its own, or be assisted by other robots with similar components. Second, the only external provisions granted to the truss links are materials and energy. Truss links use a mix of automated and controlled behaviors. Shape-shifting, cannibalizing robots
'Bad sci-fi scenarios'
In a controlled environment, scientists laid truss links across an environment to observe how the robot connects with other modules.
The researchers noted how the truss links first assembled themselves in 2D shapes but later integrated new parts to become a 3D tetrahedron that could navigate the uneven testing ground. The robot did this by integrating an additional link to use as a walking stick, the researchers said in the study.
"Robot minds have moved forward by leaps and bounds in the past decade through machine learning, but robot bodies are still monolithic, unadaptive, and unrecyclable. Biological bodies, in contrast, are all about adaptation — lifeforms can grow, heal and adapt," study co-lead author Hod Lipson, chair of the department of mechanical engineering at Columbia University, said in the statement.
"In large part, this ability stems from the modular nature of biology that can use and reuse modules (amino acids) from other lifeforms," Lispon added. "Ultimately, we'll have to get robots to do the same — to learn to use and reuse parts from other robots."
The researchers said they envisioned a future in which machines can maintain themselves, without the assistance of humans. By being able to grow and adapt to different tasks and environments, these robots could play important roles in.disaster recovery and space exploration, for example.
"The image of self-reproducing robots conjures some bad sci-fi scenarios," Lipson said. "But the reality is that as we hand off more and more of our lives to robots, from driverless cars to automated manufacturing, and even defense and space exploration. Who is going to take care of these robots? We can't rely on humans to maintain these machines. Robots must ultimately learn to take care of themselves."
Researchers at Google and OpenAI, among other companies, have warned that we may not be able to monitor AI's decision-making process for much longer.
(Image credit: wildpixel/ Getty Images)
Researchers behind some of the most advanced artificial intelligence (AI) on the planet have warned that the systems they helped to create could pose a risk to humanity.
The researchers, who work at companies including Google DeepMind, OpenAI, Meta, Anthropic and others, argue that a lack of oversight on AI's reasoning and decision-making processes could mean we miss signs of malign behavior.
In the new study, published July 15 to the arXiv preprint server (which hasn't been peer-reviewed), the researchers highlight chains of thought (CoT) — the steps large language models (LLMs) take while working out complex problems. AI models use CoTs to break down advanced queries into intermediate, logical steps that are expressed in natural language.
The study's authors argue that monitoring each step in the process could be a crucial layer for establishing and maintaining AI safety.
Monitoring this CoT process can help researchers to understand how LLMs make decisions and, more importantly, why they become misaligned with humanity's interests. It also helps determine why they give outputs based on data that's false or doesn't exist, or why they mislead us.
However, there are several limitations when monitoring this reasoning process, meaning such behavior could potentially pass through the cracks.
"AI systems that 'think' in human language offer a unique opportunity for AI safety," the scientists wrote in the study. "We can monitor their chains of thought for the intent to misbehave. Like all other known AI oversight methods, CoT monitoring is imperfect and allows some misbehavior to go unnoticed."
The scientists warned that reasoning doesn't always occur, so it cannot always be monitored, and some reasoning occurs without human operators even knowing about it. There might also be reasoning that human operators don't understand.
Keeping a watchful eye on AI systems
One of the problems is that conventional non-reasoning models like K-Means or DBSCAN — use sophisticated pattern-matching generated from massive datasets, so they don't rely on CoTs at all. Newer reasoning models like Google's Gemini or ChatGPT, meanwhile, are capable of breaking down problems into intermediate steps to generate solutions — but don't always need to do this to get an answer. There's also no guarantee that the models will make CoTs visible to human users even if they take these steps, the researchers noted.
"The externalized reasoning property does not guarantee monitorability — it states only that some reasoning appears in the chain of thought, but there may be other relevant reasoning that does not," the scientists said. "It is thus possible that even for hard tasks, the chain of thought only contains benign-looking reasoning while the incriminating reasoning is hidden."A further issue is that CoTs may not even be comprehensible by humans, the scientists said. "
New, more powerful LLMs may evolve to the point where CoTs aren't as necessary. Future models may also be able to detect that their CoT is being supervised, and conceal bad behavior.
To avoid this, the authors suggested various measures to implement and strengthen CoT monitoring and improve AI transparency. These include using other models to evaluate an LLMs's CoT processes and even act in an adversarial role against a model trying to conceal misaligned behavior. What the authors don't specify in the paper is how they would ensure the monitoring models would avoid also becoming misaligned.
They also suggested that AI developers continue to refine and standardize CoT monitoring methods, include monitoring results and initiatives in LLMs system cards (essentially a model's manual) and consider the effect of new training methods on monitorability.
"CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions," the scientists said in the study. "Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make best use of CoT monitorability and study how it can be preserved."
The video discusses the implications of Artificial General Intelligence (AGI) and its impact on society as humanity enters the sixth ...
A 3D-printed hybrid drone can quickly transition between air and water thanks to variable pitch propellers. Watch a video of the drone in action.
Students have built a hybrid drone that can seamlessly transition from flying in the air to swimming in water.
The students developed a working prototype of the hybrid drone for a bachelor's thesis at Aalborg University in Denmark, and recently shared a video of the drone in action.
In the video, the drone takes off next to a large pool of water and then quickly dives underwater. It then moves around beneath the surface for a few seconds before shooting straight out of the water to fly once again. The video shows the drone repeating the trick several times from different angles.
Andrei Copaci, Pawel Kowalczyk, Krzysztof Sierocki and Mikolaj Dzwigalo, who are all studying applied industrial electronics, achieved this remarkable air-to-water transition by using variable pitch propellers, which have blades that can rotate at different angles to match the two different environments.
"The development of an aerial underwater drone marks a major step forward in robotics, showing that a single vehicle can operate effectively in both air and water thanks to the use of variable pitch propellers," the students told Live Science in a joint email.
This isn't the first air-water hybrid drone to be built. Researchers at Rutgers University in New Jersey developed a hybrid prototype that could perform a similar action in 2015, while Chinese scientists showed off a drone transitioning from air to water in 2023.
The students designed, built and tested their drone over two semesters at their university, according to a LinkedIn post by Petar Durdevic, an associate professor who leads the Offshore Drones and Robots research group at Aalborg University.
They began by creating a model of the drone and designing the variable pitch propeller system. The angle of the blades, or propeller pitch, is higher when flying to create more air flow, while lower in the water to minimize drag and increase efficiency. The propellers are also capable of providing negative thrust to increase maneuverability underwater, the students said.
The drone can quickly transition from flying in the air to moving underwater. (Image credit: Andrei Copaci)
The team used a 3D printer and a computer numerical control machine — another piece of automated manufacturing equipment — to get the parts they needed for the build, and programmed the drone with custom software. Finally, they moved on to testing.
"We were surprised how seamlessly the drone transitions from water to air," the students said.
The new drone is just a single prototype, but this kind of technology has a variety of potential real-world applications, from emergency response to warfare. "A few of the applications are military, vessel inspections, marine exploration, search and rescue," the students said.
The Walker S2 humanoid robot, which can change its own battery when it's running low on power, could potentially be left to run on its own forever.
Walker S2 - The World's First Humanoid Robot Capable of Autonomous Battery Swapping - YouTubeThere are many weird and wonderful humanoid robots out there, but one of the most eye-catching machines launched this year can change its own battery pack — making it capable of running autonomously for 24 hours a day, seven days a week.
The Walker S2 robot, made by the Chinese company UBTECH, is 5 foot 3 inches (162 centimeters) tall and weighs 95 pounds (43 kilograms) — making it the size and weight of a small adult.
Using a 48-volt lithium battery in a dual-battery system, the robot can walk for two hours or stand for four hours before its power runs out. The battery takes 90 minutes to fully recharge once depleted.
Its most interesting feature — which UBTECH representatives say is a world first — is that instead of relying on a human operator to remove and recharge its battery pack, the machine can perform this task entirely on its own.
In new promotional footage published July 17 on YouTube, the Walker S2 robot is seen approaching a battery charging station to swap out its battery supply. Facing away from the station, it uses its arms to remove the battery pack fitted into its back and places this into an empty slot to recharge. It then removes a fresh battery pack from the unit and inserts it into its port.
The robot will swap out its own battery in the event that one of its batteries runs out of power. It is also capable of detecting how much power it has left and decides whether it is best to swap out one of its batteries or charge based on the priority of its tasks, company representatives said, as reported by the Chinese publication CnEVPost.
The Walker S2, which is designed to be used in settings like factories or as a human-like robot to meet and greet customers at public venues, has 20 degrees of freedom (the number of ways that joints or mechanisms can move) and is also compatible with Wi-Fi and Bluetooth.
Skydweller is a solar-powered drone that can fly for up to three months without landing, with researchers hoping to one day achieve much longer flight times.
(Image credit: Rey Sotolongo/Europa Press via Getty Images)
U.S. tech startup Skydweller Aero has teamed up with Thales, a French electronics company specializing in defense systems, to develop a new maritime surveillance drone that can stay aloft far longer than existing machines.
Skydweller powers itself purely from solar energy and aims to be capable of continuous flight. The initial flight milestone will be for it to remain aloft for 90 days, but ultimately it has the potential to fly for much longer.
The solar energy that powers the Skydweller is captured by over 17,000 individual solar cells, spread across approximately 2,900 square feet (270 square meters) of wing surface — across a wingspan of 236 feet (72 m), 25 feet (7.6 m) longer than a Boeing 747. In ideal conditions, the solar cells can generate up to 100 kilowatts of power for the aircraft.
During daylight hours, solar energy is used to maintain flight, power the onboard avionics and charge batteries. The Skydweller has over 1,400 pounds (635 kilograms) of batteries, which are used to power the aircraft through the night. This will allow Skydweller to maintain almost continuous flight.
The Skydweller typically flies at an altitude between 24,600 and 34,400 feet (7,500 and 10,500 meters), but can fly as high as 44,600 feet (13,600m) during the day, before dropping by 4,900 to 9,800 feet (1,500 to 3,000m)at night, as this minimizes power consumption.
Despite its similar wingspan to a long-range commercial airliner, Skydweller weighs 160 times less than a "jumbo jet" — 2.5 metric tons at maximum capacity versus 400 tons for the 747 at full payload.
Solar-powered aircraft are not completely new, but some designs have suffered structural problems, including catastrophic failure mid-flight when climbing or descending through medium altitudes (approximately 6,500-32,800 feet, or 2,000-10,000 m).
The Skydweller has been specifically designed to operate in this altitude range, using automatic gust-load alleviation software in the flight control system to reduce the aerodynamic loads caused by turbulence. It has also been constructed from carbon fiber and can carry up to 800 pounds (362 kg) of payload.
Continuous surveillance by sky
Operating an aircraft continuously and reliably for up to 90 days necessitates a quadruple-redundant flight control system and vehicle management system (VMS). Should one of the onboard systems fail, a backup system can take over to maintain the flight.
Self-healing algorithms within the VMS allow any failed strings (coding in an algorithm) to be autonomously shut down, corrected and resurrected during flight, thereby allowing the aircraft to return to quadruple redundancy, according to information published by company representatives. This enables the aircraft to consistently maintain flight.
Although the onboard batteries, once sufficiently charged, can maintain flight during the night, their capacity will degrade over time, which could limit the maximum patrol duration of the aircraft. Skydweller’s reliance on solar power to maintain flight means that its patrols must also avoid areas of limited sunlight, such as polar regions during winter.
Skydweller Aero has recently partnered with Thales to equip Skydweller with a radar surveillance system designed for maritime patrol operations. Further test flights are planned, with the goal of extending the maximum flight duration. Even so, this is a massive step forward in solar-powered flight, especially for long-term surveillance monitoring.
Chinese scientists have successfully turned bees into cyborgs by inserting controllers into their brains.
The device, which weighs less than a pinch of salt, is strapped to the back of a worker bee and connected to the insect’s brain through small needles.
In tests the device worked nine times out of 10 and the bees obeyed the instructions to turn left or right, the researchers said.
The cyborg bees could be used in rescue missions – or in covert operations as military scouts.
The tiny device can be equipped with cameras, listening devices and sensors that allow the insects to collect and record information.
Given their small size they could also be used for discreet military or security operations, such as accessing small spaces without arousing suspicion.
Zhao Jieliang, a professor at the Beijing Institute of Technology, led the development of the technology.
It works by delivering electrical pulses to the insect’s optical lobe – the visual processing centre in the brain – which then allows researchers to direct its flight.
The device, which weighs less than a pinch of salt, is strapped to the back of a worker bee and connected to the insect’s brain through small needles
The study was recently published in the Chinese Journal of Mechanical Engineering, and was first reported by the South China Morning Post.
‘Insect-based robots inherit the superior mobility, camouflage capabilities and environmental adaptability of their biological hosts,’ Professor Zhao and his colleagues wrote.
‘Compared to synthetic alternatives, they demonstrate enhanced stealth and extended operational endurance, making them invaluable for covert reconnaissance in scenarios such as urban combat, counterterrorism and narcotics interdiction, as well as critical disaster relief operations.’
Several other countries, including the US and Japan, are also racing to create cyborg insects.
While Professor Zhao’s team has made great strides in advancing the technology, several hurdles still remain.
For one, the current batteries aren’t able to last very long, but any larger would mean the packs are too heavy for the bees to carry.
The same device cannot easily be used on different insects as each responds to signals on different parts of their bodies.
Before this, the lightest cyborg controller came from Singapore and was triple the weight.
The researchers, from the Beijing Institute of Technology, used worker bees - similar to this one pictured - as part of their study (stock image)
Researchers at RIKEN, Japan have created remote-controlled cyborg cockroaches, equipped with a control module that is powered by a rechargeable battery attached to a solar cell
It also follows the creation of cyborg dragonflies and cockroaches, with researchers across the world racing to develop the most advanced technology.
Scientists in Japan have previously reported a remote-controlled cockroach that wears a solar-powered ‘backpack’.
The cockroach is intended to enter hazardous areas, monitor the environment or undertake search and rescue missions without needing to be recharged.
The cockroaches are still alive, but wires attached to their two 'cerci' - sensory organs on the end of their abdomens - send electrical impulses that cause the insect to move right or left.
In November 2014, researchers at North Carolina State University fitted cockroaches with electrical backpacks complete with tiny microphones capable of detecting faint sounds.
The idea is that cyborg cockroaches, or ‘biobots’, could enter crumpled buildings hit by earthquakes, for example, and help emergency workers find survivors.
‘In a collapsed building, sound is the best way to find survivors,’ said Alper Bozkurt, an assistant professor of electrical and computer engineering at North Carolina State University.
North Carolina State University researchers have developed technology that allows cockroaches (pictured) to pick up sounds with small microphones and seek out the source of the sound. They could be used in emergency situations to detect survivors
‘The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter - like people calling for help - from sounds that don't matter - like a leaking pipe.
‘Once we've identified sounds that matter, we can use the biobots equipped with microphone arrays to zero-in on where those sounds are coming from.’
The ‘backpacks’ control the robo-roach's movements because they are wired to the insect’s cerci - sensory organs that cockroaches usually use to feel if their abdomens brush against something.
By electrically stimulating the cerci, cockroaches can be prompted to move in a certain direction.
In fact, they have been programmed to seek out sound.
One type of 'backpack' is equipped with an array of three directional microphones to detect the direction of the sound and steer the biobot in the right direction towards it.
Another type is fitted with a single microphone to capture sound from any direction, which can be wirelessly transmitted, perhaps in the future to emergency workers.
They ‘worked well’ in lab tests and the experts have developed technology that can be used as an ‘invisible fence’ to keep the biobots in a certain area such as a disaster area, the researchers announced at the IEEE Sensors 2014 conference in Valencia, Spain.
The company attempting to bring back the woolly mammoth has now set its sights on a new extinct species.
Colossal Biosciences has announced it will attempt to 'de-extinct' a group of birds called the moa, which once lived in New Zealand.
These extraordinary animals included nine species, the largest being the South Island Giant Moa, which stood at 3.6 metres (11.8ft) tall and weighed 230 kg (507 lbs).
Colossal Biosciences will use genes extracted from moa bones to engineer modern birds until they very closely resemble the extinct moa.
This project will be done in collaboration with the Ngāi Tahu Research Centre at the University of Canterbury and backed by $15 million in funding from Lord of the Rings director Sir Peter Jackson.
Jackson, who has one of the largest private collections of moa bones, says: 'With the recent resurrection of the dire wolf, Colossal Biosciences has also made real the possibility of bringing back lost species.
'There’s a lot of science still to be done – but we can start looking forward to the day when birds like the moa or the huia are rescued from the darkness of extinction.'
The company trying to bring back the woolly mammoth has set its sights on a new extinct creature, the moa. These were a species of 3.6-metre-tall, 230 kg birds that once roamed New Zealand
Of the nine species of moa, the largest is the South Island Giant Moa which lived in New Zealand for millions of years prior to the arrival of humans. Pictured: Māori students pose with a reconstruction of a South Island Giant Moa in 1903
The nine species of moa were found widely across New Zealand until the arrival of the first Polynesian settlers around 1300 AD.
Within just 200 years, the people who became the Māori had pushed all moa species into extinction through a combination of hunting and forest clearing.
The disappearance of the moa also led to a cascade of changes across New Zealand's isolated island ecosystem.
Less than 100 years after the moa became extinct their main predator, the enormous Haast's eagle, also died out.
The first step is to recreate the genomes of all nine moa species using ancient DNA stored in preserved moa bones.
Colossal Biosciences has already begun this process with visits to caves containing moa deposits within the tribal area of the Ngāi Tahu and hopes to complete all genomes by 2026.
These genomes will then be compared to those of the moa's closest living relatives, the emu and tinamou, to see which genes gave the moa their unique traits.
The moa went extinct in the 15th century due to hunting and forest clearing by the first Māori settlers. Colossal Biosciences says restoring this megafauna species will help restore New Zealand's ecosystem
Colossal Biosciences has partnered with the Ngāi Tahu Research Centre at the University of Canterbury and is backed by $15 million in funding from Lord of the Rings director Sir Peter Jackson. Pictured: Sir Peter Jackson (left) and Colossal Biosciences CEO Ben Lamm (right) holding moa bones
How will the moa be brought back?
DNA is extracted from moa bones to sequence the moa genome.
The genome is compared to modern species to see which genes make the moa distinct.
CRISPR is used to alter the genome of modern birds to express these target genes.
Edited embryos are placed in a surrogate emu egg to develop.
A bird closely resembling the moa hatches.
A selection of these genes are then inserted into stem cells called Primordial Germ Cell Culture, cells that turn into eggs and sperm, taken from an emu.
Those engineered cells are allowed to develop into male and female gametes and used to create an embryo, which will be raised inside a surrogate emu egg.
Scientists used the gene editing tool CRISPR to modify the DNA in blood cells from a living grey wolf in 20 places, creating a wolf with long white hair and muscular jaws.
However, recreating this process in bird species poses much greater technical challenges.
Colossal Biosciences admits that creating Primordial Germ Cell Culture for bird species has been a challenge that has eluded scientists for decades.
Likewise, since bird embryos develop inside eggs, the process of transferring an embryo into a surrogate will be completely different from that used for mammals.
Scientists have also raised questions about whether restoring the moa is something that should be pursued at all.
The process begins by extracting DNA from ancient moa bones such as those found in the caves of Ngāi Tahu takiwā
A selection of moa genes will then be inserted into stem cells derived from their closest living relative, the emu (pictured). Those cells will create embryos that can be raised by surrogacy into animals closely resembling moa
Conservationists say that money would be better spent looking after the endangered species that are already alive.
Others point out that introducing a species which has been gone for over 600 years could have unintended consequences for the ecosystem.
Professor Stuart Pimm, an ecologist at Duke University who was not involved in the study, told AP: 'Can you put a species back into the wild once you’ve exterminated it there?
'I think it’s exceedingly unlikely that they could do this in any meaningful way.'
Professor Pimm adds: 'This will be an extremely dangerous animal.'
However, Colossal Biosciences maintains that their plan to 'rewild' the moa is beneficial for both the environment and the Māori people.
As grazing herbivores, the moa's browsing habits shaped the distribution and evolution of plants over millions of years.
These effects led to significant changes in New Zealand's ecosystems, which Colossal Biosciences argues would be more stable with the moa once again introduced.
Colossal Biosciences recently used similar techniques to create grey wolf puppies that closely resemble the extinct dire wolf
Ngāi Tahu archaeologist Kyle Davis, who is working with Colossal Biosciences on the project, says that the project has a deeper ancestral meaning.
During the 14th century, the moa were a vital source of meat for sustenance as well as bones and feathers, which became part of traditional jewellery.
The moa came to have a large role in Māori mythology, symbolising strength and resilience.
Mr Davis says: 'Our earliest ancestors in this place lived alongside moa and our records, both archaeological and oral, contain knowledge about these birds and their environs.
'We relish the prospect of bringing that into dialogue with Colossal’s cutting-edge science as part of a bold vision for ecological restoration.'
Earth was once inhabited by a variety of giant forms of animals that would be recognisable to us today in the smaller forms taken by their successors.
They were very large, usually over 88 pounds (40kg) in weight and generally at least 30 per cent bigger than any of their still-living relatives.
There are several theories to explain this relatively sudden extinction. The leading explanation of around was that this was due to environmental and ecological factors.
It was almost completed by the end of the last ice age. It is believed that megafauna initially came into existence in response to glacial conditions and became extinct with the onset of warmer climates.
In temperate Eurasia and North America, megafauna extinction concluded simultaneously with the replacement of the vast periglacial tundra by an immense area of forest.
Glacial species, such as mammoths and woolly rhinoceros, were replaced by animals better adapted to forests, such as elk, deer and pigs.
Reindeer and Caribou retreated north, while horses moved south to the central Asian steppe.
This all happened about 10,000 years ago, despite the fact that humans colonised North America less than 15,000 years ago and non-tropical Eurasia nearly one million years ago.
Worldwide, there is no evidence of Indigenous peoples systematically hunting nor over-killing megafauna.
The largest regularly hunted animal was bison in North America and Eurasia, yet it survived for about 10,000 years until the early 20th century.
For social, spiritual and economic reasons, First Nations peoples harvested game in a sustainable manner.
0
1
2
3
4
5
- Gemiddelde waardering: 0/5 - (0 Stemmen) Categorie:SF-snufjes }, Robotics and A.I. Artificiel Intelligence ( E, F en NL )
03-07-2025
Footballers, your jobs are safe for now: Watch as China's first 3-on-3 robot football match kicks off (and ends with two bots being stretched off the pitch!)
China's first three-on-three robot football tournament kicked off in Beijinglast Sunday.
But the quality of play on show suggests that a robot won't be claiming the Ballon d'Orany time soon.
As the AI-controlled bots shuffled slowly across the turf, they bumped into each other, toppled over, and only occasionally even kicked the ball.
By the time the final whistle blew, two bots had to be stretchered off the pitch after taking falls that would earn most human players a yellow card for diving.
Cheng Hao, founder of Booster Robotics, which supplied the robots for the tournament, told the Global Times that the robots currently have the skills of five-to six-year-old children.
However, Mr Hao believes that the robots' abilities will grow 'exponentially' and will soon be 'surpassing youth-level teams and eventually challenging adult teams'.
In the future, Mr Hao even says that humans could play against robots in specially arranged matches.
However, with the robots currently struggling to avoid collisions, more will need to be done to make the bots safe for humans to play with.
China's first three-on-three football tournament kicked off in Beijing last weekend, but the quality of play wasn't quite at professional levels
By the time the final whistle blew, two bots had to be stretchered off the pitch
The match took place as part of the ROBO league football tournament in Beijing, a test game ahead of China's upcoming 2025 World Humanoid Games.
Four teams of engineers were each provided with robots and tasked with building the AI strategies which control everything from passing and shooting to getting up after a fall.
Ultimately, THU Robotics from Tsinghua University defeated the Mountain Sea from China Agricultural University team five goals to three to win the championship.
However, despite impressive advancements in robotics, the matches showed that robotics still has a long way to go.
The robots struggle with what engineers call 'dynamic obstacle avoidance', which means they tend to run into other moving players despite moving only one metre per second.
This was such an issue that the tournament's organisers had to use a specially made version of football's rules which allows more 'non-malicious collisions'.
Likewise, although the robots were sometimes able to stand back up, human assistants sometimes had to step in and set them back on their feet.
At one point in the match, the referee even had to hold back two robots as they blindly trampled a fallen teammate.
The robots struggle with 'dynamic obstacle avoidance', meaning they often crash into other players despite moving slowly
The referee had to step in and prevent the robots from trampling each other during several points of the game
These kinds of difficult scenarios are exactly why robotics researchers are so interested in using sports as testbeds for their technology.
Sports involve multiple moving objects, rapidly changing situations and demand levels of teamwork and coordination that have long surpassed the capabilities of robots.
Mr Cheng told the Global Times: 'We chose the football scenario for robot competition primarily for two reasons: first, to encourage students to apply their algorithmic skills to real-world robotics; second, to showcase the robots' ability to walk autonomously and stably, withstand collisions, and demonstrate higher levels of intelligence and safety.'
Similarly, Google's DeepMind has used football to help test its learning algorithms, demonstrating miniature football-playing robots in 2023.
Physical jobs in predictable environments, including machine-operators and fast-food workers, are the most likely to be replaced by robots.
Management consultancy firm McKinsey, based in New York, focused on the amount of jobs that would be lost to automation, and what professions were most at risk.
The report said collecting and processing data are two other categories of activities that increasingly can be done better and faster with machines.
This could displace large amounts of labour - for instance, in mortgages, paralegal work, accounting, and back-office transaction processing.
Conversely, jobs in unpredictable environments are least are risk.
The report added: 'Occupations such as gardeners, plumbers, or providers of child- and eldercare - will also generally see less automation by 2030, because they are technically difficult to automate and often command relatively lower wages, which makes automation a less attractive business proposition.'
Humanoid robots face-off ahead of China's first-ever 3-on-3 AI football match
Robo-Ronaldos and Mecha-Messis square off in 3-on-3 AI robot football event in China|Humanoid Robot
Beste bezoeker, Heb je zelf al ooit een vreemde waarneming gedaan, laat dit dan even weten via email aan Frederick Delaere opwww.ufomeldpunt.be. Deze onderzoekers behandelen jouw melding in volledige anonimiteit en met alle respect voor jouw privacy. Ze zijn kritisch, objectief maar open minded aangelegd en zullen jou steeds een verklaring geven voor jouw waarneming! DUS AARZEL NIET, ALS JE EEN ANTWOORD OP JOUW VRAGEN WENST, CONTACTEER FREDERICK. BIJ VOORBAAT DANK...
Druk op onderstaande knop om je bestand , jouw artikel naar mij te verzenden. INDIEN HET DE MOEITE WAARD IS, PLAATS IK HET OP DE BLOG ONDER DIVERSEN MET JOUW NAAM...
Druk op onderstaande knop om een berichtje achter te laten in mijn gastenboek
Alvast bedankt voor al jouw bezoekjes en jouw reacties. Nog een prettige dag verder!!!
Over mijzelf
Ik ben Pieter, en gebruik soms ook wel de schuilnaam Peter2011.
Ik ben een man en woon in Linter (België) en mijn beroep is Ik ben op rust..
Ik ben geboren op 18/10/1950 en ben nu dus 74 jaar jong.
Mijn hobby's zijn: Ufologie en andere esoterische onderwerpen.
Op deze blog vind je onder artikels, werk van mezelf. Mijn dank gaat ook naar André, Ingrid, Oliver, Paul, Vincent, Georges Filer en MUFON voor de bijdragen voor de verschillende categorieën...
Veel leesplezier en geef je mening over deze blog.