The purpose of this blog is to provide an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find the articles of a colleague by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article if it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her brave battle against cancer!
I started this blog in 2011, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the Rest of the World. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but around the world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that conduct in-depth research, although they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the excellent website www.ufowijzer.nl, run by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit its website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only a former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, enabling us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Watch out for a new group that also calls itself BUFON but has no connection whatsoever to our established organisation. Although they have registered the name, they cannot match our group's rich history and expertise. We wish them every success, but we remain the authority in UFO research!
Stay Up To Date!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Do not hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
18-12-2025
Is AI already conscious? Evidence is 'far too limited' to definitively say artificial intelligence hasn't made the leap, expert claims
Artificial intelligence (AI) is already helping to solve problems in finance, research and medicine.
But could it be reaching consciousness?
Dr Tom McClelland, a philosopher at the University of Cambridge, has warned that current evidence is 'far too limited' to rule this dystopian possibility out.
According to the expert, the only sensible position on the question of whether AI is conscious is one of 'agnosticism'.
The main problem, he claims, is that we don't have a 'deep explanation' of what makes something conscious in the first place, so can't test for it in AI.
'The best–case scenario is we're an intellectual revolution away from any kind of viable consciousness test,' Dr McClelland explained.
'If neither common sense nor hard–nosed research can give us an answer, the logical position is agnosticism.
'We cannot, and may never, know.'
Pictured: Terminator Genisys
But as developers work towards ever more capable systems, some also claim that increasingly sophisticated AI may develop consciousness.
This means AI could develop the capacity for perception and become self–aware.
While this idea might evoke visions of killer robots, Dr McClelland argues that AI could make this jump without us even realising, because we don't really have an agreed–upon theory of consciousness to begin with.
Some theories say consciousness is a matter of processing information in the right way, and that AI could be conscious if only it could run the 'software' of a conscious mind.
Others argue it is inherently biological, meaning AI can only imitate consciousness at best.
Until we can figure out which side of the argument is right, we simply don't have any basis on which to test for consciousness in AI.
In a paper published in the journal Mind and Language, Dr McClelland claims both sides of the debate are taking a 'leap of faith'.
We can't tell whether an AI, like in the sci–fi film Ex Machina (pictured), really has conscious experience or whether it is just simulating consciousness
Whether something is conscious radically changes the kinds of ethical questions we need to consider.
For example, humans are expected to behave morally towards other people and animals, because consciousness gives them 'moral status'.
In contrast, we don't have these same values towards inanimate objects, like toasters or computers.
'It makes no sense to be concerned for a toaster's well–being because the toaster doesn't experience anything,' Dr McClelland explains.
'So when I yell at my computer, I really don't need to feel guilty about it. But if we end up with AI that's conscious, then that could all change.'
While that might make dealing with AI an ethical nightmare, the bigger risk may be that we start to consider AIs as conscious or sentient when they are not.
Dr McClelland explained: 'If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic.'
Worryingly, the philosopher says that members of the public are already sending him letters written by chatbots 'pleading with me that they're conscious'.
He added: 'We don't want to risk mistreating artificial beings that are conscious, but nor do we want to dedicate our resources to protecting the "rights" of something no more conscious than a toaster.'
Brain and memory preservation has been explored at length by futurists, scientists and science fiction junkies alike.
Many say it falls under the category of 'transhumanism.'
Transhumanism is the belief that the human body can evolve beyond its current form with the help of scientists and technology.
The practice of mind uploading has been promoted by many people, including Ray Kurzweil, Google's director of engineering, who believes we will be able to upload our entire brains to computers by 2045.
Similar technologies have been depicted in science fiction dramas, ranging from Netflix's Altered Carbon, to the popular series Black Mirror.
Another prominent futurist, Dr Michio Kaku, believes virtual reality can be used to keep our loved ones' personalities and memories alive even after they die.
Scientists and futurists have different theories about how we might be able to preserve the human brain, ranging from uploading our memories to a computer to Nectome's high-tech embalming process, which can keep it intact for thousands of years
'Imagine being able to speak to your loved one after they die ... it is possible if their personality has been downloaded onto a computer as an avatar,' he explained.
These ideas haven't been met without criticism.
McGill University neuroscientist Michael Hendricks told MIT that these technologies are a 'joke.'
'I hope future people are appalled that in the 21st century, the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants. I mean, it’s a joke, right? They are cartoon bad guys,' he said.
Meanwhile, neuroscientist Miguel Nicolelis said recently that such technologies would be virtually impossible.
'The brain is not computable and no engineering can reproduce it,' he said.
'You can have all the computer chips in the world and you won't create a consciousness.'
If you've ever dreamed of soaring over traffic on your daily commute, your dreams could soon be a reality – as the 'world's first' flying car enters production.
The Alef Model A Ultralight uses eight propellers hidden in the boot and bonnet to take off at any time.
After more than a decade of development, the US–based Alef Aeronautics has finally announced that the first customers will soon get their flying cars.
The futuristic vehicles will be hand–assembled in the company's facility in Silicon Valley, California.
However, Alef Aeronautics says that each car will take 'several months' of craftsmanship before it is safe to send out to customers.
The first handmade cars will only be delivered to a few customers to test out the experimental vehicles in real–world conditions.
The company says this slow rollout will allow it to work out any potential issues before the flying car enters mass production.
The 'world's first' flying car (pictured) has finally entered production, as Alef Aeronautics announces that its first all–electric vehicle will be hand assembled in the US
Alef Aeronautics' futuristic vehicle can be driven around like a normal car on the streets or take off and fly using eight propellers hidden in its carbon–fibre mesh body
Jim Dukhovny, CEO of Alef Aeronautics, says: 'We are happy to report that production of the first flying car has started on schedule.
'The team worked hard to meet the timeline, because we know people are waiting. We're finally able to get production off the ground.'
The Model A is both a road–legal vehicle and an aircraft capable of taking off without wings via eVTOL (electric vertical take–off and landing).
On the ground, the Model A drives just like a normal electric car, thanks to four small engines in each of the wheels.
But the driver's seat is also surrounded by powerful propellers that provide enough thrust for flight at a cruising speed of 110 miles per hour (177 km/h).
The carbon–fibre mesh body – measuring around five metres by two metres – allows air to pass through the car while keeping the spinning blades safely covered.
The company says that the car will have enough room for the pilot and one passenger, and have a range of 200 miles (321 km) on the ground and 110 miles (177 km) in the air.
Mr Dukhovny claims the car, which is aimed at the general public, is relatively simple to use and would take just 15 minutes to learn.
The entire car weighs just 385 kg (850 lbs), so that it can be classified as an ultralight 'low speed vehicle' – a legal classification for small electric vehicles like golf carts.
That means the car will be capped at 25 miles per hour (40 km/h) on public roads despite being able to drive faster.
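The figures quoted above can be sanity-checked with standard unit conversions. This is a quick back-of-envelope script using ordinary conversion factors, not anything published by Alef:

```python
# Verify the article's unit conversions for the Alef Model A.
MILES_TO_KM = 1.609344
LBS_TO_KG = 0.45359237

ground_range_km = 200 * MILES_TO_KM   # ~321.9 km (article rounds to 321)
air_range_km = 110 * MILES_TO_KM      # ~177.0 km
weight_kg = 850 * LBS_TO_KG           # ~385.6 kg (article rounds to 385)
road_cap_kmh = 25 * MILES_TO_KM       # ~40.2 km/h (article rounds to 40)

print(round(ground_range_km, 1), round(air_range_km, 1),
      round(weight_kg, 1), round(road_cap_kmh, 1))
```

The numbers line up with the article's rounded values, so the specs are at least internally consistent.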
Having received airworthiness certification from the Federal Aviation Administration (FAA) in 2023, Alef Aeronautics is now edging closer to making the Model A a reality – over a decade after the company was founded.
The company reports that it has received 3,500 pre–orders, collectively worth more than £800 million.
However, don't expect to see The Jetsons–style flying cars filling the air near you just yet.
Alef Aeronautics says that the first customers will only be allowed to test their flying cars under 'very controlled conditions'.
The company adds that each customer will need to receive training in compliance and maintenance before flying.
Likewise, creating each car involves robotic, industrial, and hand manufacturing, with rigorous testing of individual parts and a large number of test flights.
Mr Dukhovny has previously said he wanted to bring sci–fi to life and build an 'affordable' flying car, with the cost likely to be closer to £25,000 when built at scale.
Eventually, Alef Aeronautics says, the production process for the full-size Model A will be automated but, for now, only a limited number can be produced.
Advances in electric motors, battery technology and autonomous software have triggered an explosion in the field of electric air taxis.
Larry Page, CEO of Google parent company Alphabet, has poured millions into aviation start-ups Zee Aero and Kitty Hawk, which are both striving to create all-electric flying cabs.
Kitty Hawk is believed to be developing a flying car and has already filed more than a dozen different aircraft registrations with the Federal Aviation Administration, or FAA.
Page, who co-founded Google with Sergey Brin back in 1998, has personally invested $100 million (£70 million) into the two companies, which have yet to publicly acknowledge or demonstrate their technology.
Airbus is also hard at work on an all-electric, vertical-take-off-and-landing craft, with its latest Project Vahana prototype, branded Alpha One, successfully completing its maiden test flight in February 2018.
The self-piloted helicopter reached a height of 16 feet (five metres) before successfully returning to the ground. In total, the test flight lasted 53 seconds.
Airbus previously shared a well-produced concept video, showcasing its vision for Project Vahana.
The footage reveals a sleek self-flying aircraft that seats one passenger under a canopy that retracts in similar way to a motorcycle helmet visor.
AirSpaceX is another company with ambitions to take commuters to the skies.
The Detroit-based start-up has promised to deploy 2,500 aircraft in the 50 largest cities in the United States by 2026.
AirSpaceX unveiled its latest prototype, Mobi-One, at the North American International Auto Show in early 2018.
Like its closest rivals, the electric aircraft is designed to carry two to four passengers and is capable of vertical take-off and landing.
AirSpaceX has even included broadband connectivity for high speed internet access so you can check your Facebook News Feed as you fly to work.
Aside from passenger and cargo services, AirSpaceX says the craft can also be used for medical and casualty evacuation, as well as tactical Intelligence, Surveillance, and Reconnaissance (ISR).
Even Uber is working on making its ride-hailing service airborne.
Uber CEO Dara Khosrowshahi tentatively discussed the company's plans for the project, dubbed Uber Elevate, during a technology conference in January 2018.
‘I think it’s going to happen within the next 10 years,’ he said.
We’re far from realizing the kind of nanomachines envisioned in media like “The Diamond Age” and Metal Gear Solid, but scientists have just taken a meaningful step towards the next best thing.
A team of researchers from the University of Pennsylvania and University of Michigan say they’ve built a sub-millimeter sized robot packed with a computer, motor, and sensors, the Washington Post reports. It’s not an actual billionth of a meter in size, but being smaller than a grain of salt, it is still outrageously tiny: a microrobot.
The work, described in a new study in the journal Science Robotics, could be a platform for one day building microscopic robots that could be deployed inside the human body to perform all sorts of medical miracles, like repairing tissues or delivering treatment to areas difficult for surgeons to access.
“It’s the first tiny robot to be able to sense, think and act,” coauthor Marc Miskin, assistant professor of electrical and systems engineering at UPenn, told WaPo.
At present, the device is still highly experimental and isn’t suited to be used inside a human body — but “it would not surprise me if in 10 years, we would have real uses for this type of robot,” coauthor David Blaauw from U-M told the newspaper.
Building a microscopic robot that can move, sense its surroundings, and make decisions on its own has evaded scientists for decades. According to the team, roboticists have typically relied on controlling microrobots externally so they can operate at smaller scales, sacrificing the robots' ability to process information in the process. That prevents the robots from reacting to their environment, leaving them with a limited number of pre-programmed behaviors they can carry out, and as a result limited real-world usefulness.
Having a robot on the scale of microns, or one millionth of a meter, would give us access to what corresponds to the smallest units of our biology, Miskin told WaPo.
“Every living thing is basically a giant composite of 100-micron robots, and if you think about that it’s quite profound that nature has singled out this one size as being how it wanted to organize life,” he said.
Visually, the researchers’ robot resembles a microchip, and is made of the same kinds of materials, including silicon, platinum, and titanium, WaPo noted. It’s sealed in a layer of what is essentially glass, Miskin said, protecting it from fluids.
The robot uses solar cells to generate the energy that powers its onboard computer and its propulsion system, which uses a pair of electrodes to create a flow in the water surrounding it. In a word, the robot swims. Its onboard computer runs at less than a thousandth of the speed of a modern laptop, per WaPo, but that is enough to let it respond to changes it detects in its environment, such as temperature.
“At this scale, the robot’s size and power budget are comparable to many unicellular microorganisms,” the team wrote in the study.
Crucially, the robot is designed to still communicate with its human operators.
“We can send messages down to it telling it what we want it to do,” using a laptop, Miskin told WaPo, “and it can send messages back up to us to tell us what it saw and what it was doing.”
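The "sense, think and act" loop Miskin describes can be illustrated with a toy sketch. Everything here, the function names, the temperature trigger and the electrode interface, is hypothetical; the actual firmware is not described in the article.

```python
# Illustrative sketch (not the robot's real firmware) of a minimal
# sense-think-act control cycle, with hypothetical hardware hooks.

def run_step(read_temperature, set_electrodes, threshold=25.0):
    """One control cycle: sense the environment, decide, act."""
    temp = read_temperature()        # sense: poll an on-board sensor
    if temp > threshold:             # think: compare against a set point
        set_electrodes("forward")    # act: drive the electrode-induced flow
        return "swim"
    set_electrodes("off")
    return "idle"

# Usage with stubbed-out hardware:
state = run_step(lambda: 27.0, lambda mode: None)
```

The point of the sketch is only that decision-making happens on board, rather than being radioed in from outside, which is what the team says distinguishes this robot from earlier externally controlled microrobots.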
But the next step? Inter-microrobot communication.
“So the next holy grail really is for them to communicate with each other,” Blaauw told WaPo.
Tesla's humanoid robot, Optimus, has taken a suspicious tumble in a new demo.
In a viral clip, the robot suddenly jerks back, reaches up as if to remove something from its head, and tumbles backwards with a crash.
Many have compared the robot's strange movements to the distinctive gesture of someone taking off a virtual reality headset.
On social media, this has sparked a flurry of rumours that the supposedly autonomous robot is really controlled by a human.
The shocking moment was captured by a Reddit user who was filming Optimus handing out bottles of water at the Tesla The Future of Autonomy Visualized event in Miami.
In a post, the user wrote: 'I think Optimus needs an update.'
As the video spread online, others were quick to accuse Tesla of overstating its bot's ability to function on its own.
One commenter wrote: 'Honestly looks like the dude teleworking this bad boy took off his headset.'
Elon Musk's Tesla Optimus robot has taken a suspicious tumble in a viral video, as it appears to make the motion of removing a VR headset
On social media, tech fans have taken this as evidence that the robot was being remotely controlled by a teleoperator rather than operating autonomously
Commenters have mocked Tesla for the blunder, with one joking that robots would at least not be taking human jobs
The embarrassing video has caused hilarity for online tech fans, as one dubbed it 'my favourite video of all time.'
'At least humans will have jobs,' another commenter joked.
While another chimed in: 'idk looks to me like that robot just killed itself.'
Now, the strange behaviour of this Optimus robot moments before its collapse has led many to believe that it was also being teleoperated.
Many fans pointed out that the robot's motion towards its head has a clear resemblance to the gesture of someone removing a VR headset, which would make sense if it were being controlled remotely
One commenter wrote: 'Looks like the operator took off their VR headset.'
'And spilled hot coffee on his lap right before that lol,' another chimed in.
Others made fun of the supposed teleoperator's speedy exit, with one writing: 'Teleoperator logged out early that day.'
Another joked: 'Looks like the operator felt a spider running up his / her leg and panicked.'
Meanwhile, one frustrated commenter added: 'It all feels very Wizard of Oz. Pay no attention to the man behind the curtain – our product is right around the corner!'
These accusations may be especially embarrassing for Tesla since the fall occurred at an event intended to showcase 'Autopilot technology and Optimus'.
However, the robot's supposed teleoperation isn't the only thing that is causing concern.
As the Tesla Optimus falls out of control, the video shows its hands swinging down with such force that they crush a bottle of water on the table.
This comes after Elon Musk, Tesla CEO, specifically insisted that Optimus was controlled by AI and was not teleoperated
Tesla has not confirmed whether the robot which collapsed was being controlled by AI or remotely by a human
Some commenters on X were more concerned by the power of Optimus' crash, which easily crushed a nearby water bottle
One Tesla fan branded the bot as 'too dangerous' and questioned whether the robot's swinging hand could 'crack a human skull'
'Interesting how easily Optimus crushed the bottle,' one commenter wrote.
Another added: 'I wonder if Optimus can crack a skull with that punch, too dangerous.'
'The spray on that little karate chop was pretty impressive don't let one of these fall on your dog or something,' a concerned commenter chimed in.
This comes amidst a boom of humanoid robots as investors bet that autonomous labour will replace humans in the future.
Tesla CEO Elon Musk has been a major champion of using robots for labour, and has frequently said that they could be used to replace humans in environments like factories to perform repetitive or dangerous tasks.
To achieve this, he hopes to massively scale up the production of robots and reduce their cost.
Speaking at a tech conference in Saudi Arabia last year, Musk predicted that there could be as many as 10 billion humanoid robots on Earth by 2040.
Elon Musk has been a major champion of using robots to replace human labour in factories. However, this will only be possible if his Optimus robots (pictured) can operate autonomously
Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence.
The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon'.
At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand.
His main fear is that in the wrong hands, if AI becomes advanced, it could overtake humans and spell the end of mankind, which is known as The Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race.
'It would take off on its own and redesign itself at an ever-increasing rate.'
Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months.
During a 2016 interview, Musk noted that OpenAI was created to 'have democratisation of AI technology to make it widely available'.
Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up.
His request was rejected, forcing him to quit OpenAI and move on with his other projects.
In November, OpenAI launched ChatGPT, which became an instant success worldwide.
The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt.
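The one-line description above, learning from a mass of text in order to predict plausible continuations, can be illustrated with a toy next-word predictor. A bigram counter is a drastic simplification standing in for a neural network, but the learn-from-text, predict-the-next-token idea is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy stand-in for "training on text data": count which word follows
# which, then predict the most frequent successor. Real large language
# models use neural networks over tokens, not raw counts.

def train_bigram(corpus):
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(model, word):
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(next_word(model, "the"))  # "cat" follows "the" most often here
```

Scaling this idea up, from word pairs to long contexts, and from counts to billions of learned parameters, is what separates this toy from ChatGPT.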
ChatGPT is used to write research papers, books, news articles, emails and more.
But while Altman is basking in its glory, Musk is attacking ChatGPT.
He says the AI is 'woke' and deviates from OpenAI's original non-profit mission.
'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February.
The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean?
In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans.
There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.
For example, humans could scan their consciousness and store it in a computer in which they will live forever.
The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves - but if this is true, it is far off in the distant future.
Researchers are now looking for signs of AI reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and perform tasks faster.
Former Google engineer Ray Kurzweil predicts it will be reached by 2045.
He has made 147 predictions about technology advancements since the early 1990s - and 86 per cent have been correct.
Nils Westerboer’s science fiction novel Athos 2643 is a detective story set in the future, in a monastery on Neptune’s fictional moon Athos. However, the main intrigue revolves not around space or Christianity, but around artificial intelligence. The Ukrainian translation was recently published by Lobster Publishing and can be purchased on the publisher’s official website.
What should artificial intelligence look like?
Contemporary science fiction
Athos 2643 by Nils Westerboer can confidently be called fresh science fiction. The novel was published in German in 2022, and in 2025, Lobster Publishing printed it in Ukrainian.
The events take place in the distant future; the number 2643 in the title indicates the year. Humanity has colonized the Solar System, mastered the genetic modification of living organisms, and created truly powerful artificial intelligence, which plays one of the key roles in the novel.
The novel is truly interesting, but at first it can be a real challenge for the reader. The fact is that many things whose names seem perfectly familiar at first glance begin to raise doubts after just a few paragraphs.
Nils Westerboer. Source: www.uni-koblenz.de
Main characters
Take, for example, the main character – the inquisitor Ruud Kartgeiser. From a man whose occupation is described by this word, you would expect a certain steadfastness in his faith, which is most likely Christian. However, when we first see him at the station orbiting Neptune, he is sitting naked in a hotel room while his AI assistant ties him to a pipe.
Yes, he has a personal AI named Zach, and she is only formally a tool that makes his life easier. In fact, she has her own will, sometimes stronger than Ruud’s, her own judgments, often takes the initiative in conversations with other characters, and in general, the reader perceives the events in the novel mainly through her eyes.
At first, it seems that Ruud Kartgeiser himself is a whiny, dependent, and dull appendage to his artificial intelligence. But then he demonstrates determination, composure, and independence, his story unfolds more fully, and it becomes clear that the whole point is that we are seeing a character who has suffered severe trauma in the past and has recently experienced personal problems. He gives vent to all of this when he is alone with the AI.
That’s how the imagination pictures the inquisitor of the future, and Ruud is nothing like that image. Source: phys.org
As for his profession, the word “inquisition” is translated from Latin as “investigation.” In other words, Ruud is simply an investigator, a typical detective story protagonist acting on behalf of an organization with certain powers. However, he does not deal with matters of religious belief, but rather with problematic artificial intelligence.
Athos 2643
It is precisely Ruud’s deceptive status that immediately catches your attention when you start reading the novel. Its title refers to Mount Athos in Greece, located on the peninsula of the same name. It is home to more than a dozen Orthodox monasteries, and in fact the entire peninsula is a territory where medieval religious rules still apply.
In Westerboer’s novel, this is the name of a small satellite of Neptune, only a few kilometers in diameter, home to a monastery of Orthodox hermits where a very suspicious fatal accident has occurred. Incidentally, the satellite itself is fictional, but it is described quite realistically, and it cannot be ruled out that in the future, when the distant eighth planet is better explored, something similar will be found near it.
But that’s not the important thing. The question that arises from the very first pages is: what is a representative of the Inquisition, usually associated with the Roman Catholic Church, doing in an Orthodox monastery? And why is he accompanied by his assistant, who is constantly present in the form of a hologram of a young, attractive woman, often dressed very lightly?
It seems that Westerboer tried to use the theme of Christianity, but never even understood its basics.
In the novel, the monks on Athos rarely discuss God, the soul, or faith in the way devout Christians might be expected to. However, when it turns out that one of them is a woman, another hermit, very much in the spirit of the LGBT movement, asks, “Who decides who is a man and who is a woman?”
Monastery on Mount Athos in Greece. Source: phys.org
Against this backdrop, the presence of a farm producing halal meat at the satellite's monastery is not so surprising. In Westerboer’s novel, Neptune’s system is inhabited mainly by Turks. So why should Orthodox hermits not produce food according to the instructions of the Prophet Muhammad?
Compared to Westerboer’s vision of a future world of advanced biotechnology, the appearance of the farm itself is more reminiscent of a depiction of hell.
But the most interesting thing is that later on, everything receives a very simple and rational explanation. For this reason alone, it is really worth reading Athos 2643 to the end. After all, it is a detective story in which the reader, together with the main characters, not only searches for the possible murderer, but also tries to understand what is really going on around them. Westerboer also manages to present some key insights into this world in simple language, albeit only after the 300th page, and after finishing the book you somehow do not want to criticize him for it.
The most important technology
The world of Athos 2643 is shaped by technologies that have not yet been invented. Some of them seem, shall we say, overly bold and dubious. For example, when the author describes engines capable of accelerating a spacecraft to a speed of over a thousand kilometers per second in just a few dozen minutes, the question arises not so much about the physical process itself (which, at least in theory, has already been described), but rather about how the people on board can withstand such overloads.
There is an even bigger problem with Zach’s hologram emitter. It is a hologram in name only, that is, supposedly a mere play of light and shadow. In reality, it is solid enough to wear a dress made of very light but real fabric. And the device that creates all this not only avoids frying everything between itself and the hologram, but also fits in a pocket.
Neptune. Source: phys.org
Then there is the way gravity is created on Athos. To the author’s credit, he does not forget that on such a small body it is very weak. But using special particles that must be injected under anesthesia every few days, instead of the magnetic boots familiar from science fiction, is a rather unsuccessful idea, even without the bloody demonstration present in the book.
But the most important technology shaping this future is artificial intelligence, which is now not merely on par with human intelligence but surpasses it. AI does not just accompany the main character; it is everywhere, controlling all complex processes. Even a dishwasher can now show its personality.
Such AI exists on Athos as well, and it does indeed cause problems. The novel begins with the story of a woman who, over several years, drove her husband to his death by using her knowledge of his weaknesses against him while pretending to care for him. It is difficult to say whether modern lawyers would consider such actions a crime, but in the world of Athos 2643, they are.
After all, artificial intelligence is very good at studying the reactions of the people it interacts with and is quite good at predicting their behavior. This means that it is quite capable of manipulating people and leading them into situations where their death will appear natural or the result of an accident.
And this problem may indeed be real. We are accustomed to considering our actions to be the result of a mysterious process called our soul. However, in practice, they can be subjected to statistical analysis. Under normal conditions, it would be difficult to take advantage of this due to the large number of random processes and interactions that need to be taken into account. However, on an asteroid riddled with mines, where only six hermits live, it is much easier.
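The idea that behaviour can be "subjected to statistical analysis" is not exotic. Here is a purely illustrative toy sketch (the action names and the tiny log are invented, and the novel's AI is of course far more sophisticated): a crude first-order model that predicts what someone does next based on what usually followed the same action in their history.

```python
from collections import Counter, defaultdict

# Toy behaviour predictor: count which action usually follows which.
def train(history):
    follows = defaultdict(Counter)
    for now, nxt in zip(history, history[1:]):
        follows[now][nxt] += 1
    return follows

def predict(follows, current):
    """Return the most frequent successor of `current`, or None if unseen."""
    options = follows.get(current)
    return options.most_common(1)[0][0] if options else None

# A hypothetical daily log of one hermit's actions
log = ["wake", "pray", "work", "pray", "eat", "work", "pray", "eat"]
model = train(log)
print(predict(model, "pray"))  # "eat" follows "pray" most often in this log
```

With only six hermits and a closed environment, even this kind of crude frequency counting starts to find patterns; a far richer model would find far more.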
This is precisely what Ruud will have to deal with. Fortunately, he is very good at “talking” to artificial intelligences because he knows their weaknesses and is very attentive to how they play with words. How all this is connected to the murders in the cave monastery can be found out by reading the book to the end. You will discover that artificial intelligence is not even the key technology of this world, and that the characters have been missing the main miracle and mystery from the very beginning. But that’s why he is a detective, to find the answers himself.
Chinese officials are warning that the country’s humanoid robotics industry could be forming a massive bubble.
As Bloomberg reports, strategists from the National Development and Reform Commission (NDRC), which serves as the country’s state-run macroeconomic management agency, say that extreme levels of investment could be drowning out other markets and research initiatives.
It’s a notable shift in tone as the humanoid robot industry continues to attract billions of dollars in investment. Aided by advancements in artificial intelligence opening up new use cases — and plenty of unbridled enthusiasm — for the tech, investors are pouring untold sums into over 150 humanoid robot companies in China alone, according to the NDRC.
Many of those companies are producing robots that are extremely similar to each other, a pattern of overspending that could overwhelm the market. Bike-sharing apps offer a cautionary precedent: dozens of them flooded the Chinese market in 2017 and 2018, crowding each other out at the same time. The outcome: streets littered with unused bikes.
“Frontier industries have long grappled with the challenge of balancing the speed of growth against the risk of bubbles — an issue now confronting the humanoid robot sector as well,” NDRC spokeswoman Li Chao told reporters last week, as quoted by Bloomberg.
China has established itself as a clear global leader in the space, with Morgan Stanley predicting the humanoid robot market could surpass a whopping $5 trillion by 2050. Citigroup is even more optimistic, expecting the market to hit $7 trillion by that point.
New offerings by companies like Unitree have made bipedal robots far more affordable and advanced than ever before. Unitree’s G1 robot, in particular, has garnered tons of attention for its flashy abilities to throw punches in the ring or play basketball.
A burgeoning industry of far smaller Chinese competitors has cropped up as well, fueling even more investment — as well as concern from policymakers that the industry could be growing too fast.
Last month, Chinese robotics company UBTECH claimed it had rolled out the “world’s first mass delivery” of industrial humanoid robots. Startup AgiBot also set a Guinness World Record for the longest distance ever walked by a humanoid robot, with its A2 covering over 66 miles while hot-swapping its battery along the way.
Despite plenty of enthusiasm, turning humanoid robots into a viable and affordable product with a clear-cut use case remains a major challenge. Case in point, the current crop of androids still struggles significantly with completing household tasks, particularly without the help of a nearby human teleoperator.
To speed up the process of finding real-world applications, the NDRC is hoping to spread industrial resources across the country, while also accelerating research and development for “core technologies.”
The risks of a bubble are certainly there. Without consolidation, China’s market could soon be flooded with armies of largely identical humanoid robots — which is either a terrifying prospect, considering the possibility of them putting us all out of work, or risks a market crash if it turns out they’re not particularly good at real work.
The tech industry has become obsessed with the idea of humanoid robots, bipedal androids designed to complete tasks on behalf of their flesh-and-blood counterparts.
But as many experts have argued, having robots walk around on two legs and manipulate the world around them with two hands and arms may not always be the most efficient option. After all, plenty of industrial robots use wheels to roll around a warehouse, or feature one large, strong, multi-pivoting arm instead of relying on several weaker ones.
Besides, the existing crop of humanoid robots is capable of a lot more than walking around and waving hands.
Look no further than a video shared by robot tinkerer and researcher Logan Olson last month, which shows how a humanoid robot can turn itself into a surprisingly creepy crawling machine while using the full extent of its four limbs’ freedom of movement. The footage shows the robot dropping down to all fours in less than a second, unnervingly bending its arms and legs to crouch down and scuttle across a concrete patio — like a demon straight out of a horror movie.
Agility Robotics AI research scientist Chris Paxton, who recently reshared the video, used the footage as a reminder that a “lot of these robots are ‘faking’ the humanlike motions.”
“It’s a property of how they’re trained, not an inherent property of the hardware,” he wrote. “They’re actually capable of way weirder stuff and way faster motions.”
“Human motion is most efficient for humans; robots are not humans,” he added in a follow-up.
It’s a particularly pertinent topic as companies like Tesla, Figure, and China’s Unitree race to commercialize humanoid robots for the mass market. While companies have made major strides — in a separate tweet, Paxton argued that “running is now basically commoditized” — experts have questioned if it’s really the best form factor for every job.
Case in point, Chris Walti, the former lead of Tesla’s humanoid robot Optimus, told Business Insider earlier this year that humanoid robots simply don’t make much sense on the factory floor.
“It’s not a useful form factor,” he said at the time. “Most of the work that has to be done in industry is highly repetitive tasks where velocity is key.”
The human form “evolved to escape wolves and bears,” he added. “We weren’t designed to do repetitive tasks over and over again.”
While a creepy-crawling robot, as demonstrated in Olson’s video, admittedly may not be the pinnacle of productivity, it serves as a great — albeit nightmare-inducing — reminder that humanoid robots are technically capable of a lot more than masquerading as a human being, while walking around, shaking hands, and giving out popcorn.
At the same time, a humanoid robot distending its joints to crawl along the floor likely won’t endear it to humans, either.
“That is terrifyingly cool,” one user wrote in response to Olson’s video.
Pixar's adorable hopping lamp has been brought to life – and he could soon be lighting up your own desk at home.
Developed by California firm Interaction Labs, Ongo the robotic smart lamp can move, see, hear and even talk.
A promo clip shows the 'ambient desk lamp companion robot' peering curiously at objects and people around him while giving help around the home.
And to allay any privacy concerns, he even comes with a pair of sunglasses blocking his view.
Karim Rkha Chaham, co-founder and CEO of Interaction Labs, said the 'expressive' bot can even remember users and anticipate their needs.
'Think of it as a cat trapped inside the body of a desk lamp,' he said.
On X, commentators called the design 'incredible', 'epic', 'very cool' and an 'amazing-looking piece' of tech.
One said it's 'definitely something I would have at home and not a creepy humanoid robot', while another added it 'might be the cutest robot on the market'.
Ongo has movements designed by Alec Sokolow, the Oscar-nominated writer of Pixar film Toy Story as well as Garfield: The Movie and Evan Almighty.
As the promo video shows, Ongo spins on its base and self-adjusts its axis just like the legendary Pixar character.
Depending on the user's needs, he can adjust levels of light that are emitted from his 'eyes' and bring them closer, for when reading a book after nightfall for example.
Ongo utters cheery greetings, helpful advice and instructions such as 'Hey, don't forget your keys' and 'Maybe try a dash of balsamic' during cooking.
Another adorable scene shows Ongo bopping to the sound of music in the next room when his owners are having a party.
According to the company, the cute desk lamp 'lights up your desk and your day' and brings 'a familiar magic presence' to your home.
'It brings your space to life with movement, personality and emotional intelligence,' Interaction Labs says on its website.
'It remembers what matters, senses how you feel, and supports you through the day with small, thoughtful interactions.
'Ongo senses the rhythm of your day and responds with quiet understanding, reading the subtle shifts in your environment.'
Much like other smart products packed with cameras, Ongo has an awareness of its surroundings, but it processes vision on the device itself and never sends clips out to the cloud for company staff to watch.
When users want total privacy without Ongo peering in, they can put 'fun' opaque sunglasses over his eyes that snap on with magnets.
On X, several users said they found Ongo's voice 'annoying' and 'grating', but Chaham said it can be customised along with his personality.
The co-founder also admitted that the promo clip is computer-generated, as prototypes are still being worked on and the lamp is not quite ready yet, but he said it gives users a good idea of what to expect.
He said: 'Will be posting more and more videos of us interacting with the prototype.'
Ongo is available to pre-order on the company's website, with a fully refundable 'priority access deposit' costing $49/£38.38.
This deposit secures users a unit from the first batch and will be deducted from the product's final price, which Chaham said will be 'about $300' (£225). Those who pay now will get Ongo when shipping begins next summer.
Ongo is an obvious nod to Pixar's original lamp, called Luxo Jr., which has appeared on the production logo of every Pixar film since the first one, Toy Story, back in 1995.
'That's not what a chair looks like': Ongo keeps an eye on his users and offers helpful suggestions and reminders - such as when they're trying to put furniture together
In the short sequence, Luxo Jr. is seen hopping into view and jumping on the capital letter "I" in "PIXAR" to flatten it before turning his head.
Toy Story director John Lasseter created the character in August 1986, modeling it after his own Luxo brand lamp.
Luxo Jr. starred in his own short film of the same name that year, also directed by Lasseter, in which he appears with a larger lamp, 'Luxo Sr'.
It was also in 1986 that the animation studio was purchased by Apple co-founder Steve Jobs, having been owned by George Lucas' Lucasfilm.
After a run of hugely-successful films including Toy Story, A Bug's Life, Monsters Inc and Finding Nemo, Disney acquired Pixar in 2006.
Key milestones in Pixar's history
1979: George Lucas recruits Ed Catmull from the New York Institute of Technology to head Lucasfilm’s Computer Division.
1982-83: The division completes a scene for Star Trek II: The Wrath of Khan showing a lifeless planet being transformed by lush vegetation. It is the first completely computer animated sequence in a feature film.
1986: Apple co-founder Steve Jobs purchases the Computer Division from George Lucas and establishes the group as an independent company, Pixar. At this time about 40 people are employed.
1986: A short film called Luxo Jr. is completed, featuring two anthropomorphic desk lamps.
1991: Disney and Pixar announce an agreement 'to make and distribute at least one computer-generated animated movie' which will become Toy Story.
1995: Toy Story, the world’s first computer animated feature film, is released on November 22. It opens at #1 that weekend and goes on to become the highest grossing film of the year, making $192 million domestically and $362 million worldwide.
1998: Pixar's second feature-length film, A Bug's Life, is released in theaters on November 25.
2006: The Walt Disney Company announces that it has agreed to purchase Pixar Animation Studios
01-12-2025
When science fiction becomes reality: Scientists reveal what would REALLY happen if the sun started to dim like in Project Hail Mary - with catastrophic results
Scientists have revealed the terrifying answer to this question, which is the subject of the upcoming science fiction blockbuster, Project Hail Mary.
The film, based on a novel of the same name by The Martian author, Andy Weir, follows a lone scientist on a mission to uncover why the sun is dimming.
In the movie, which is set to hit cinemas in March 2026, the sun's brightness is predicted to fall one per cent in a year and five per cent in 20 years.
These numbers might sound small.
But in reality, scientists say that these changes would be more than enough to wipe out humanity.
Professor David Stevenson, a planetary scientist from the California Institute of Technology, told Daily Mail: 'Extinguishing life on Earth would take a long time even if you eliminated solar energy because we know of organisms that live underground.
'But extinguishing humans could happen fast, especially since humans are not rational creatures for the most part.'
What happens when the sun starts to dim?
At a distance of around 93 million miles (150 million kilometres) from Earth, the sun delivers about 1,365 watts per square metre of energy, a figure scientists call the solar constant.
About 30 per cent of that energy is reflected back into space, while the remainder is absorbed, warming the Earth's atmosphere and surface.
Currently, our planet is holding on to more energy than it loses – but it wouldn't take much to tip the balance.
If the sun's brightness were to drop or if something prevented our atmosphere from absorbing the energy, then Earth could start to rapidly cool.
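The back-of-the-envelope physics is easy to check. This minimal Python sketch (assuming the quoted solar constant of 1,365 W/m² and 30 per cent reflectivity, and deliberately ignoring greenhouse warming and all climate feedbacks) computes the planet's radiative equilibrium temperature for a dimming sun:

```python
# Radiative-balance sketch: equilibrium temperature of an airless,
# feedback-free Earth. Illustrative only; the real climate response
# involves greenhouse gases, ice-albedo feedback, and ocean inertia.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar_constant, albedo=0.3):
    """Temperature (K) at which outgoing thermal radiation balances absorbed sunlight."""
    absorbed = solar_constant * (1 - albedo) / 4  # averaged over the whole sphere
    return (absorbed / SIGMA) ** 0.25

t0 = equilibrium_temp(1365.0)
for dim in (0.01, 0.05):
    t = equilibrium_temp(1365.0 * (1 - dim))
    print(f"{dim:.0%} dimmer sun: equilibrium cools by {t0 - t:.2f} K")
```

Even in this stripped-down model, a 1 per cent dimming cools the equilibrium by roughly 0.6°C and a 5 per cent dimming by over 3°C, before any amplifying feedbacks are counted.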
Professor Lucie Green, an expert on the sun from University College London, told Daily Mail: 'The Sun does naturally vary in brightness, but not by very much!
'The technical term is total solar irradiance. This is slightly variable, with the variability being a result of changes during the Sun’s 11–year sunspot cycle.'
These fluctuations are barely noticeable on Earth, but there have been much more dramatic shifts in the past.
What would happen if the sun started to dim
If the sun started to dim, the total energy Earth receives would fall.
Eventually, Earth would start to lose more energy to the vacuum of space than it was gaining from the sun.
After this point, Earth would begin to rapidly cool.
Around 0.6°C (1.1°F) of cooling would start to cause crops to fail in Europe due to a lack of warm weather.
By the time temperatures fell by 2°C (3.6°F), widespread famine could kill billions of people.
When global temperatures fall 6°C (10.8°F) lower, the Earth would enter a new Ice Age, and glaciers would cover most of the Northern Hemisphere.
By the time the sun was completely gone, temperatures would fall to –73°C (–100°F) and all life on Earth would go extinct.
Between 1645 and 1715, the sun went through a 70–year quiet period known as the Maunder Minimum.
Although the sun was only delivering 0.22 per cent less energy, some researchers think that this change was partially responsible for the deadly chill.
If Project Hail Mary's predictions came true and the sun's radiation continued to fall by one per cent, the results would soon become catastrophic.
As Earth would be losing more energy into space than it gained from the sun, global temperatures would soon fall several degrees below average.
Worryingly, Earth's history shows that even relatively small changes in the planet's average temperature can have a massively outsized impact.
During the Little Ice Age, less than a degree Celsius of cooling led to mass famine throughout Northern Europe.
Cold winters and cool summers led to crop failures, while the sea became so cold that Norse colonies in Greenland were cut off by the ice and collapsed through starvation.
According to a recent study, global cooling of just 1.8°C (3.25°F) would cut production of maize, wheat, soybeans and rice by as much as 11 per cent.
However, if Project Hail Mary came true and the sun cooled by one to five per cent in 20 years, the effects on the climate would be even more devastating.
In Project Hail Mary, the teacher turned astronaut Ryland Grace, played by Ryan Gosling in the upcoming film, remarks: 'That would mean an ice age. Like... right away. Instant ice age.'
That might sound dramatic, but scientists agree that it might not take much cooling for ice to reclaim the world.
According to a recent study from the University of Arizona, the average temperature during the last Ice Age, 20,000 years ago, was just 6°C (10.8°F) colder than today.
During this time, glaciers covered about half of North America, Europe and South America and many parts of Asia.
Dr Becky Smethurst, astrophysicist at the University of Oxford, told Daily Mail: 'A drop in energy of one per cent from the Sun would trigger a new Ice Age on Earth, with the polar ice caps expanding further towards the equator.
Just like the 2004 movie 'The Day After Tomorrow' (pictured), these major changes to the Earth's climate would eventually culminate in a new Ice Age that could wipe out life on Earth
'Many ecosystems would collapse as the weather changed, farming would fail, and there would be severe food shortages. As a species, humans would likely survive this change thanks to modern technology, although we'd most likely be living underground.'
What would happen if the sun completely cooled?
Although humanity might be able to survive a global ice age, the situation would be very different if the sun completely vanished.
Within a week, the Earth's surface would fall below –18°C (0°F) and within a year it would dip below –73°C (–100°F).
Eventually, after cooling for millions of years, the planet would stabilise at a frigid –240°C (–400°F).
However, humanity would be long gone well before the planet ever got to that point.
Some humans might be able to cling on in the deepest parts of the ocean, using hydrothermal vents for warmth.
But once the oceans freeze over, there would be very little hope for anyone to survive.
Dr Alexander James, a solar scientist from University College London, told Daily Mail: 'From a fundamental viewpoint, if the Sun completely faded, there would be no more light, meaning all our green plant life would be unable to photosynthesise.
'That means plants would not be producing oxygen, which, of course, we need to live. Temperatures would also plummet, so I don’t see how the majority of life as we know it would be able to survive without our Sun.'
Could this ever really happen?
Thankfully, scientists say there's no way that the sun could cool as fast as in Project Hail Mary.
Although the sun's activity does fluctuate, even in the most extreme events and quiet periods, the effects are not dramatic.
For example, many scientists have questioned how much the Maunder Minimum really contributed to the Little Ice Age during the 17th Century.
While most experts agree that a decline in solar activity did contribute to the cooling, other factors, such as volcanic activity, likely played a bigger role.
Additionally, most of the sun's natural variations are on a much smaller scale.
The amount of energy arriving from the sun usually only drops by 0.1 per cent during the solar cycle.
While large sunspots, cool regions on the solar surface, might cause a temporary dip as low as 0.25 per cent below average, this is nowhere near the five per cent change of Project Hail Mary.
In fact, many scientists believe that the sun cannot physically cool this fast.
Professor Michael Lockwood, a space environment physicist from the University of Reading, told Daily Mail: 'About half of the Sun's mass is in the radiative and convection zones outside the core – that is about a thousand billion billion billion kilograms.'
This enormous mass acts like a heat sink, storing colossal amounts of energy that would take billions of years to dissipate.
Professor Lockwood says: 'Roughly speaking, if the core ceased producing any energy, the power emitted by the Sun would only have dropped by about one per cent a million years later.
'Scientifically, anything faster than that is nonsense.'
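Lockwood's point can be sanity-checked with the classic Kelvin–Helmholtz timescale, a standard order-of-magnitude estimate of how long the Sun's stored thermal and gravitational energy could sustain its present output. This is a rough sketch using textbook solar values, not his actual calculation:

```python
# Order-of-magnitude check: how long could the Sun's stored energy
# keep it shining at its current luminosity if the core switched off?
# Kelvin-Helmholtz timescale t ~ G * M^2 / (R * L); a rough estimate.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.957e8     # solar radius, m
L_SUN = 3.828e26    # solar luminosity, W
YEAR = 3.156e7      # seconds per year

t_kh_years = G * M_SUN**2 / (R_SUN * L_SUN) / YEAR
print(f"Thermal reservoir timescale: ~{t_kh_years / 1e6:.0f} million years")
```

The answer comes out in the tens of millions of years, which is why even a total shutdown of the core could only change the Sun's output glacially, nothing like the one-per-cent-per-year drop in the film.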
So, even if the sun does start giving out on us, we will have plenty of time to find a better solution than sending out Ryan Gosling on a spaceship.
The Sun is a huge ball of electrically-charged hot gas that moves, generating a powerful magnetic field.
This magnetic field goes through a cycle, called the solar cycle.
Every 11 years or so, the Sun's magnetic field completely flips, meaning the sun's north and south poles switch places.
The solar cycle affects activity on the surface of the Sun, such as sunspots which are caused by the Sun's magnetic fields.
One way to track the solar cycle is by counting the number of sunspots.
The beginning of a solar cycle is a solar minimum, or when the Sun has the least sunspots. Over time, solar activity - and the number of sunspots - increases.
The middle of the solar cycle is the solar maximum, or when the Sun has the most sunspots.
As the cycle ends, it fades back to the solar minimum and then a new cycle begins.
Giant eruptions on the Sun, such as solar flares and coronal mass ejections, also increase during the solar cycle.
These eruptions send powerful bursts of energy and material into space that can have effects on Earth.
For example, eruptions can cause lights in the sky, called aurora, or impact radio communications and electricity grids on Earth.
Geoffrey Hinton, one of the three so-called “godfathers” of AI, never misses an opportunity to issue foreboding proclamations about the tech he helped create.
During an hour-long public conversation with Senator Bernie Sanders at Georgetown University last week, the British computer scientist laid out all the alarming ways in which he forecasts AI will completely upend society for the worse, seemingly leaving little room for human contrivances like optimism. One reason is that AI’s rapid deployment will be completely unlike past technological revolutions, which created new classes of jobs, he said.
“The people who lose their jobs won’t have other jobs to go to,” Hinton said, as quoted by Business Insider. “If AI gets as smart as people — or smarter — any job they might do can be done by AI.”
“These guys are really betting on AI replacing a lot of workers,” Hinton added.
Hinton pioneered the deep learning techniques that are foundational to the generative AI models fueling the AI boom today. His work on neural networks earned him a Turing Award in 2018, alongside University of Montreal researcher Yoshua Bengio and the former chief AI scientist at Meta Yann LeCun. The trio are considered to be the “godfathers” of AI.
All three scientists have been outspoken about the tech’s risks, to varying degrees. But it was Hinton who first began to turn the most heads when he said he regretted his life’s work after stepping down from his role at Google in 2023.
In his discussion with Sanders, Hinton reiterated these risks, adding that the multibillionaires spearheading AI, like Elon Musk, Mark Zuckerberg, and Larry Ellison, haven’t really “thought through” the fact that “if the workers don’t get paid, there’s nobody to buy their products,” per BI.
Previously, Hinton has said it wouldn’t be “inconceivable” that humankind gets wiped out by AI. He also believes we’re not that far away from achieving an artificial general intelligence, or AGI, a hypothetical AI system with human or superhuman levels of intelligence that is able to perform a vast array of tasks, which the AI industry is obsessed with building.
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI,” Hinton said in 2023. “And now I think it may be 20 years or less.”
Strikingly, Hinton now claims that the latest models like OpenAI’s GPT-5 “know thousands of times more than us already.”
While leading large language models are trained on a corpus of data vastly exceeding what a human could ever learn, many experts would disagree that this means the AI actually “knows” what it’s talking about. Moreover, many efforts to replace workers with semi-autonomous models called AI agents have failed embarrassingly, including in customer support roles that many predicted were the most vulnerable to being outmoded. In other words, it’s not quite set in stone that the tech will be able to replace even low-paying jobs so easily.
Nonetheless, never put it past your overlords to find a way to screw you over anyway. AI machines could be a great tool for carrying out imperial actions abroad; deploying AI robots to fight overseas would be great for the US military industrial complex, Hinton argued, since there wouldn’t be dead soldiers to cause “political blowback.”
“I think it will remove one of the main barriers to rich powerful countries just invading little countries like Grenada,” Hinton told Sanders.
It’s one small step for man — and one giant, badass layup for robot kind.
Researchers at the Hong Kong University of Science and Technology (HKUST) have programmed a Unitree G1 humanoid robot to play basketball, almost perfectly mimicking the skills of a human athlete.
A video shared by HKUST PhD student Yinhuai Wang shows the robot dribbling, taking jump shots, and even pivoting on one of its feet to evade the student’s attempts to block it from taking a shot.
Wang called it the “first-ever real-world basketball demo by a humanoid robot,” boasting that he “became the first person to record a block against a humanoid.”
It’s an impressive demo, showcasing how far humanoid robotics has come in a matter of years. Unitree, in particular, has stood out in an increasingly crowded field, with its G1 rapidly picking up new skills.
Wang and his colleagues are teaching robots how to play basketball through a system they’ve dubbed “SkillMimic,” which is described on his website as a “data-driven approach that mimics both human and ball motions to learn a wide variety of basketball skills.”
“SkillMimic employs a unified configuration to learn diverse skills from human-ball motion datasets, with skill diversity and generalization improving as the dataset grows,” the writeup continues. “This approach allows training a single policy to learn multiple skills, enabling smooth skill switching even if these switches are not present in the reference dataset.”
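The write-up's description of mimicking both human and ball motions is, at heart, a reward that scores how closely a simulated motion tracks a reference clip. A minimal sketch of that idea follows; the function name, weights, and exponential form are illustrative assumptions, not SkillMimic's actual implementation:

```python
import numpy as np

def imitation_reward(sim_human, sim_ball, ref_human, ref_ball,
                     w_human=0.7, w_ball=0.3, scale=2.0):
    """Toy imitation reward: exponentially penalize deviation of the
    simulated human joints and ball position from a reference
    motion-capture frame. Weights and scale are illustrative only."""
    human_err = np.mean((sim_human - ref_human) ** 2)
    ball_err = np.mean((sim_ball - ref_ball) ** 2)
    # Perfect tracking yields a reward near 1.0; large errors decay toward 0.
    return w_human * np.exp(-scale * human_err) + w_ball * np.exp(-scale * ball_err)

# A frame that matches the reference exactly earns the maximum reward.
ref_pose = np.zeros(3)
print(imitation_reward(ref_pose, ref_pose, ref_pose, ref_pose))
```

Training a single policy on many such clips, with the reward summed over time, is what lets one network absorb multiple skills from a growing dataset.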
While netizens were generally impressed by the robot’s basketball skills, some were a little more skeptical.
“Love that the programmer focused on showboating rather than fundamentals,” one wrote.
“Robots will do everything but fill the dishwasher,” another joked.
Others imagined a future in which bipedal robots dominate sports.
“Man, I hope I get to see proper robotics basketball leagues,” another Reddit user mused.
29-11-2025
Can AI ever achieve true consciousness?
Key takeaways
Objections to AI consciousness fall into computational, algorithmic, and physical categories.
The researchers sort the arguments into three tiers by strength: addressable shortcomings, practical obstacles in current technology, and fundamental impossibilities.
The framework clarifies the debate over AI consciousness and offers a roadmap for future research, ethics, and policy development in artificial intelligence (AI).
The question of whether artificial intelligence (AI) can achieve consciousness has long been the subject of intense debate among scientists, philosophers, and technologists. A recent study, Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints, seeks to clarify that complex discussion by developing a structured model for categorizing the various objections to digital consciousness.
Types of objections
The study acknowledges that arguments against AI consciousness often overlap or are misdirected. Some objections stem from the conviction that consciousness cannot be explained by purely computational processes, while others accept the principle of computational consciousness but argue that current digital systems lack the necessary architecture. Still other arguments reject the possibility of digital consciousness on the basis of insights from physics or biology rather than from the theory of computation.
A three-level analytical structure
To address this complexity, the authors propose a three-level analytical framework inspired by David Marr's model from cognitive science. The first level treats consciousness as an input-output correspondence governed by computable functions. The second level concerns the specific algorithms, architectures, and representational structures required to realize consciousness. The third level addresses objections holding that the physical substrate itself is essential to conscious experience.
This framework allows researchers, policymakers, and philosophers to identify similarities and differences, drawing a clear distinction between arguments against computational functionalism and arguments against digital consciousness.
Non-computable functions
Some critics claim that consciousness involves non-computable processes beyond the reach of Turing machines, while others argue that any computational model of consciousness would be too complex to implement at scale. The study also stresses the importance of dynamic coupling, suggesting that consciousness may require real-time interaction with an environment, something digital systems find difficult to replicate.
Algorithmic organization
At the algorithmic level, the debate concerns how algorithms are organized. Theories examine whether symbolic architectures, neural networks, or hybrid systems can generate conscious states. Some emphasize the need for continuous-valued analog processes that digital systems cannot fully emulate. Others stress synchronization and forms of representation that are essential to subjective experience but absent from current digital architectures.
This level also covers debates about embodiment and enactivism, which hold that consciousness arises only in bodies acting within an environment. On that view, large language models, despite their apparent intelligence, may lack the interactive properties essential for conscious states.
The physical substrate
Objections concerning the physical substrate impose the strictest constraints. These arguments focus on unique properties of biological brains that digital hardware cannot replicate. Theories in this category claim that consciousness depends on information embedded in biological networks, on the dynamics of electromagnetic fields in the brain, or even on quantum processes.
On these objections, digital AI systems are fundamentally incapable of achieving consciousness because of the crucial role of the physical substrate. The study stresses that such claims belong to the level of physics and biology rather than computer science, and that they still await empirical evidence about how consciousness arises in natural systems.
A three-tier evaluation system
To clarify the strength of each objection, the study applies a three-tier evaluation system. Some objections imply that machine consciousness is possible, provided certain capabilities or architectures are adopted. Others point to practical obstacles that make conscious AI unlikely with current technology. The strongest objections hold that digital systems can never become conscious, regardless of technological progress.
This classification distinguishes conceptual, technological, and metaphysical objections. It also highlights areas where empirical research could resolve disagreements, as well as domains that call for further philosophical inquiry.
A framework for governance, ethics, and AI development
The study concludes by discussing the framework's practical implications for governance, ethics, and AI development. As AI models increasingly influence critical decisions and interact with people in sophisticated ways, understanding the full spectrum of arguments about digital consciousness is essential. The proposed classification can help policymakers develop well-considered regulation, draw up ethical guidelines, and support AI developers in making responsible claims about their systems' capabilities. (fc)
AgiBot humanoid robot patrols at the waiting hall of Jinhua railway station on the first day of the Spring Festival travel rush on January 14, 2025 in Jinhua, Zhejiang Province of China.
The Chinese robotics company AgiBot has set a new world record for the longest continuous journey walked by a humanoid robot. AgiBot’s A2 walked 106.286 kilometers (66.04 miles), according to Guinness World Records, making the trek from Nov. 10-13.
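As a quick sanity check on the reported figures, the kilometer-to-mile conversion is easy to verify (only the standard conversion factor is assumed; the distance comes from the article):

```python
# Verify the reported distance conversion: kilometers to miles.
KM_PER_MILE = 1.609344  # international mile

distance_km = 106.286
distance_miles = distance_km / KM_PER_MILE
print(round(distance_miles, 2))  # 66.04, matching the reported figure
```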
The robot journeyed from Jinji Lake in China’s Jiangsu province to Shanghai’s Bund waterfront district, according to China’s Global Times news outlet. The robot never powered off and reportedly continued to operate while batteries were swapped out, according to UPI.
A video posted to YouTube shows a highly edited version of the walk that doesn’t give much insight into how it was presumably monitored by human handlers. But even if it did have some humans playing babysitter, the journey included just about everything you’d expect when traveling by foot in an urban environment, including different types of ground, limited visibility at night, and slopes, according to the Global Times.
The robot obeyed traffic signals, but it’s unclear what level of autonomy may have been at work. The company told the Global Times that “the robot was equipped with dual GPS modules along with its built-in lidar and infrared depth cameras, giving it the sensing capability needed for accurate navigation through changing light conditions and complex urban environments.”
That suggests it was fully autonomous, and the Guinness Book of World Records used the word “autonomous,” though Gizmodo couldn’t independently confirm that claim.
“Walking from Suzhou to Shanghai is difficult for many people to do in one go, yet the robot completed it,” Wang Chuang, partner and senior vice president at AgiBot, told the Global Times.
The amount of autonomy a robot is operating under is a big question when it comes to companies rolling out their demonstrations. Elon Musk’s Optimus robot has been ridiculed at various points because the billionaire has tried to imply his Tesla robot is more autonomous than it actually is.
For example, Musk posted a video in January 2024 that appeared to show Optimus folding a shirt. That’s historically been a difficult task for robots to accomplish autonomously. And, as it turns out, Optimus was actually being teleoperated by someone who was just off-screen. Well, not too far off-screen. The teleoperator’s hand was peeking into the frame, which is how people figured it out.
Tesla’s Optimus robot folding laundry in Jan. 2024 with an annotation of a red arrow added by Gizmodo showing the human hand. Gif: Tesla / Gizmodo
Musk did something similar in October 2024 when he showed off Optimus robots supposedly pouring beer during his big Cybercab event in Los Angeles. They were teleoperated as well.
It’s entirely possible that AgiBot’s A2 walked the entire route autonomously. The tech really is getting that good, even if long-lasting batteries are still a big hurdle. But obviously, people need to remain skeptical when it comes to spectacular claims in the robot race.
We’ve been promised robotic servants for over a century now. And the people who have historically sold that idea are often unafraid to use deception to hype up their latest achievements. Remember Miss Honeywell of 1968? Or Musk’s own unveiling of Optimus? They were nothing more than humans in a robot costume.
25-11-2025
Disney brings Olaf to life! AI-powered snowman robot can walk and talk just like the Frozen character - as delighted fans say 'it's like he jumped right off the screen'
Disney has brought one of its most legendary characters to life – and he's seriously worth melting for.
Measuring just three feet (one metre) tall, Olaf the robot can walk and talk just like the delightful eternally optimistic snowman from the Frozen movies.
The high-tech device can move around through a combination of remote operation and AI that's programmed to adapt to surroundings.
Video shows him curiously shuffling around Disneyland Paris just like he does in the films – and visitors will soon be able to meet him.
Kyle Laughlin, product and technology leader at Disney, called it 'one of the most expressive and true-to-life characters built'.
'From the way he moves to the way he looks, every gesture and detail is crafted to reflect the Olaf audiences have seen in the film,' he said.
On Instagram, Frozen fans were sent into delirium, calling the creation 'incredible', 'perfect' and 'so good it's making my tummy hurt'.
Another fan said: 'It's like he jumped right off the screen into real life.'
Some people are worth melting for: The walking, talking bot, unveiled at Disneyland Paris on Monday, has movements just like in the film, as well as detachable arms and nose
In a nod to one of his most famous lines, another person said: 'I need a warm hug please Olaf.'
Olaf was built by Disney Imagineering, the multi-billion-dollar company's research and development arm based in Glendale, California.
To make the snowman's movements as authentic as possible, engineers relied on a type of AI called reinforcement learning.
This is where a robot learns to make intelligent decisions through trial and error by interacting with its environment.
The technology allows Olaf the robot to practice thousands of motions inside a computer simulation until his real-life movements look natural.
'It takes humans years to perfect our motor skills for walking, and it takes additional years of practice to perform acrobatic motions that only a few of us can master,' Laughlin said.
'Deep reinforcement learning is a technology that helps robots acquire such skills in a shorter amount of time.
'To make this technology scale well, we need fast and parallel simulation.'
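Trial-and-error learning of the kind Laughlin describes can be illustrated with a deliberately tiny example: an agent that knows nothing about three candidate "motions" repeatedly tries them, observes noisy rewards, and converges on the best one. This is a generic epsilon-greedy sketch, not Disney's system; the payoff values are invented for illustration:

```python
import random

# Toy reinforcement learning: learn by trial and error which of three
# "motions" yields the highest reward, with no prior knowledge.
random.seed(0)
true_reward = [0.2, 0.5, 0.9]   # hidden payoff of each motion (illustrative)
estimates = [0.0, 0.0, 0.0]     # the agent's running reward estimates
counts = [0, 0, 0]

for step in range(2000):
    # Explore randomly 10% of the time, otherwise exploit the best estimate.
    if random.random() < 0.1:
        a = random.randrange(3)
    else:
        a = estimates.index(max(estimates))
    reward = true_reward[a] + random.gauss(0, 0.1)       # noisy feedback
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running average

print(estimates.index(max(estimates)))  # 2: the agent settles on the best motion
```

Disney's point about fast, parallel simulation is that thousands of such trial loops, far more complex than this one, can run side by side in a physics simulator before the real robot ever takes a step.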
Olaf talking about the new World of Frozen, inside the Disney Adventure World (former Walt Disney Studios) in Disneyland Paris, in Marne-la-Vallee, east of Paris on November 24, 2025
A hidden operator remotely controls the robot using joysticks, but its AI is programmed to adapt to its surroundings
To make Olaf as authentic as possible, the team used a branch of artificial intelligence called reinforcement learning
How does Olaf work?
Disney has relied on an AI method called reinforcement learning so Olaf can adapt to its surroundings and people.
Like Disney's Star Wars BDX Droids, he changes direction through remote operation by a staff member.
It's unclear if Olaf utters a range of pre-recorded responses or has a more sophisticated conversational capability akin to ChatGPT.
Despite this feat, Olaf is still ultimately operated by a Disney staff member holding a remote control, Tech Radar reports.
Other nice touches are Olaf's exterior 'snow' made from light-catching iridescent fibres, which is a striking contrast to the hard shells of other robotic characters.
And just like in the film, his twig arms and carrot nose are removable and pop right back on.
Most importantly, Olaf can fully articulate his mouth and engage in Frozen-related conversations with park guests, Disney claims.
As yet, it's unclear how sophisticated these conversations will be or if they'll be powered by a generative chatbot akin to ChatGPT.
Olaf is similar to the Star Wars BDX Droids, the free-roaming robotic characters that visitors can see at Disney attractions.
'However, BDX Droids in the films are literally robotic characters,' Laughlin said.
'Olaf is an animated character that is far more challenging to bring to life in the physical world.'
Olaf at the new World of Frozen, inside the Disney Adventure World (former Walt Disney Studios) in Disneyland Paris, in Marne-la-Vallee, east of Paris on November 24, 2025. The Disney Adventure World will open on March 29, 2026
In the bizarre clip, two life-size robots wearing gloves and protective headgear fight each other in a ring as a human referee looks on.
Each fighter robot weighs about 35kg and is 4.3ft (132cm) tall – roughly the height of the average eight-year-old child.
Both bots initially have trouble seeing exactly where their opponent is before successfully trading punches and kicks, to the delight of a baying crowd.
For the most part, robotics developers have kept their projects on the lighter side, even if they occasionally creep us out. But that might be changing thanks to the Chinese automotive company XPeng, which just released a clip showcasing its “Iron” humanoid robot.
The clip, which circulated on social media, shows a robotic exoskeleton strapped to a harness walking in a straight line. While XPeng’s Iron units are usually clad in a sleek white skin, this one is totally nude — and it looks like something straight out of the post-apocalyptic RPG Fallout, or like the titular antagonists from the “Terminator” franchise.
The bot also cleans up nicely. During an event this week, the company showed it off dressed in a cloth bodysuit that gave it a distinctly feminine profile.
The bot displayed some pretty slick motion, swaying its hips as it walks.
“We’re at a point now where robots can move more sensually than Taylor Swift,” one Redditor commented.
“I am kind of blown away that they can get motors to work in such an elegant way. I assumed it was soft body mechanics,” wrote another. “Wow.”
Iron made its debut on Wednesday, when XPeng CEO He Xiaopeng introduced the unit as the “most human-like” bot on the market to date. Per Humanoids Daily, the robot features “dexterous hands” with 22 degrees of freedom, a “human-like spine,” gender options, and a digital face.
According to He, the bot also contains the “first all-solid-state battery in the industry,” as opposed to the liquid electrolyte typically found in lithium-ion batteries. Solid-state batteries are considered the “holy grail” for electric vehicle development, a design choice He says will make the robots safer for home use.
A humanoid robot has reached new depths of the uncanny valley with its smooth, humanlike movements.
Chinese electric vehicle manufacturer Xpeng revealed its latest robot, dubbed the Xpeng IRON, at an event last week.
The bot proved so eerily lifelike that its inventors were forced to cut it open on stage to prove there wasn't a person hiding inside.
Luckily for the assembled crowds, all this bold stunt revealed was sophisticated synthetic muscles, rather than real flesh and blood.
Xpeng says IRON's creepily realistic walk is a result of its unique AI, which enables it to physically react to its surroundings.
Each robot is also coated with a 'full coverage' synthetic skin, which supposedly makes it 'feel warmer and more intimate'.
On social media, fans have been blown away by the impressive design, with commenters hailing IRON as 'beyond belief'.
One amazed commenter wrote: 'For the first time in human history, a robot needs to prove that it is a machine.'
Xpeng, a Chinese EV manufacturer, has unveiled a humanoid robot which is so realistic that the company was forced to cut the bot open live on stage
People were so convinced that the new IRON robot was human that the engineers cut off its skin to reveal the robot beneath
Xpeng unveiled its new humanoid robot at the company's 2025 XPENG AI Day in Guangzhou, China.
And, after facing a series of claims that its robots are fake, Xpeng decided to settle the debate once and for all.
In front of the assembled crowd, an engineer took a large pair of scissors and carefully cut through the outer layer of synthetic skin.
Rather than revealing a hidden human as some critics had suspected, the engineer only revealed a sleek metal limb.
The company says that its robot's 'cat-like' gait was produced by a hidden system of artificial muscles, flexible bones, and a synthetic spine.
The robot is powered by three custom AI chips that together perform 2,250 trillion operations per second (TOPS), making it one of the most advanced humanoid robots in existence.
This allows it to move with unnervingly human-like balance and poise.
However, IRON's true uncanny factor comes from its lifelike endoskeleton, which gives its body shape, and a layer of flexible skin.
After cutting away the synthetic skin, the inventors revealed the powerful artificial muscles below. These are what give the IRON robot its human-like walk
IRON is made especially creepy by its synthetic skin and 'endoskeleton', which gives the robot a realistic body shape
In the future, Xpeng says that customers will be able to customise the build and skin tone of their robots, choosing slimmer or stockier designs.
He Xiaopeng, chairman and CEO of Xpeng Motors, said at the event: 'In the future, robots will be life partners and colleagues.
'I suspect that, just like when you buy a car, you can choose different colours, exteriors, and interiors. In the future, when you buy a robot, you can choose the sex, hair length, or clothing for your desired purpose.'
On social media, some tech enthusiasts flocked to share their amazement at the innovation.
One impressed commenter wrote: 'It's a compliment to the engineers that people thought it was a disguised human.'
'An engineer should be proud if people call their work fake or a hoax,' another chimed in.
While one added: 'This robot's movement is too human to believe it is actually a robot.'
However, even though Xpeng cut its robot open on stage, not everyone was convinced.
Even after the demonstration, some social media users were not convinced, claiming that there was an amputee with a metal leg inside the suit
Xpeng says that its robots will be available to work in factories from 2026, but it has ruled out providing them for domestic settings
'It's a woman with a prosthetic leg,' one commenter wrote.
Another added: 'I laughed so hard when they started cutting out the part that was hiding the prosthetic leg.'
Another demanded: 'Cut open all of it! It could be an amputee!'
Of course, there is no evidence to suggest Xpeng has faked its robot, and IRON is actually the second generation of humanoid bots made by the company.
Mr He says that the first IRON robots will start appearing at Xpeng locations in 2026.
The company didn't confirm how much each robot would cost, and did not respond to Daily Mail's request for additional information.
However, these humanoid bots won't be folding your laundry or doing the dishes for you quite yet.
Xpeng has ruled out offering its robots for domestic settings, noting that a powerful autonomous robot in a cluttered, unpredictable environment poses obvious safety risks.
Physical jobs in predictable environments, including machine-operators and fast-food workers, are the most likely to be replaced by robots.
Management consultancy firm McKinsey, based in New York, focused on the number of jobs that would be lost to automation, and which professions were most at risk.
The report said collecting and processing data are two other categories of activities that increasingly can be done better and faster with machines.
This could displace large amounts of labour - for instance, in mortgages, paralegal work, accounting, and back-office transaction processing.
Conversely, jobs in unpredictable environments are least at risk.
The report added: 'Occupations such as gardeners, plumbers, or providers of child- and eldercare - will also generally see less automation by 2030, because they are technically difficult to automate and often command relatively lower wages, which makes automation a less attractive business proposition.'
Xpeng's new humanoid, IRON, is designed to work alongside people — but it won't be folding your laundry anytime soon.
Chinese electric vehicle (EV) maker Xpeng has unveiled a new humanoid robot with such lifelike movements that company representatives felt compelled to slice it open onstage to prove a human wasn't hiding inside.
Fortunately for the audience, there wasn't. Instead, the robot, named "IRON," features a flexible, humanlike spine, articulated joints and artificial muscles that allow it to move with a model-like swagger.
This is thanks to Xpeng's custom artificial intelligence (AI) robotics architecture, which enables it to interpret visual inputs and respond physically without needing to first translate what it sees into language.
Speaking during IRON's unveiling at Xpeng's AI Day in Guangzhou, China, on Nov. 5, He Xiaopeng, chairman and CEO of Xpeng Motors, suggested that IRON's appearance was designed to be recognizably human — if slightly unsettling.
The machine is equipped with 82 degrees of freedom, including 22 in each hand, allowing it to bend, pivot and gesture at multiple points throughout its body, representatives said in a statement.
It's powered by three custom AI chips that give it a combined 2,250 trillion operations per second (TOPS) of computing power, which Xpeng says makes it one of the most powerful humanoid robots developed to date. For comparison, Intel's Core Ultra 200V series processor, fitted into some of the best laptops, can achieve just 120 TOPS.
IRON man
IRON is based on what its creators call a "born from within" design, a concept reflecting the way the robot mimics the human body from the inside out.
The robot features an internal endoskeleton and bionic muscle structure capable of supporting different body types, ranging from slim to stocky, which users can customize. Its outer layer is also made from "full-coverage" synthetic skin, He said during the presentation, making the robot "feel warmer and more intimate."
"The next generation has very flexible bones, solid bionic muscles, and soft skin. We hope it can have a similar height and proportions to human beings," He said. "In the future, robots will be life partners and colleagues. I suspect that, just like when you buy a car, you can choose different colors, exteriors, and interiors. In the future, when you buy a robot, you can choose the sex, hair length, or clothing for your desired purpose."
According to Xpeng, IRON is also the first humanoid robot in the world to run on an all-solid-state battery. Solid-state batteries use ceramics or polymers instead of the flammable liquids in conventional lithium-ion batteries, making them safer for the enclosed environments where the robot is designed to operate.
IRON is destined for mass production, although Xpeng ruled out household chores for the immediate future, pointing out that a humanoid robot operating in messy or unpredictable households could pose safety risks. Instead, it will debut in commercial settings such as stores, offices and company showrooms, with the first models expected to appear in Xpeng locations in 2026.
The announcement forms part of Xpeng's broader push into "physical AI," which aims to bring together robotics, autonomous vehicles and AI development under a unified platform. Earlier this year, the company revealed a prototype flying car designed to launch from a Cybertruck-style mobile base.
Humanoid robots have been having something of a moment in recent months. In October, Chinese robotics startup Unitree debuted its pirouetting, karate-kicking H2 model. Unlike IRON, Unitree's bot has yet to be given an official release date, meaning Xpeng's bot may well beat it to the shop floor (or office reception).
The compact robot enables people with limited mobility to navigate complicated environments where wheeled devices can't go.
(Image credit: Toyota/Japan Mobility Show 2025)
A robot chair revealed at the Japan Mobility Show 2025 can navigate complicated environments on its four articulated legs.
While the chair is still a prototype, it aims to allow users with limited mobility to climb stairs or cross other obstacles that would be impassable by traditional wheelchairs. It's also capable of lifting the user so they can access cars and other elevated vehicles or platforms.
Developed by Toyota, the Walk Me prototype features four foldable legs and a seat designed to support proper posture. The legs are swaddled in a soft, colorful material that serves the dual purpose of protecting the sensitive internals (like sensors and motors) from external damage, while also giving the unit a pleasant, approachable aesthetic.
Toyota’s “Walk Me” Wheelchair Walks on Legs and Climbs Stairs – The Future of Mobility Is Here - YouTube
The legs are wholly independent, with each bending, lifting or folding to aid manoeuvrability. When not in use, the legs can also fold away neatly beneath the robot, allowing it to be packed into a car or luggage for easy transport. The system can also unfold and stabilize itself without user assistance.
Described as an "autonomous wheelchair," the bot is packed with features that allow it to navigate difficult terrain by mimicking the movement of legged animals such as crabs. These include LiDAR systems that use laser light to measure distances and create highly accurate, detailed three-dimensional representations of objects and environments, which the robot uses to dodge obstacles or handle uneven surfaces.
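The LiDAR principle mentioned here is simple to sketch: each laser beam returns a distance at a known angle, and those polar readings convert directly into point coordinates the robot can check for obstacles. A simplified 2-D illustration (not Toyota's actual pipeline; real units use dense 3-D scans):

```python
import math

def scan_to_points(ranges, start_angle=0.0, step=math.pi / 180):
    """Convert a 2-D LiDAR scan (one range reading per beam angle,
    one-degree steps by default) into (x, y) points."""
    points = []
    for i, r in enumerate(ranges):
        angle = start_angle + i * step
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

def nearest_obstacle(points):
    """Distance to the closest detected point -- a crude collision check."""
    return min(math.hypot(x, y) for x, y in points)

# Three beams returning obstacles at 2.0 m, 1.5 m and 3.0 m.
pts = scan_to_points([2.0, 1.5, 3.0])
print(round(nearest_obstacle(pts), 2))  # 1.5
```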
When climbing stairs, the unit first tests the height with its front legs before pushing upward with its rear limbs. There are also built-in collision radars to avoid contact with people or objects.
Additionally, the Walk Me has built-in weight sensors to ensure that the user remains in a stable, seated position. Toyota's engineers studied the way people naturally navigate stairs and how they distribute their weight when moving around or over obstacles. If the robot senses an imbalance, it can adjust both its legs as well as the tilt of the seat itself to ensure the user is comfortable and secure.
There are also a number of manual control options. Handles are attached to the seat that allow the user to guide the robot's direction. Alternatively, a digital interface provides specific buttons to control locomotion precisely. The Walk Me will also respond to voice commands that include preset destinations like "living room" and speed controls like "slower" or "faster."
The unit is powered by a battery concealed behind the seat, which can power it for an entire day of operation. The battery is charged by plugging it into a standard wall outlet overnight.
The Walk Me was part of a broader product lineup shown by Toyota at the Japan Mobility Show, which also included an autonomous, self-driving car for kids and a "Land Cruiser of wheelchairs" with extra-rugged, all-terrain tires and a durable frame. According to Top Gear, the wheelchair was inspired by Toyota's chairman Akio Toyoda who, at 69, wants to be able to "drift, do donuts and race off-road into his retirement."
Since Boston Dynamics first teased its BigDog robot in 2004, four-legged hound automatons have exploded in popularity. There are now dozens of robot dogs in development, ranging from military and surveillance applications to companionship models that can carry groceries and talk back to their human owners.
One of the most distinctive uses for a quadrupedal robot we’ve seen yet is coming out of China, where the company Unitree has been hard at work developing robodogs that can assist firefighters at the site of dangerous blazes.
Called “Fire Rescue” units, the robots are essentially beefed-up models of the Unitree B2. According to Unitree’s website, the Fire Rescue platform allows public safety officials to kit out their B2s with modular components, allowing them to spray water and foam, fight wildfires with air cannons, transmit data and video from inside burning structures, and carry equipment for rescuers.
Trial footage of the B2 Fire Rescue bot in action quickly made the rounds on Chinese and Western social media. The short clip shows a firefighter attaching a high-pressure hose to the back of a unit, which springs up and advances toward a brush fire.
Controlled by a teleoperator, the device positions itself in front of the fire, dousing it in a stream of water.
On the Chinese-language app RedNote, one user commented that “this is the direction of technological development: to help people, not replace them.”
Whether these units make their way to the rest of the world remains to be seen. On Reddit, Western netizens wondered if the devices would weigh enough to withstand the high pressure typical of US handlines, the hoses firefighters carry by hand to directly attack fires.
“I’m hoping dog has some heavy weight, but if not you’ll need several dogs to hold down the hose,” one Redditor commented. “Those things ain’t no joke, the pressure is insane.”
According to the Unitree website, the B2 Fire Rescue module is rated for a water flow rate of 40 liters per second, though it’s not known what kind of water flow or pressure is used in the video. (For reference, the Fire Department of New York uses an angled hose nozzle for high-rise fires which flows at 16.7 liters per second.)
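The two flow figures quoted above can be compared directly. The arithmetic below only restates the article's numbers (40 L/s for the B2 module versus 16.7 L/s for the FDNY high-rise nozzle); the gallons-per-minute conversion is a standard unit conversion added for readers used to GPM, not a figure from either source.

```python
# Compare the two flow rates quoted in the article, and convert the B2
# figure to US gallons per minute (1 L ≈ 0.264172 US gal).

B2_FLOW_LPS = 40.0      # Unitree B2 Fire Rescue module, per Unitree
FDNY_NOZZLE_LPS = 16.7  # FDNY high-rise angled nozzle, per the article

ratio = B2_FLOW_LPS / FDNY_NOZZLE_LPS
b2_gpm = B2_FLOW_LPS * 0.264172 * 60

print(f"B2 module flows about {ratio:.1f}x the FDNY high-rise nozzle")
print(f"B2 flow ≈ {b2_gpm:.0f} US GPM")
```

On the article's numbers, the B2 module is rated for roughly 2.4 times the flow of the FDNY high-rise nozzle — which is why the Redditors quoted above wondered whether the robot is heavy enough to hold the hose down.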
Either way, it’s a fascinating look at a new use for robot dogs, which until now were looking more like weapons of war than tools for the good of humanity.