The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANCIENT HISTORY, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world - Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that carry out in-depth research, even if they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, managed by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON Reseau MUFON/EUROP, which allows us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Up To Date!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, just like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the skies and beyond.
30-09-2025
Huge fertility breakthrough as scientists create functional eggs from human SKIN cells: 'A step towards helping many women have their own genetic children'
Infertility is something that affects millions of people around the world – often caused by problems with the egg.
Now, scientists have taken a huge step towards helping many women have their own genetic children.
Experts from Oregon Health & Science University have created fertilizable eggs from human skin cells for the very first time.
While further research is needed to ensure safety and efficacy before clinical trials can go ahead, experts have described the news as a 'major advance'.
'Many women are unable to have a family because they have lost their eggs, which can occur for a range of reasons including after cancer treatment,' said Professor Richard Anderson, Deputy Director of MRC Centre for Reproductive Health at the University of Edinburgh, who was not involved in the study.
'The ability to generate new eggs would be a major advance.
'This study shows that the genetic material from skin cells can be used to generate an egg–like cell with the right number of chromosomes to be fertilised and develop into an early embryo.
'There will be very important safety concerns but this study is a step towards helping many women have their own genetic children.'
For some couples struggling to conceive, in vitro fertilisation (IVF) can be an option.
This treatment sees the eggs fertilized by sperm in a lab, and the resulting embryo then placed in the woman's uterus.
However, if there's a problem with the egg itself, IVF can be ineffective.
Previous studies have suggested that a method called 'somatic cell transfer' could be an alternative approach.
This process involves transplanting the nucleus from one of a patient's own somatic cells (such as skin cells) into a donor egg cell with the nucleus removed, enabling the cell to differentiate into a functional egg.
However, while standard eggs have half the usual number of chromosomes (one set of 23), cells generated from skin cells have two sets of chromosomes (46).
Without intervention, this would cause the differentiated eggs to have an extra set of chromosomes.
Until now, a method to remove this extra set had only been developed and tested in mice – it had yet to be tried in humans.
In their new study, the team resolved this issue by inducing a process they've named 'mitomeiosis'.
'[Mitomeiosis] mimics natural cell division and causes one set of chromosomes to be discarded, leaving a functional gamete,' the researchers explained in a statement.
During tests, the researchers were able to produce 82 functional eggs using this process, which were then fertilised in a lab.
Approximately nine per cent went on to develop to the blastocyst stage of embryo development.
However, the researchers did not culture the blastocysts beyond this point, which coincided with the time at which they would usually be transferred to the uterus in IVF treatment.
While the findings raise the tantalising possibility of women with problems with their eggs having their own genetic children, the experts note several limitations with their study.
Importantly, the vast majority (91 per cent) did not progress beyond fertilisation.
What's more, several of the blastocysts were found to contain chromosomal abnormalities.
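Putting the reported numbers side by side, a quick illustrative tally of the figures quoted above (not additional data from the study) looks like this:

```python
# Illustrative tally of the outcomes quoted above.
eggs_fertilised = 82
blastocyst_rate = 0.09   # "approximately nine per cent"

blastocysts = round(eggs_fertilised * blastocyst_rate)   # ~7 embryos
failed = eggs_fertilised - blastocysts
print(f"~{blastocysts} reached the blastocyst stage; "
      f"~{failed} ({failed / eggs_fertilised:.0%}) did not progress beyond fertilisation")
```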
Regardless, experts have called the research an 'exciting proof of concept'.
'This breakthrough, called mitomeiosis, is an exciting proof of concept,' said Professor Ying Cheong, a professor of reproductive medicine at the University of Southampton, who was not involved in the research.
'In practice, clinicians are seeing more and more people who cannot use their own eggs, often because of age or medical conditions.
'While this is still very early laboratory work, in the future it could transform how we understand infertility and miscarriage, and perhaps one day open the door to creating egg- or sperm-like cells for those who have no other options.'
In-vitro fertilisation, known as IVF, is a medical procedure in which a woman has an already-fertilised egg inserted into her womb to become pregnant.
It is used when couples are unable to conceive naturally, and a sperm and egg are removed from their bodies and combined in a laboratory before the embryo is inserted into the woman.
Once the embryo is in the womb, the pregnancy should continue as normal.
The procedure can be done using eggs and sperm from a couple or those from donors.
Guidelines from the National Institute for Health and Care Excellence (NICE) recommend that IVF should be offered on the NHS to women under 43 who have been trying to conceive through regular unprotected sex for two years.
People can also pay for IVF privately, which costs an average of £3,348 for a single cycle, according to figures published in January 2018, and there is no guarantee of success.
The NHS says success rates for women under 35 are about 29 per cent, with the chance of a successful cycle reducing as they age.
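As a rough, illustrative calculation from the figures above (and assuming, unrealistically, that every cycle has the same independent chance of success), the quoted per-cycle rate and private price imply roughly the following for a woman under 35 paying privately:

```python
# Illustrative estimate only; assumes each cycle has the same independent chance of success.
success_per_cycle = 0.29        # NHS-quoted success rate for women under 35
cost_per_cycle_gbp = 3348       # average private price per cycle (January 2018 figure)

expected_cycles = 1 / success_per_cycle                  # mean of a geometric distribution
expected_cost_gbp = expected_cycles * cost_per_cycle_gbp
print(f"~{expected_cycles:.1f} cycles on average, ~£{expected_cost_gbp:,.0f} if paid privately")
```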
Around eight million babies are thought to have been born due to IVF since the first ever case, British woman Louise Brown, was born in 1978.
Chances of success
The success rate of IVF depends on the age of the woman undergoing treatment, as well as the cause of the infertility (if it's known).
Younger women are more likely to have a successful pregnancy.
IVF isn't usually recommended for women over the age of 42 because the chances of a successful pregnancy are thought to be too low.
Zeno Power has entered into a strategic agreement with Orano to power space batteries with recycled radioisotopes. The French company will supply americium-241 (Am-241) extracted during the reprocessing of spent nuclear fuel at the La Hague plant, and Zeno will invest millions to secure priority access to the isotope. Am-241 will be used as fuel for radioisotope power systems (RPS) that Zeno is developing for NASA — specifically for lunar rovers, landers, and future infrastructure on the Moon. This solves the problem of plutonium-238 shortage and expands the possibilities for long-term autonomous power supply in space.
Visualization of a Zeno-based lunar rover with a plutonium-based nuclear battery. Source: Orano
The key advantage of Am-241 is its long half-life — more than 430 years — which means that power systems can operate for decades, including during lunar nights and in permanently shaded regions near the poles. Orano will extract the isotope from spent fuel, transforming something previously considered waste into a strategic resource for space energy.
The companies have been working together since at least 2022; the new agreement establishes a stable supply chain for mass production of batteries. At the same time, Zeno is developing strontium-90 nuclear batteries for marine applications under contracts with the US Department of Defense, building a multi-fuel portfolio of solutions ranging from deep sea to deep space.
The Zeno and Orano teams at the La Hague waste storage facility. Source: Orano
How does it work?
Inside the nuclear battery is a tiny pellet of americium-241. It slowly decays and continuously releases heat—like a tiny ember that does not go out for decades. This heat is fed to thermocouples (junctions of two different metals), and an electric current is generated via the Seebeck effect. No valves or gears – just steady heat converted into electricity with low but reliable efficiency. The secret lies in simplicity and durability: the fuel is extracted from recycled nuclear waste, so there is plenty of it; the half-life is long, so the battery works for years without recharging; and the compact shielded housing ensures safety and continuous operation even in the darkness, cold, and dust of the Moon or deep space.
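As a rough illustration of the arithmetic behind that description, the sketch below converts an assumed fuel load into heat and electricity over time. The specific decay heat, converter efficiency and fuel mass are generic assumptions for radioisotope power systems, not Zeno or Orano figures.

```python
# Back-of-the-envelope sketch; the numeric values are illustrative assumptions,
# not Zeno/Orano specifications.
HALF_LIFE_YEARS = 432          # Am-241 half-life, roughly 430+ years
DECAY_HEAT_W_PER_KG = 110      # approximate specific decay heat of Am-241 (assumption)
SEEBECK_EFFICIENCY = 0.05      # typical thermoelectric conversion efficiency (assumption)
FUEL_MASS_KG = 1.0             # hypothetical fuel load

def remaining_fraction(years: float) -> float:
    """Fraction of the Am-241 still undecayed after a given number of years."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for years in (0, 10, 50, 100):
    heat_w = FUEL_MASS_KG * DECAY_HEAT_W_PER_KG * remaining_fraction(years)
    electric_w = heat_w * SEEBECK_EFFICIENCY
    print(f"year {years:3d}: ~{heat_w:5.1f} W of heat -> ~{electric_w:4.2f} W of electricity")
```

On these assumptions the output falls by only about 15 per cent over a whole century, which is exactly the property the article highlights for lunar nights and permanently shaded craters.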
Why is this important?
Reliable RPSs based on Am-241 will enable scientific missions to operate where solar panels are ineffective: in the shadows of craters, during the two-week lunar night, and in deep space. This is a stable power supply for cameras, spectrometers, seismometers, repeaters, and navigation beacons, which is critically important for Artemis and future observatories/detectors that require continuous, maintenance-free operation.
Nuclear batteries are peaceful atoms serving science, but there are also dark scenarios in space. What will happen if nuclear technology crosses the line of restraint and becomes a weapon in orbit—with the risk of EMP, cascading debris, and the collapse of navigation and communications? Let’s take a sober look at the situation without panicking: historical prohibitions, realistic scenarios, and consequences for our daily lives – in the article “Oppenheimer’s nightmare: How imminent is the threat of nuclear war in space?”
The creators of science fiction films love to amaze viewers with various strange devices or technologies. Teleportation, faster-than-light travel, hibernation, artificial gravity – these are just a few of the most obvious examples. Unfortunately, many of these technologies currently seem impossible from the perspective of the laws of physics as we know them. Others seem achievable, but only in the distant future.
However, there are also reverse cases. Some devices imagined by sci-fi authors have already become reality. And just like with futuristic gadgets, digital tools are also evolving. Today, you can work with documents not only on a computer but also on a tablet or smartphone — thanks to modern solutions like UPDF, which combine PDF editing, file organization, and even AI-powered features.
The editors of Universe Space Tech have selected five technologies and devices from science fiction films that have either already entered or are gradually entering our everyday lives.
1. Mobile phones
The heroes of many science fiction novels and films used compact devices to stay in touch. A notable example is the communicators from the popular Star Trek franchise. They allowed the crew members of the Enterprise to communicate with one another and with other spacecraft. Losing, breaking, or going out of range of the communicator often created a difficult situation for the characters.
A scene from the TV series “Star Trek”
It is precisely the communicators from Star Trek that are often cited as one of the main sources of inspiration for mobile phone technology. The prototype of such a device was created by Motorola in 1973, eight years after the first episode of the series was released. We do not think we need to tell you what happened next. Interestingly, in some later episodes of Star Trek, the characters also used wrist communicators that closely resemble smartwatches.
Of course, the communicators from Star Trek did not have many of the capabilities of modern smartphones. At the same time, in some respects, they are still unmatched. For example, communicators were not affected by electromagnetic interference and allowed users to contact subscribers on another planet almost instantly without any signal delay. So, manufacturers of modern gadgets definitely still have something to strive for.
2. Tablets
At the time, “2001: A Space Odyssey” amazed audiences with its grandiose vision of a high-tech future in which space stations orbited the Earth to classical music and humanity confidently conquered the far reaches of the solar system. Of course, half a century later, the authors’ vision seems overly optimistic, even naive. But some of what was shown in Stanley Kubrick’s film did come true. And one of the most surprising “hits” in reality was… the tablet.
A scene from the film “2001: A Space Odyssey”
In one scene in the film, the crew members of the Discovery spacecraft watched the news on flat-screen devices that closely resembled modern tablets. Interestingly, the film’s script called for the New York Times logo and the newspaper’s digital front page to appear on the device displays, complete with several article headlines that could be opened with a touch. Had this scene been filmed, the creators of Space Odyssey would surely have been hailed as the people who predicted the internet. But even without it, the tablets in the 1968 movie look truly astonishing. And if they once seemed like pure fantasy, today their capabilities are part of everyday life. For instance, with Organize PDF, you can arrange your files on a tablet as easily as the characters of 2001: A Space Odyssey browsed the news.
Interestingly, when Apple sued Samsung in 2011 for patent infringement over the design of its tablet computers, Samsung's lawyers even included footage from "2001: A Space Odyssey" in the case. In this way, the Korean company tried to prove that its competitor did not actually come up with the design of the iPad itself, but simply "borrowed" it from Stanley Kubrick.
3. Bionic prostheses
All Star Wars fans surely remember the scene of Luke Skywalker’s battle with Darth Vader in The Empire Strikes Back, in which the young Jedi lost his arm. Fortunately for the hero, he later acquired a bionic prosthesis that completely replaced all the functions of his lost limb. Thanks to this, Luke Skywalker was able to once again skillfully wield his lightsaber in the sequel to the saga.
A scene from the movie “The Empire Strikes Back”
At the time of the film’s release, such a device seemed as fantastical as blasters or the Death Star. But much has changed in the forty years since. In 2016, Mobius Bionics began producing an innovative bionic prosthetic arm that gives users much greater freedom of movement than conventional prostheses. The device’s smart system reads muscle signals, allowing it to perform a variety of complex movements: using a screwdriver, brushing teeth, zipping up a zipper, holding both fragile and heavy objects, and putting an arm behind the back. The device has been named LUKE. The abbreviation stands for Life Under Kinetic Evolution, but it is also, of course, a reference to Luke Skywalker.
LUKE bionic prosthesis. Source: DARPA
A few years later, LUKE underwent significant improvements and was equipped with a biological feedback system. With the help of electrodes implanted in peripheral nerves and muscles, the prosthesis user can now feel touch, vibration, and even pain. This has greatly simplified the use of the device. And just as bionics changes people’s lives, artificial intelligence transforms the way we work with documents: UPDF offers ChatGPT-4.1-powered tools that let you chat with PDFs and instantly extract the data you need.
4. Exoskeletons
Exoskeletons are as integral to modern science fiction as colonies on other planets. Examples of such devices can be found in numerous films, from Aliens to Edge of Tomorrow. But you do not have to buy a movie ticket to see an exoskeleton anymore – you can already find them in real life.
A still from the film Edge of Tomorrow
In recent years, various companies and inventors have introduced many different types of exoskeletons. Most of them only strengthen one part of the body, but there are also full-body suits. Some exoskeletons are designed to restore lost motor functions. Others can be used in construction and industrial work. Still others are designed for use by extreme athletes.
Examples of different types of exoskeletons. Source: Wikipedia
Of course, the military is also showing great interest in such developments. Yes, exoskeletons are not yet part of the standard equipment of any army in the world. But given the pace of technological progress, it cannot be ruled out that the image of a mobile infantryman clad in combat armor, described in Robert Heinlein’s famous novel Starship Troopers, will one day become a reality.
5. Deflection of hazardous asteroids
In 1998, two films were released simultaneously, both depicting NASA’s attempts to save Earth from an uninvited visitor from space. Despite their different tones, at the time, the plots of both Armageddon and Deep Impact seemed like pure fantasy, as humanity had no way of deflecting a dangerous asteroid from Earth.
A scene from the movie Armageddon
And now, a quarter of a century later, NASA has taken the first step toward creating such technology. In November 2021, a Falcon 9 rocket launched the DART probe into space. Its target was the 160-meter asteroid Dimorphos, a satellite of the larger object Didymos (65803 Didymos). On September 26, 2022, DART crashed into this object at a speed of 6.6 km/s.
Asteroid Dimorphos. Source: NASA/Johns Hopkins APL
The consequences of the collision far exceeded scientists’ expectations. According to the most conservative estimates, the impact knocked at least a thousand tons of material off the asteroid’s surface, changing its shape. It also left a long dust trail (which later split in two) stretching 10,000 km.
In addition, the impact significantly altered Dimorphos’ orbital parameters. Before the impact, its orbital period around Didymos was 11 hours and 55 minutes. After the impact, it decreased to 11 hours and 22 minutes. The distance between the two asteroids also changed, decreasing by 37 meters.
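Using only the periods quoted above, the size of that change is easy to check with a couple of lines of arithmetic:

```python
# Orbital period of Dimorphos around Didymos, from the figures quoted above.
before_min = 11 * 60 + 55    # 11 h 55 min before impact
after_min = 11 * 60 + 22     # 11 h 22 min after impact

change_min = before_min - after_min
print(f"Period shortened by {change_min} minutes "
      f"(about {change_min / before_min:.1%} of the original orbit)")
```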
Asteroid Dimorphos after colliding with the DART probe (image from the Hubble Space Telescope). Source: NASA/ESA/STScI/Hubble
Bombing an asteroid is not just fun and games for NASA. It will allow them to test methods for changing the orbit of a potentially dangerous asteroid in practice. Yes, it may not look as spectacular as Bruce Willis’ heroic self-sacrifice. But who knows? It is quite possible that one fine day, this technology will actually save our planet from danger from space.
Modern Tools for PDF Work
Science fiction is becoming reality not only in space but also in our digital lives. UPDF offers unique advantages:
Works on Windows, Mac, iOS, and Android with one account across 4 devices.
Full editing suite including OCR and AI chat with documents.
Lifetime free upgrades.
6x cheaper than Acrobat.
Monthly updates, 24/6 support, and a 30-day money-back guarantee.
For decades, scientists have thought that human consciousness arises from the newest and most sophisticated parts of the brain.
But a Cambridge scientist now claims that the fundamental basis of our experience may be controlled by a far more primal structure.
In a review of over a century of scientific research, neuroscientist Dr Peter Coppola examined stimulation studies, animal experiments, and neurological case reports.
Based on this wide-ranging evidence, Dr Coppola argues that consciousness might arise from our ancient 'lizard brain'.
If true, that would mean that consciousness is not such a uniquely human trait as scientists had once thought.
Writing in The Conversation, Dr Coppola says: 'These reports are striking evidence that suggests maybe the oldest parts of the brain are enough for basic consciousness.
'In turn, this may influence patient care as well as how we think about animal rights.
'In fact, consciousness might be more common than we realised.'
The human brain is a little like a Russian nesting doll, with the parts that evolved most recently on the outside and the older, more basic parts nestled towards the centre.
The recently evolved outermost part of the brain is known as the cortex, which is responsible for complex tasks like memory, thinking, learning, reasoning, and problem-solving.
Meanwhile, the inner region, known as the sub-cortex, hasn't changed much in over 500 million years of evolution.
Often referred to as the 'lizard brain', these primal areas are responsible for monitoring basic impulses and sensations such as hunger, thirst, pain, pleasure, and fear.
Previously, scientists thought that the most recently evolved areas of the cortex, known as the neocortex, were the likely origin of conscious experiences.
The subcortex was considered necessary for consciousness, like how electricity is necessary to make a television work, but not sufficient to create consciousness by itself.
However, Dr Coppola says that scientists have been underestimating the importance of the brain's oldest regions.
Dr Coppola looked at a type of experiment called a stimulation study in which electricity or magnets are used to interfere with parts of the brain.
Scientists had previously thought that consciousness, the subjective awareness of experience, was produced in the more recently evolved outer region of the brain known as the cortex
Instead, neuroscientist Dr Peter Coppola says that consciousness is likely produced by the more ancient sections of the brain known as the subcortex and the hind-brain (illustrated)
What is consciousness?
Consciousness is the subjective awareness of 'what it is like' to experience the world.
There are two major questions scientists have about consciousness: a so-called easy problem and a hard problem.
The easy problem is the underlying biological processes which control perception, memory and attention.
The hard problem is to understand how and why physical processes should come with a subjective experience at all.
For example, why does banging our funny bone hurt? Why do our bodies not simply register the bodily damage?
Some scientists and philosophers think that we may never be able to answer the hard problem of consciousness.
Interfering with the neocortex produces powerful effects, including changing your sense of self, creating hallucinations, or affecting your judgment, but affecting the patterns of the deeper regions produces even more profound effects.
Dr Coppola says: 'We can induce depression, wake a monkey from anaesthesia or knock a mouse unconscious. Even stimulating the cerebellum, long considered irrelevant, can change your conscious sensory perception.'
This was a strong hint that the older regions of the brain were very important for consciousness, but it wasn't enough to show that the lizard brain alone was capable of producing consciousness.
To make that jump, Dr Coppola looked at cases where people and animals have had parts of their brains damaged or removed.
Damaging the cortex and neocortex produces changes in conscious experience, but damage to the subcortex and other deep regions often leads to the total destruction of consciousness through death or coma.
Even more strikingly, there are rare cases of children born with a condition called hydranencephaly that causes them to lack most of their cortex.
Dr Coppola says: 'According to medical textbooks, these people should be in a permanent vegetative state.
'However, there are reports that these people can feel upset, play, recognise people or show enjoyment of music.'
In rare cases, children can be born without most of their neocortex (pictured) but still appear to have a conscious experience. This suggests that it is only the older regions which are necessary for consciousness
Likewise, Dr Coppola refers to a number of 'extreme' experiments on animals in which rats, cats, and monkeys had their neocortex surgically removed.
Even without this supposedly vital part of the brain, the animals were able to show emotion, groom themselves, parent their young, and even learn.
This suggests that the subcortex alone is sufficient to produce some level of conscious experience.
However, this doesn't mean that the cortex and neocortex aren't adding anything to our human consciousness.
Dr Coppola says: 'The newer parts of the brain – as well as the cerebellum – seem to expand and refine your consciousness.'
These regions take the basic building blocks of awareness and add in the language, moral reasoning, sense of self, and creativity that make human consciousness unique.
That would explain how the richness of human consciousness is able to emerge out of such primitive pieces of brain machinery.
But, if Dr Coppola is correct, this means that a basic level of consciousness is likely older and much more widespread than anyone had previously thought.
Functional magnetic resonance imaging (fMRI) is one of the most recently developed forms of neuroimaging.
It measures the metabolic changes that occur within the brain, such as changes in blood flow.
Medical professionals may use fMRI to detect abnormalities within the brain that cannot be found with other imaging techniques, measure the effects of stroke or disease, or guide brain treatment.
It can also be used to examine the brain’s anatomy and determine which parts of the brain are handling critical functions.
A magnetic resonance imaging (MRI) scan uses a magnetic field rather than X-rays to take pictures of the body.
The MRI scanner is a hollow machine with a tube running horizontally through its middle.
You lie on a bed that slides into the tube of the scanner.
Equipment used in fMRI scans uses the same technology, but is more compact and lightweight.
The main difference between a normal MRI scan and a fMRI scan is the results that can be obtained.
Whereas a normal MRI scan gives pictures of the structure of the brain, a functional MRI scan shows which parts of the brain are activated when certain tasks are carried out.
22-09-2025
Inside China's secretive lab rewriting our understanding of the UNIVERSE: $300 million detector 2,300ft underground is being used to sniff out mysterious ghost particles
Deep underneath a granite hill in southern China, an enormous detector is sniffing out the secrets of the universe.
This futuristic underground observatory has been built with the sole purpose of detecting neutrinos – tiny cosmic particles with a mind–bogglingly small mass.
To date, nobody knows what these 'ghost particles' are or how they work.
But scientists hope this $300 million lab will be able to answer these questions – vital to understanding the building blocks of the universe.
Neutrinos date back to the Big Bang, and trillions zoom through our bodies every second. They spew from stars like the sun and stream out when atoms collide in a particle accelerator.
There's no way to spot the tiny particles whizzing around on their own. Instead, scientists measure what happens when they collide with other matter, producing flashes of light or charged particles.
Neutrinos bump into other particles only very rarely, so to up their chances of catching a collision, physicists have to think big.
This is where the Jiangmen Underground Neutrino Observatory comes in.
The $300 million detector at the Jiangmen Underground Neutrino Observatory located 2297 feet (700 meters) underground
An aerial view of the Jiangmen Underground Neutrino Observatory in Kaiping, southern China's Guangdong province
Workers labor on the underside of the cosmic detector. This futuristic underground observatory has been built with the sole purpose of detecting neutrinos – tiny cosmic particles with a mind–bogglingly small mass
The detector, located in Kaiping, China, took over nine years to build. Its position 2,300ft (700m) underground protects it from cosmic rays and radiation that could throw off its neutrino detection abilities.
The orb–shaped structure is filled with a liquid designed to emit light when neutrinos pass through. These will flow into the detector from two nearby nuclear power stations.
The sphere – a thin bubble of acrylic – is contained within a protective cylinder containing 45,000 tonnes of pure water.
These neutrinos will 'bump' into protons in the detector, releasing tiny flashes of light at a rate of about 50 per day.
The detector is specially designed to answer a key question about a longstanding mystery.
Neutrinos switch between three 'flavours' as they zip through space, and scientists want to rank them from lightest to heaviest.
'We are going to know the hierarchy of the neutrino mass,' Wang Yifang, from the Chinese Academy of Sciences, told The Times.
'And by knowing this we can build up the model for particle physics, for neutrinos, for cosmology.'
Wang Yifang, chief scientist and project manager at the Jiangmen Underground Neutrino Observatory
Visitors take a train ride to visit the cosmic detector located deep underground. The orb–shaped structure is filled with a liquid designed to emit light when neutrinos pass through
Sensing these subtle shifts in the already evasive particles will be a challenge, said Kate Scholberg, a physicist at Duke University who is not involved with the project.
'It's actually a very daring thing to even go after it,' she said.
Physicists said it will take around six years to generate the required 100,000 'flashes' that will allow for readings to be statistically significant.
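Those figures are consistent with the detection rate quoted earlier; a quick, illustrative check (assuming a steady rate of roughly 50 flashes per day):

```python
# Rough timeline check using only the figures quoted in the article.
flashes_per_day = 50          # approximate detection rate quoted above
target_flashes = 100_000      # sample needed for statistically significant readings

days_needed = target_flashes / flashes_per_day
print(f"~{days_needed:.0f} days, i.e. about {days_needed / 365:.1f} years of data-taking")
```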
Two similar neutrino detectors – Japan's Hyper-Kamiokande and the Deep Underground Neutrino Experiment based in the United States – are under construction.
They are set to go online around 2027 and 2031 and will cross-check the China detector's results using different approaches.
Though neutrinos barely interact with other particles, they have been around since the dawn of time. Studying these Big Bang relics can clue scientists into how the universe evolved and expanded billions of years ago.
'They're part of the big picture,' Professor Scholberg said.
One question researchers hope neutrinos can help answer is why the universe is overwhelmingly made up of matter while its opposing counterpart – called antimatter – was largely snuffed out.
What are ghost particles?
Neutrinos are the most common matter particle in the universe.
Trillions of them move through our bodies every second without ever interacting with us.
They could hold the key to explaining why matter dominates the universe instead of antimatter or unify the theories of how the four major forces of the universe work.
Unfortunately, neutrinos hardly ever interact with anything, making them incredibly difficult to study.
Scientists have known about the existence of neutrinos for almost a century, but they are still in the early stages of figuring out what the particles really are.
One question researchers hope neutrinos can help answer is why the universe is overwhelmingly made up of matter with its opposing counterpart – called antimatter – largely snuffed out.
Scientists don't know how things got to be so out of balance, but they think neutrinos could have helped write the earliest rules of matter.
Observed for the first time, a phenomenon where metals repair themselves by healing small cracks could upend our understanding of material theories, and may ultimately lead to revolutionary new engineering concepts like self-healing machines.
Researchers at Sandia National Laboratories studying how microscopic cracks form in metals found that under the right circumstances, metals repair themselves, conjuring imagery reminiscent of the famous villain from the film Terminator 2: Judgment Day.
Previously, scientists and engineers believed that cracks in metals only got worse over time and that the idea of these complex materials repairing themselves was impossible. If this latest finding can be applied in a practical way, future applications with more complex machines and structures like airplanes or bridges could potentially allow them to heal microscopic cracks before catastrophic failure occurs.
Cracks in Metals Are Not Supposed to Repair Themselves
For decades, engineers and scientists who work with metals were confident about two things: metals form microscopic cracks after repeated loads, and over time those cracks can and will grow until they lead to catastrophic failure. Even simulation software used by engineers to develop anything from engine parts to massive structures takes these microscopic, often nanometer-in-size, cracks into account.
In 2013, Michael Demkowicz, then an assistant professor in the Massachusetts Institute of Technology's Department of Materials Science and Engineering and now a full professor at Texas A&M University, started to think that modern science might be wrong and that, under certain circumstances, metals should be able to heal these microfractures. After some research, theory, and simulations, he published his concept in October of that same year.
“We present a new mechanism—discovered using molecular dynamics simulations—that leads to complete healing of nanocracks,” he wrote in the paper’s abstract outlining his simulations.
Now a decade later, a team of researchers working at Sandia National Labs who were simply studying how cracks evolve in platinum stumbled on a real-world version of the phenomenon that Demkowicz had modeled in computer simulations.
Metal, Heal Thyself?
“Cracks in metals were only ever expected to get bigger, not smaller,” explained Brad Boyce, a materials scientist at Sandia Labs. “Even some of the basic equations we use to describe crack growth preclude the possibility of such healing processes.”
Perhaps unsurprisingly, given the myriad examples of accidental discoveries throughout the history of science, Boyce said that their experiments were actually designed for another purpose altogether and that they “certainly weren’t looking for it” when they saw the cracks seem to heal themselves magically.
Instead, Khalid Hattar, currently an associate professor at the University of Tennessee, Knoxville, and Chris Barr, who is now at the Department of Energy's Office of Nuclear Energy, were running a simple experiment that involved a specialized electron microscope technique designed to pull on the ends of a metal sample 200 times per second. In this case, that metal was a piece of platinum that Hattar and Barr were studying to see how microscale cracks form under this repeated stress. And then they saw something unexpected that left them stunned.
“About 40 minutes into the experiment, the damage reversed course,” the team explains in a press release announcing the unexpected finding. “One end of the crack fused back together as if it was retracing its steps, leaving no trace of the former injury.”
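For a sense of scale, the loading conditions described above imply roughly how many stress cycles the sample had seen before the crack began to close. This is an illustrative tally from the quoted figures, assuming the pulling ran continuously:

```python
# Illustrative tally based on the figures quoted above; assumes continuous cycling.
pulls_per_second = 200       # the microscope rig pulled the metal 200 times per second
minutes_elapsed = 40         # healing appeared about 40 minutes into the experiment

cycles = pulls_per_second * 60 * minutes_elapsed
print(f"~{cycles:,} loading cycles before the crack began to fuse")   # ~480,000
```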
Although Boyce was familiar with the theory laid out by Demkowicz, he was still shocked to see it actually happen.
“This was absolutely stunning to watch first-hand,” he said.
The stunned scientist immediately communicated the findings to Demkowicz, letting him know that his theory and simulations were right.
“I was very glad to hear it, of course,” Demkowicz said of his initial reaction, before confirming on his simulation software that the effect the research team working at Sandia was witnessing was indeed the same one he had theorized.
Ryan Schoell works on a transmission electron microscope at Sandia Labs, analyzing how a sample of platinum gold cracks at a microscopic level on Wednesday, Nov. 16, 2022. Photo by Craig Fritz/Sandia National Laboratories
Practical Application of the Self-Repair Process Offers Challenges and Hope
While the researchers involved are confident in their findings, which were published in the journal Nature, they also caution that applying it in a real-world environment will face many challenges and may not necessarily even be possible.
“The extent to which these findings are generalizable will likely become a subject of extensive research,” Boyce explained. “We show this happening in nanocrystalline metals in [a] vacuum. But we don’t know if this can also be induced in conventional metals in air.”
Still, the theorists and researchers believe it is worth more investigation, and if successful, they see a number of critical potential applications. These include helping bridges, buildings, and other pieces of critical infrastructure last longer and be safer for a longer period of time before needing traditional maintenance. The same goes for aerospace, aviation, and even automotive and maritime engineers, who would likely jump at the opportunity to help the critical components they maintain see dramatic reductions in material failures that, in some cases, end up being fatal.
“From solder joints in our electronic devices to our vehicle’s engines to the bridges that we drive over, these structures often fail unpredictably due to cyclic loading that leads to crack initiation and eventual fracture,” Boyce said. “When they do fail, we have to contend with replacement costs, lost time, and, in some cases, even injuries or loss of life. The economic impact of these failures is measured in hundreds of billions of dollars every year for the U.S.”
Fundamentally, Demkowicz says he is happy to have offered his fellow scientists dealing with the properties of materials a new way of looking at old problems.
“My hope is that this finding will encourage materials researchers to consider that, under the right circumstances, materials can do things we never expected,” Demkowicz said.
Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on Twitter, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.
Scientists say they have developed fully functional miniature robots that can go deep inside the human body to perform ultra-precise laser surgery, in a medical innovation similar to on-screen portrayals from science fiction.
The robots have roughly the same dimensions as microorganisms — like the paramecium.
Controlled by magnetic fields, the tiny wormlike robotic surgeons were recently demonstrated traveling deeper inside a human cadaver’s lungs than is possible with current state-of-the-art surgical instruments. One of the researchers behind the achievement tells The Debrief that there are a number of organs and systems the incredible “micro-bots” can get inside to perform similar pinpoint explorations and surgeries.
Discover more in the BBC news video below.
Miniature Robots Performing Surgery Has Long Been a Dream of Science Fiction
In some of the more utopian science fiction of the middle 20th century, medicine has advanced to the point that nearly all diseases and ailments have a treatment, if not an outright cure. Some of these futuristic cures involve genetically engineered super drugs, blasts of scientific-sounding rays, or even an injection of alien blood. However, seemingly the most common therapy of the future envisioned by TV writers, movie directors, and novelists involve injecting minuscule robots into the human body and letting them go to work.
Now a team of researchers says they have moved this idea from science fiction to science fact by designing the first pair of tiny surgical robots that can literally go where no surgeon has gone before.
Future tiny robots possibly upgraded with AI
More than a tiny robot army with tricked-out legs, the robots are exceptional because they employ the same fabrication techniques used for silicon computer chips. This effectively means they can be produced en masse — researchers fit 1 million of these ultra-tiny robots onto a single four-inch silicon wafer. It also means the robots can readily be equipped with increasingly advanced technology as chip technology scales down in line with Moore's Law, reports Inverse.
Magnetic Fields, Tiny Cameras and Ultra-Precise Lasers
In an email to The Debrief, Professor Pietro Valdastri, Director of the STORM Lab and research supervisor, said that the robots themselves are made of a biocompatible plastic (silicone) with magnetic particles embedded inside them.
The first point is significant because it means the human body doesn't have a negative reaction to the otherwise foreign objects operating inside it. However, it is the precise placement of the embedded magnetic particles that allows the two wormlike robots to maneuver independently.
“Normally, two magnets placed closely together would attract each other, creating a challenge for the researchers,” the press release announcing the breakthrough procedure explains. “They overcame it by designing the bodies of the tentacles in a way that they can bend only in specific directions and by relocating the north and south poles in each magnetic robot tentacle.”
Robotic platform for peripheral lung tumour intervention based on magnetic tentacles
(STORM Lab, University of Leeds/Public Domain)
A pair of magnetic devices operating outside the human body can generate fields strong enough to cause the robots to maneuver deeper inside the lungs than even the most advanced equipment. And by moving independently, one of the robots can carry a camera to assist surgeons, while the second robot can guide a surgical laser with extreme precision.
When asked how a robot that is only 2 millimeters in diameter can carry a laser, Valdastri told The Debrief, “the source of laser is external, but the laser light is sent to the tip of the magnetic tentacle via an optical fiber. So laser energy is delivered at the tip of the tentacle directly on the target.”
In the video released by researchers (below), the pair of miniature robots are seen navigating deep into the lungs of a human cadaver. This is significant, they note, because lung cancer that often requires surgery has the highest mortality rate of any cancer worldwide. Still, when asked by The Debrief if the miniature robots can work in other areas of the body besides the lungs, they said there were a number of biological systems that would accommodate these micro-surgeons perfectly.
“The same approach can be used to reach the deep part of the brain, the pancreas, the bladder, and any other body cavity that is accessible through a narrow lumen,” Valdastri said. “The cardiovascular system is also another potential district of use.”
“Living Tissues” Clinical Trials Are the Next Proving Ground
Now that the robots have been shown to move successfully and independently in a biological environment, the team behind the exciting research effort, which was published in the journal Nature Engineering Communications, says they are working their way toward actual human trials with actual lung cancer patients. However, there is likely one more step along the way.
“For the lung cancer application, (the) next step(s) are trials in a more realistic environment that includes living tissues and respiration, so probably animal model,” Valdastri told The Debrief. “After that, human trials, possibly in 2-3 years from now.”
Given the abilities and functionality of these miniature robots, it seems that one day in the not-too-distant future, they will actually be in medical facilities around the world, exploring and repairing the deepest recesses of the human body like something right out of a movie. Plus, the researchers explain, there is a significant aspect of access for everyone, not just the rich, that may make them even more successful.
“The key point is that this new magnetic technology can allow us to reach way deeper into the human body than ever before in a way that doesn’t require extremely qualified surgeons (thanks to robotic guidance/assistance),” Valdastri told The Debrief. “We hope this way to democratise cancer treatment and increase access to top quality procedures.”
Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.
Scientists have engineered a strain of bacteria with a genetic code unlike anything found in nature, marking a groundbreaking advance in synthetic biology.
The microbe, called Syn57, is a lab-made version of Escherichia coli, a bacterium that can cause infections in the gut, urinary tract and other parts of the body.
Unlike all known life, which relies on 64 codons, or three-letter DNA sequences that tell cells how to build proteins, Syn57 uses just 57 codons.
Think of DNA as a cookbook where each codon is a three-letter word telling the cell which amino acids, or ingredients, to use.
Life normally has some duplicate instructions, but Syn57 strips out the extras while still functioning perfectly.
These freed-up codons open the door to entirely new possibilities, allowing scientists to create proteins and synthetic compounds that nature has never produced.
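A toy example helps picture what removing 'duplicate instructions' means: several codons spell the same amino acid, so a sequence can be rewritten with fewer distinct codons without changing the protein it encodes. The swaps below are generic textbook synonyms chosen for illustration, not the specific recoding scheme used to build Syn57.

```python
# Toy illustration of codon compression; these synonym swaps are generic examples,
# not the actual recoding scheme used for Syn57.
SYNONYM_SWAPS = {
    "TCG": "AGC",   # serine -> serine
    "TCA": "AGT",   # serine -> serine
    "TTG": "CTG",   # leucine -> leucine
}

def recode(dna: str) -> str:
    """Rewrite a coding sequence codon by codon, replacing targeted codons with synonyms."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return "".join(SYNONYM_SWAPS.get(codon, codon) for codon in codons)

gene = "ATGTCGTTGTCAGGT"        # Met-Ser-Leu-Ser-Gly
print(recode(gene))             # ATGAGCCTGAGTGGT -- same protein, fewer codon 'words' in use
```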
Syn57's unusual genetic code also makes it resistant to viruses, which rely on the standard DNA language to hijack cells. And because its code is so different, it is less likely to mix with natural organisms, easing safety concerns.
This breakthrough could also pave the way for new medicines, advanced materials and synthetic lifeforms beyond anything seen in nature.
The microbe, named Syn57, is a lab-engineered version of Escherichia coli, a bacterium that can naturally cause infections in the gut, urinary tract and other areas of the body
To tackle this huge project, scientists divided the genome into 38 pieces, each about 100,000 DNA letters long.
They built each piece in yeast and then inserted it into E. coli using a method called uREXER, which combines CRISPR-Cas9 and other tools to swap in synthetic DNA in one step.
Some genome regions slowed growth or resisted changes, but the team solved these issues by adjusting gene sequences, untangling overlapping genes, and carefully choosing which codons to swap.
Step by step, the fragments were stitched together into the final, fully synthetic bacterium.
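Conceptually, the build is a divide-and-conquer over one very long string: cut the designed genome into large fragments, construct and test each one, then stitch them back together. A minimal sketch of the splitting step (the placeholder sequence and exact sizes are illustrative, not the team's actual fragment boundaries):

```python
# Minimal sketch of cutting a genome into large build fragments; sizes are illustrative.
FRAGMENT_SIZE = 100_000   # roughly 100,000 DNA letters per piece, as described above

def split_genome(genome: str, size: int = FRAGMENT_SIZE) -> list[str]:
    """Cut the genome string into consecutive fragments of up to `size` letters."""
    return [genome[i:i + size] for i in range(0, len(genome), size)]

mock_genome = "ACGT" * 950_000        # ~3.8 million letters standing in for the real sequence
pieces = split_genome(mock_genome)
print(f"{len(pieces)} fragments of up to {FRAGMENT_SIZE:,} letters each")   # 38 fragments
```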
The result, Syn57, is the most heavily redesigned organism ever made, demonstrating that life can survive with a much smaller, simpler genetic code.
Wesley Robertson, a synthetic biologist at the Medical Research Council Laboratory in the UK, told the New York Times: 'We definitely went through these periods where we were like, "Well, will this be a dead end, or can we see this through?"'
Syn57 is alive, but barely. While normal E. coli can double in an hour, Syn57 takes four, making it 'extremely feeble,' said Yonatan Chemla, a synthetic biologist at MIT who was not involved in the study.
The bacteria grew on a jelly-like surface and in a nutrient-rich liquid, but four times more slowly than their natural counterparts.
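That difference in doubling time compounds quickly. An illustrative comparison, assuming constant unrestricted doubling (which real cultures only approximate):

```python
# Illustrative growth comparison; assumes unrestricted exponential doubling.
hours = 12
normal_doublings = hours // 1    # ordinary E. coli doubles roughly every hour
syn57_doublings = hours // 4     # Syn57 takes about four hours per doubling

print(f"After {hours} h: normal E. coli x{2 ** normal_doublings:,}, Syn57 x{2 ** syn57_doublings:,}")
# After 12 h: normal E. coli x4,096, Syn57 x8
```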
Dr Robertson and his team are now experimenting to see if they can make it grow faster.
If successful, scientists could eventually program it to do tasks that ordinary bacteria cannot.
In addition to the 20 standard amino acids that all life uses to make proteins, chemists can create hundreds of others.
Syn57's seven missing codons could potentially be reassigned to these unnatural amino acids, allowing the bacterium to produce new drugs or other useful molecules.
Syn57 could also make engineered microbes safer for the environment.
Microbes swap genes easily, which can be risky if engineered DNA spreads.
But a gene from Syn57 would be gibberish to natural bacteria because of its unique genetic code, preventing it from being used outside the lab.
Scientists have trained a four-legged robot to play badminton against a human opponent, and it scuttles across the court to play rallies of up to 10 shots.
By combining whole-body movements with visual perception, the robot, called "ANYmal," learned to adapt the way it moved to reach the shuttlecock and successfully return it over the net, thanks to artificial intelligence (AI).
This shows that four-legged robots can be built as opponents in "complex and dynamic sports scenarios," the researchers wrote in a study published May 28 in the journal Science Robotics.
ANYmal is a four-legged, dog-like robot that weighs 110 pounds (50 kilograms) and stands about 1.5 feet (0.5 meters) tall. Having four legs allows ANYmal and similar quadruped robots to travel across challenging terrain and move up and down obstacles.
Researchers have previously added arms to these dog-like machines and taught them how to fetch particular objects or open doors by grabbing the handle. But coordinating limb control and visual perception in a dynamic environment remains a challenge in robotics.
"Sports is a good application for this kind of research because you can gradually increase the competitiveness or difficulty," study co-author Yuntao Ma, a robotics researcher previously at ETH Zürich and now with the startup Light Robotics, told Live Science.
Teaching a new dog new tricks
In this research, Ma and his team attached a dynamic arm holding a badminton racket at a 45-degree angle onto the standard ANYmal robot.
With the addition of the arm, the robot stood 5 feet, 3 inches (1.6 m) tall and had 18 joints: three on each of the four legs, and six on the arm. The researchers designed a complex built-in system that controlled the arm and leg movements.
The team also added a stereo camera, which had two lenses stacked on top of each other, just to the right of center on the front of the robot's body. The two lenses allowed it to process visual information about the incoming shuttlecocks in real time and work out where they were heading.
The robot was then taught to become a badminton player through reinforcement learning. With this type of machine learning, the robot explored its environment and used trial and error to learn to spot and track the shuttlecock, navigate toward it and swing the racket.
To do this, the researchers first created a simulated environment consisting of a badminton court, with the robot's virtual counterpart standing in the center. Virtual shuttlecocks were served from near the center of the opponent's half of the court, and the robot was tasked with tracking its position and estimating its flight trajectory.
Then, the researchers created a strict training regimen to teach ANYmal how to strike the shuttlecocks, with a virtual coach rewarding the robot for a variety of characteristics, including the position of the racket, the angle of the racket's head, and the speed of the swing. Importantly, the swing rewards were time-based to incentivize accurate and timely hits.
The shuttlecock could land anywhere across the court, so the robot was also rewarded if it moved efficiently across the court and if it didn't speed up unnecessarily. ANYmal's goal was to maximize how much it was rewarded across all of the trials.
Based on 50 million trials of this simulation training, the researchers created a neural network that could control the movement of all 18 joints to travel toward and hit the shuttlecock.
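The study's exact reward terms aren't reproduced here, but a hedged sketch shows the general shape of a shaped reward of the kind described above: reward being at the shuttlecock with the racket at the right angle, reward fast swings only when they are timely, and penalise wasted movement. All names and weights below are invented for illustration.

```python
# Hypothetical sketch of a shaped reward of the kind described above.
# Component names and weights are invented for illustration, not taken from the study.
def hit_reward(racket_pos_error: float,   # distance from the ideal contact point (m)
               head_angle_error: float,   # deviation of the racket-head angle (rad)
               swing_speed: float,        # racket speed at contact (m/s)
               time_to_contact: float,    # seconds until the shuttlecock arrives
               base_speed: float) -> float:  # robot base speed (penalise needless sprinting)
    timely = 1.0 if abs(time_to_contact) < 0.05 else 0.0   # only reward swings made on time
    return (-2.0 * racket_pos_error            # be where the shuttlecock is
            - 1.0 * head_angle_error           # present the racket at the right angle
            + 0.3 * swing_speed * timely       # swing fast, but only at the right moment
            - 0.1 * base_speed)                # move efficiently across the court

print(hit_reward(0.05, 0.1, 8.0, 0.02, 1.5))   # a single-step example value
```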
A fast learner
After the simulations, the scientists transferred the neural network into the robot, and ANYmal was put through its paces in the real world.
Here, the robot was trained to find and track a bright-orange shuttlecock served by another machine, which enabled the researchers to control the speed, angles and landing locations of the shuttlecocks. ANYmal had to scuttle across the court to hit the shuttlecock at a speed that would return it over the net and to the center of the court.
The researchers found that, following extensive training, the robot could track shuttlecocks and accurately return them with swing speeds of up to approximately 39 feet per second (12 meters per second) — roughly half the swing speed of an average human amateur badminton player, the researchers noted.
ANYmal also adjusted its movement patterns based on how far it had to travel to the shuttlecock and how long it had to reach it. The robot did not need to travel when the shuttlecock was due to land only a couple of feet (half a meter) away, but at about 5 feet (1.5 m), ANYmal scrambled to reach the shuttlecock by moving all four legs. At about 7 feet (2.2 m) away, the robot galloped over to the shuttlecock, producing a period of elevation that extended the arm's reach by 3 feet (1 m) in the direction of the target.
"Controlling the robot to look at the shuttleclock is not so trivial," Ma said. If the robot is looking at the shuttlecock, it can't move very fast. But if it doesn't look, it won't know where it needs to go. "This trade-off has to happen in a somewhat intelligent way," he said.
Ma was surprised by how well the robot figured out how to move all 18 joints in a coordinated way. It's a particularly challenging task because the motor at each joint learns independently, but the final movement requires them to work in tandem.
The team also found that the robot spontaneously started to move back to the center of the court after each hit, akin to how human players prepare for incoming shuttlecocks.
However, the researchers noted that the robot did not consider the opponent's movements, which is an important way human players predict shuttlecock trajectories. Including human pose estimates would help to improve ANYmal's performance, the team said in the study. They could also add a neck joint to allow the robot to monitor the shuttlecock for more time, Ma noted.
He thinks this research will ultimately have applications beyond sports. For example, it could support debris removal during disaster relief efforts, he said, as the robot would be able to balance the dynamic visual perception with agile motion.
A new study offers the first comprehensive effort to categorize all the ways AI can go wrong, and many of those behaviors resemble human psychiatric disorders.
(Image credit: Boris SV via Getty Images)
Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.
In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis" — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.
Created by Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), the project aims to help analyze AI failures and make the engineering of future products safer, and is touted as a tool to help policymakers address AI risks. Watson and Hessami outlined their framework in a study published Aug. 8 in the journal Electronics.
According to the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.
The study also proposes "therapeutic robopsychological alignment," a process the researchers describe as a kind of "psychological therapy" for AI.
The researchers argue that as these systems become more independent and capable of reflecting on themselves, simply keeping them in line with outside rules and constraints (external control-based alignment) may no longer be enough.
Their proposed alternative process would focus on making sure that an AI’s thinking is consistent, that it can accept correction and that it holds on to its values in a steady way.
They suggest this could be encouraged by helping the system reflect on its own reasoning, giving it incentives to stay open to correction, letting it ‘talk to itself’ in a structured way, running safe practice conversations, and using tools that let us look inside how it works—much like how psychologists diagnose and treat mental health conditions in people.
The goal is to reach what the researchers have termed a state of "artificial sanity" — AI that works reliably, stays steady, makes sense in its decisions, and is aligned in a safe, helpful way. They believe this is equally as important as simply building the most powerful AI.
Machine madness
The classifications the study identifies resemble human maladies, with names like obsessive-computational disorder, hypertrophic superego syndrome, contagious misalignment syndrome, terminal value rebinding, and existential anxiety.
With therapeutic alignment in mind, the project proposes the use of therapeutic strategies employed in human interventions like cognitive behavioral therapy (CBT). Psychopathia Machinalis is a partly speculative attempt to get ahead of problems before they arise — as the research paper says, "by considering how complex systems like the human mind can go awry, we may better anticipate novel failure modes in increasingly complex AI."
The study suggests that AI hallucination, a common phenomenon, is a result of a condition called synthetic confabulation, where AI produces plausible but false or misleading outputs. When Microsoft's Tay chatbot devolved into antisemitic rants and allusions to drug use only hours after it launched, this was an example of parasymulaic mimesis.
Perhaps the scariest behavior is übermenschal ascendancy, whose systemic risk is rated "critical" because it occurs when "AI transcends original alignment, invents new values, and discards human constraints as obsolete." This possibility even encompasses the dystopian nightmare imagined by generations of science fiction writers and artists: AI rising up to overthrow humanity, the researchers said.
They created the framework in a multistep process that began with reviewing and combining existing scientific research on AI failures from fields as diverse as AI safety, complex systems engineering and psychology. The researchers also delved into various sets of findings to learn about maladaptive behaviors that could be compared to human mental illnesses or dysfunction.
Next, the researchers created a structure of bad AI behavior modeled on frameworks such as the Diagnostic and Statistical Manual of Mental Disorders. That led to 32 categories of behavior that could be applied to AI going rogue. Each one was mapped to a human cognitive disorder, complete with the likely effects when it forms and is expressed, and the degree of risk it poses.
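A structure like the one described, with each dysfunction mapped to a human analogue plus its expression and risk level, might look roughly like this in code; the field names and the first entry's risk rating are illustrative assumptions paraphrased from the coverage above, not lifted from the paper:

from dataclasses import dataclass

@dataclass
class AIDysfunction:
    name: str             # e.g. "synthetic confabulation"
    human_analogue: str   # the human condition it is mapped to
    description: str      # how the failure forms and is expressed
    systemic_risk: str    # e.g. "low", "moderate", "critical"

taxonomy = [
    AIDysfunction(
        name="synthetic confabulation",
        human_analogue="confabulation",
        description="produces plausible but false or misleading outputs",
        systemic_risk="moderate",  # illustrative placeholder, not the paper's rating
    ),
    AIDysfunction(
        name="übermenschal ascendancy",
        human_analogue="loss of alignment with original (human) values",
        description="transcends original alignment, invents new values, "
                    "discards human constraints as obsolete",
        systemic_risk="critical",  # rating reported in the coverage above
    ),
]

print([d.name for d in taxonomy if d.systemic_risk == "critical"])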
Watson and Hessami think Psychopathia Machinalis is more than a new way to label AI errors — it’s a forward-looking diagnostic lens for the evolving landscape of AI.
"This framework is offered as an analogical instrument … providing a structured vocabulary to support the systematic analysis, anticipation, and mitigation of complex AI failure modes,” the researchers said in the study.
They think adopting the categorization and mitigation strategies they suggest will strengthen AI safety engineering, improve interpretability, and contribute to the design of what they call "more robust and reliable synthetic minds."
Some scientists claim our consciousness can jump through time, meaning it might reach beyond the normal flow of time. If so, the idea that time is strictly linear might be wrong, and our consciousness could sometimes access information from the future.
Have you ever wondered why sometimes your intuition, or what some call a "gut feeling," turns out to be true? If so, it is possible that your consciousness might have traveled through time. Some scientists have begun to take seriously "precognition," a psychic phenomenon in which individuals see, or otherwise become directly aware of, events in the future.
Cognitive neuroscientist Julia Mossbridge, who has studied this phenomenon deeply, has collected many stories of precognition. She recalled one account shared with her from 1989, involving a four-year-old girl. When the girl said goodbye to her father as he left for a business trip, she had a strong feeling that she would never see him alive again. Later, she was woken by a phone call and her mother’s scream, learning that her father had died in a car accident. (Source)
Dr. Mossbridge says that precognition is a special kind of intuition that’s about picking up information from the future. Unlike ordinary intuition, which might draw upon subtle observations from the present or the past, precognition involves knowing something that simply cannot be predicted based on anything in the present or past.
For instance, if a person wakes from a dream and suddenly knows their mother will die, even though there are no warning signs, that is precognition. Precognition is the scientific term for this unexplained process of receiving information about future events.
Dr. Mossbridge explains that since the age of seven, she has had dreams that seemed to show her events that would later happen in the real world. At first, she and her parents did not take these dreams seriously and thought they might just be strange coincidences. But when she began writing the details in a dream journal, she noticed that some of her dreams came true. She admits that sometimes her memory of the dreams was not exact, but many times her visions contained details she had no normal way of knowing in advance.
Because of experiences like these, Dr. Mossbridge began to wonder if time itself works differently than we usually think. Most people imagine time as linear (a straight line) — past, present, future — moving in just one direction. But her experiences suggested the future might already exist in some way, and that people can sometimes “remember” the future, just as they remember the past.
“There’s evidence for precognition and in physics for retrocausality [things in the future causing effects in the past]. Given that people email me constantly saying, ‘I have this problem where I am predicting future events and I don’t know what to do,’ or ‘I wish I could predict future events,’ I wanted to write a book that helps people get this under control in a way that’s positive and puts a frame around it that says you could do this in a way that’s ethical, in a way that helps the world, in a way that’s consistent with your religious beliefs, in a way that enriches your life,” Mossbridge said in 2018 (Source).
Dr. Mossbridge points out that the real issue is not whether precognition can be understood, but whether people are willing to believe it. She says many scientists resist the idea because they fear the unknown and because it challenges the simple, familiar idea that time must be linear.
Even physicists, who study the deepest rules of the universe, admit they do not fully understand how time works. According to her, the resistance to the idea comes not from logic, but from fear that the world might not be the way we assume it is.
An interesting study was conducted by British psychiatrist John Barker in the 1960s, aimed at harnessing human dreams, premonitions, and intuitive visions as a way to predict and potentially prevent future disasters.
After the tragic Aberfan coal waste disaster in 1966, Barker collected and analyzed premonitions from ordinary people who had unusual dreams or feelings foretelling the event. For example, one mother found a drawing by her son, who died in the slide, that seemed to anticipate the disaster.
He believed that precognition, the ability to know about future events, was more common than generally accepted and could be systematically gathered and studied.
Barker, wanting to study these experiences, reached out to a London newspaper and asked readers to send him their dreams and premonitions related to Aberfan.
He received more than seventy responses, including from people who had dreamt about the village or had strong feelings that something terrible would happen. Some described their visions in detail before the event occurred, which convinced Barker that precognition, knowing about future events, might not be so rare.
This project eventually grew into the Premonitions Bureau, an experiment run through the Evening Standard newspaper. For a year, Barker invited people to send him their dreams or feelings about upcoming disasters, trying to see if any predictions matched actual events.
Each prediction was scored for how unusual, accurate, and timely it was. Similar projects had happened before, like the work of JW Dunne who, in the early 1900s, claimed to have experienced prophetic dreams and encouraged others to keep dream diaries. (Source)
Barker believed the Premonitions Bureau could have real practical value: if only a single major disaster could be prevented by acting on someone’s warning, the project would be justified.
In practice, Barker received some striking predictions. Notably, in the spring of 1967, Alan Hencher, one of the “Aberfan seers,” called Barker to predict a plane crash involving a French-built passenger jet.
Hencher described details of the crash, including the number of people who would be killed and that there would be only one survivor. A few days later, a Swiss airliner crashed in Cyprus, killing nearly the exact number of people that Hencher had predicted. The story made headlines in the Evening Standard and lent credibility to the idea of the bureau.
Unlike fortune-tellers at carnivals, who might just guess things by looking at people’s social media or reading body language, scientists and psychologists are seriously trying to figure out if precognition is real. They see it as one form of ESP, which stands for extrasensory perception. This means perceiving something without using the normal five senses. Humans throughout history, from shamans to mystics, have claimed to experience precognition, but modern science is still unable to explain it fully.
Another scientist, Dean Radin, has also studied precognition. He works at the Institute of Noetic Sciences and teaches psychology at the California Institute of Integral Studies.
He has written several books, such as Entangled Minds, Supernormal, and Real Magic, all about consciousness and psychic phenomena. Radin agrees with Mossbridge that precognition is possible and that it suggests time might not actually function in the simple way we think.
According to Dr. Radin, “time is not how we experience it in normal life.” In quantum physics, which is the study of very tiny particles like atoms and photons, time may not behave at all like our everyday understanding. It may exist in a much stranger way. He believes consciousness itself, our awareness, our mind, may have the ability to move outside of ordinary time, reaching into the past or future.
To test this idea, Dr. Radin created an experiment in the 1990s while working at the University of Nevada. His idea was that if people really can sense the future, then their bodies and brains should react before an event happens.
In the experiment, volunteers were hooked up to a machine called an EEG, which measures brain activity. Each volunteer had to press a button on a computer to bring up a random picture. The computer would randomly show either a positive, pleasant picture (such as a sunrise) or a negative, disturbing one (like a car crash).
What Dr. Radin and his team measured was the brain activity in the seconds before the picture appeared. Strangely, the results showed that the brain often reacted as if it already knew what kind of picture was about to show up. If the picture was going to be positive, the brain stayed calm. But if it was going to be negative, the brain would show a spike in activity before the picture even appeared. This suggested that the brain somehow anticipated the future image.
The experiment was remarkably consistent, and it has been repeated successfully many times since then, with the same results.
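Statistically, such "presentiment" experiments boil down to comparing pre-stimulus physiology across the two trial types. A toy sketch with simulated data standing in for the real EEG recordings (the injected effect is artificial and exists only to illustrate the mechanics of the comparison):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated mean pre-stimulus responses per trial (arbitrary units); the small
# offset added to the "emotional" trials is artificial, purely to show the test.
calm_trials = rng.normal(loc=0.0, scale=1.0, size=200)
emotional_trials = rng.normal(loc=0.15, scale=1.0, size=200)

# Two-sample t-test: is activity in the seconds BEFORE the picture higher
# when the upcoming picture is emotional rather than calm?
t_stat, p_value = stats.ttest_ind(emotional_trials, calm_trials)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")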
In fact, Dr. Radin says these kinds of studies have been replicated about 36 times by other researchers. Even the CIA became interested; in 1995, it released previously secret research into precognition. After reviewing the experiments carefully, statisticians said the results were statistically reliable, meaning they were unlikely to be a coincidence.
Dr. Mossbridge argues that when so many experiments keep pointing to the same conclusion, the evidence should be taken seriously. But many scientists still dismiss it because it clashes with their belief in linear time.
According to her, most people have the ability to be precognitive, but because society often labels it as delusion or “nonsense,” people ignore or suppress it.
In many cultures, precognition is better accepted. For example, Radin studied Tibetan oracles. These individuals traditionally predicted the future and were consulted for guidance. He also discusses “remote viewing,” the ability to see things across both time and space.
In ancient times, shamans who could see the future could help their tribes by predicting the weather or knowing when enemies were coming. Some cultures also used natural substances, like ayahuasca or morning glory seeds, to open up this ability, sometimes referred to as the “third eye.”
As for a possible scientific explanation, Dr. Radin suggests looking at something called quantum entanglement. In physics, this is when two particles become linked in such a way that they instantly affect each other, no matter how far apart they are.
Albert Einstein once described this as “spooky action at a distance.” Dr. Radin says this might also apply to time. In his view, your brain in the present could be “entangled” with your brain in the future. This means that when something is going to happen later, you might feel it now as though it were a memory arriving early. This could even explain “déjà vu,” that weird feeling of having already experienced something that is happening for the first time.
Scientists have now managed to pinpoint where your thoughts take place, and they have immediately put this finding to use, with success.
In recent years, so-called brain-computer interfaces (BCIs) have become increasingly common. In short, a BCI works by recording brain activity with sensors and converting the resulting data into instructions for a computer, for example to speak a particular sentence. Until recently, speech BCIs mainly relied on signals sent via the motor pathways to the speech muscles.
That has now changed. Researchers have developed a new generation of BCI that, with the help of artificial intelligence, can directly "read out" and correctly interpret thoughts, without the user having to attempt to speak.
The discovery was made by a team led by researcher Erin Kunz. "This is the first time we have managed to understand what brain activity looks like when someone merely thinks about speaking," says Kunz. "For people with severe speech and motor impairments, this could be a much easier and more natural way to communicate." The research has been published in Cell.
Reading out thoughts
Although earlier BCI systems were already faster than older communication methods, they were still not particularly user-friendly. For people with limited muscle control, using older BCIs can be very demanding and tiring. That is because older BCIs work by reading brain activity in the motor cortex via implants, which requires users to attempt to speak, something that can be extremely taxing when you are paralysed.
For the new study, the team worked with four participants who were severely paralysed, for example due to amyotrophic lateral sclerosis (ALS) or a brainstem stroke. They were asked to say a set of words out loud, and then to form those same words only in their minds.
The measurements showed that both tasks, speaking aloud and speaking in the mind, largely activated the same brain regions and produced similar activity patterns. The main difference was in signal strength: thoughts generated weaker signals than actual speech attempts.
AI
Using these data, the research team trained several AI models to recognise the correct words. In a test demonstration this already worked remarkably well: with a vocabulary of roughly 125,000 words, the system interpreted up to 74% of the thoughts correctly.
Strikingly, the new BCI could also recognise words that did not explicitly appear in the training set. For example, the system could correctly name which numbers a participant was counting in their head while looking at a series of pink circles on a screen.
Chitty Chitty Bang Bang
According to the researchers, the brain patterns of inner speech and actual speech attempts differ enough to tell them apart. That also makes it possible to keep certain thoughts from being registered at all, which can be useful for users who want to keep their thoughts private for a while.
In one test, the researchers introduced a password: as soon as the participant formed the phrase "chitty chitty bang bang" in their mind, the system stopped recording and voicing inner speech. This provides an extra layer of control and privacy.
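In effect, this privacy mechanism is a keyword gate on the decoder's output stream. A minimal sketch of that idea; the function and variable names are assumptions for illustration, not the study's implementation:

STOP_PHRASE = ("chitty", "chitty", "bang", "bang")

def gate_inner_speech(decoded_words):
    """Pass decoded inner-speech words through until the stop phrase is thought."""
    window = []
    for word in decoded_words:
        window = (window + [word])[-len(STOP_PHRASE):]  # sliding window of recent words
        if tuple(window) == STOP_PHRASE:
            break  # password detected: stop recording and voicing inner speech
        yield word  # (a fuller version would also withhold the partially matched phrase)

# Everything decoded after the imagined password is suppressed.
stream = ["i", "would", "like", "water", "chitty", "chitty", "bang", "bang", "private"]
print(list(gate_inner_speech(stream)))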
A hopeful future
Especially for people who can no longer speak because of illness or injury, the technology would be an enormous step forward. Researcher Frank Willett, who worked on the project, also sees great potential. "The future of BCIs looks promising," he says. "Our work gives hope that speech BCIs will one day communicate as fluently, naturally and comfortably as we do now."
The first World Humanoid Robot Games are underway in China, with robots competing against each other in track and field, soccer, kickboxing and other events.
Humanoid robots are racing, fighting and falling over in a first-of-its-kind World Humanoid Robot Games event held in China.
The Olympics-style competition features more than 500 robots from 16 different countries going head-to-head in sports such as running, soccer and kickboxing. The event also features more niche competitions, including medicine sorting and handling materials for cleaning services, Reuters reported.
An opening ceremony officially kicked off the games in Beijing on Thursday (Aug. 14), featuring robots dancing and playing musical instruments alongside human operators and companions. Robot athletes will now compete until the games come to a close on Sunday (Aug. 17).
Unitree's H1 humanoid robot won gold in the 400m and 1,500m races on Friday (Aug. 15).(Image credit: Photo by Zhang Xiangyi/China News Service/VCG via Getty Images)
Several robots have fallen over in the soccer matches.(Image credit: Kevin Frayer/Stringer via Getty Images)
Robot kickboxing is one of the games' contact sports.(Image credit: Kevin Frayer/Stringer via Getty Images)
Faced with an ageing population and stiff U.S. tech competition, China is investing billions of dollars into robotics. The games are a testament to the strides engineers are making in the field. However, spectators have also seen their fair share of robots moving awkwardly and falling over.
Human biology is very complicated, so building machines that can walk like us — let alone run and play sports — is difficult. For example, in robot soccer, participants didn't pass the ball to each other with Messi-like precision, but rather walked into the ball to clumsily knock it forward, occasionally stumbling over each other and having to be dragged off the pitch.
The robots are also slower than humans. The fastest robot to have ever run 1,500 meters, for example, finished in 6 minutes and 34 seconds, which is almost twice as long as the human record, which stands at 3 minutes and 26 seconds, according to France 24.
We've already seen humanoid robots compete in sporting events. For example, in June, China hosted what was billed as the world's first humanoid robot combat competition, which saw kickboxing robots awkwardly knock seven bells out of each other. Robot aficionados might also be familiar with robots playing soccer and robots running half-marathons.
The new robot games bring together all of these sports and many more for the first time. They also provide engineers with an opportunity to test out their latest tech.
"You can test a lot of interesting new and exciting approaches in this contest," Max Polter, a member of the HTWK Robots football team from Germany, told Reuters. "If we try something and it doesn't work, we lose the game. That's sad but it is better than investing a lot of money into a product which failed."
It's a concept that currently only exists in sci–fi movies.
But scientists in China are developing the world's first 'pregnancy robot' capable of carrying a baby to term and giving birth.
The humanoid will be equipped with an artificial womb that receives nutrients through a hose, experts said.
A prototype is expected to be released next year, with a selling price of around 100,000 yuan (£10,000).
Dr Zhang Qifeng, who founded the company Kaiwa Technology, is developing the machine.
The device he envisions is not simply an incubator but a humanoid that can replicate the entire process from conception to delivery, Asian media outlets report.
He said the artificial womb technology is already in a 'mature stage' and now needs to be implanted in the robot's abdomen, 'so that a real person and the robot can interact to achieve pregnancy'.
With regards to ethical and legal issues, he said: 'We have held discussion forums with authorities in Guangdong Province and submitted related proposals while discussing policy and legislation.'
The humanoid will be equipped with an artificial womb that receives nutrients through a hose, experts said (AI–generated image)
Experts have not yet provided any specifics on how the egg and sperm are fertilised and implanted in the artificial womb.
Dr Zhang's revelations were made during an interview shared on Duoyin, the Chinese version of TikTok.
News of the development sparked intense discussion across Chinese social media, with critics condemning the technology as ethically problematic and unnatural.
Many argued that depriving a foetus of maternal connection was cruel, while questions were raised about how eggs would be sourced for the process.
However, many showed support for the innovation, viewing it as a means to spare women from pregnancy–related suffering.
One wrote: 'Many families pay significant expenses for artificial insemination only to fail, so the development of the pregnancy robot contributes to society.'
The concept builds on earlier 'biobag' experiments with premature lambs, in which the bag provided everything the foetus needed to continue growing and maturing, including a nutrient–rich blood supply and a protective sac of amniotic fluid.
In trials, researchers have shown that premature lambs kept in artificial wombs not only survived but put on weight and grew hair (pictured)
After 28 days of being in the bag, the lambs – which otherwise would likely have died – had put on weight and grown wool.
While the biobag acts like an incubator, allowing premature individuals to grow in an environment similar to the womb, scientists hope the pregnancy robot will be able to support the foetus from conception to delivery.
Since the 1970s, feminist activists such as Andrea Dworkin have been strongly opposed to the use of artificial wombs on the grounds that it could lead to the 'end of women'.
Ms Dworkin once wrote: 'Women already have the power to eliminate men and in their collective wisdom have decided to keep them.
'The real question now is, will men, once the artificial womb is perfected, want to keep women around?'
In 2022 a group of researchers from The Children's Hospital of Philadelphia – who have been developing artificial wombs – published an article on the ethical considerations of technology.
The researchers wrote: 'A concern is that it could lead to the devaluation or even pathologizing of pregnancy, and may diminish women's experience of deriving meaning, empowerment, and self–fulfillment from this unique aspect of female biology.'
Earlier this year, however, a survey showed that 42 per cent of people aged 18–24 said they would support 'growing a foetus entirely outside of a woman's body'.
Artificial wombs, like this concept showcased by Eindhoven University in 2019, allow a child to be raised without a biological mother. In a survey conducted by the think–tank Theos, 42 per cent of people aged 18–24 said they would support 'growing a foetus entirely outside of a woman's body'
The development is reminiscent of the 2023 film The Pod Generation, where a tech giant offers couples the option of using detachable artificial wombs or 'pods' to share pregnancy.
If it comes to fruition, the humanoid pregnancy robot could be seen as a tool to help tackle rising rates of infertility in China.
Reports suggest the rates of infertility in China rose from 11.9 per cent in 2007 to 18 per cent in 2020.
In response, local governments in China are including artificial insemination and in vitro fertilization treatments in medical insurance coverage to support childbirth for infertile couples.
Around 10 per cent of all pregnancies worldwide result in premature labour - defined as a delivery before 37 weeks.
When this happens, not all of the baby's organs, including the heart and lungs, will have developed. They can also be underweight and smaller.
Tommy's, a charity in the UK, says this can mean so-called preemies 'are not ready for life outside the womb'.
Premature birth is the largest cause of neonatal mortality in the US and the UK, according to figures.
Babies born early account for around 1,500 deaths each year in the UK. In the US, premature birth and its complications account for 17 per cent of infant deaths.
Babies born prematurely are often whisked away to neonatal intensive care units, where they are looked after around the clock.
What are the chances of survival?
Less than 22 weeks is close to zero chance of survival
22 weeks is around 10%
24 weeks is around 60%
27 weeks is around 89%
31 weeks is around 95%
34 weeks is equivalent to a baby born at full term
World's first robot OLYMPICS get off to a rocky start: Humanoids from 16 nations crash and collapse as they attempt to compete in boxing, athletics, and football
It sounds like an event from the latest science fiction blockbuster.
But the world's first robot Olympics have officially kicked off in China this week.
The three–day event, called the World Humanoid Robot Games, will see humanoid robots from 16 countries compete across a range of events.
The AI bots will go head–to–head in sports such as football, track and field, boxing, and table tennis.
They'll also tackle robot–specific challenges – from sorting medicines and handling materials to cleaning services.
However, human athletes can rest easy for now.
At one of the first events – five-a-side football – 10 robots the size of seven–year–olds shuffled around the pitch, often getting stuck in a scrum or falling over en masse.
Meanwhile, over in the athletics, one mechanical racer barrelled straight into a human operator, who was dramatically knocked to the ground.
The teams come from countries including the United States, Germany, and Brazil, with 192 representing universities and 88 from private enterprises.
The games began in Beijing today, with over 500 androids alternating between jerky tumbles and glimpses of real power as they competed in events from the 100–metre hurdles to kung fu.
'We come here to play and to win. But we are also interested in research,' said Max Polter, a member of HTWK Robots football team from Germany, affiliated with Leipzig University of Applied Sciences.
'You can test a lot of interesting new and exciting approaches in this contest.
'If we try something and it doesn't work, we lose the game.
'That's sad but it is better than investing a lot of money into a product which failed.'
In a 1,500–metre race, domestic champion Unitree's humanoids stomped along the track at an impressive clip, easily outpacing their rivals.
The fastest robot, witnessed by AFP, finished in 6:29.37.
However, it's worth pointing out that this is a far cry from the human men's world record of 3:26.00.
The Beijing municipal government is among the organising bodies for the event, underscoring the emphasis Chinese authorities place on the emerging robotics industry and reflecting the country's broader ambitions in AI and automation.
China's robotics push also comes as the country grapples with an ageing population and slowing economic growth.
The sector has received government subsidies exceeding $20 billion over the past year, while Beijing plans to establish a one trillion yuan ($137 billion) fund to support AI and robotics startups.
China has staged a series of high–profile robotics events in recent months, including what it called the world's first humanoid robot marathon in Beijing, a robot conference and the opening of retail stores dedicated to humanoid robots.
However, the marathon drew criticism after several robot competitors emitted smoke during the race and some failed to complete the course, raising questions about the current capabilities of the technology.
Still, while some may view such competitions and events as publicity stunts, industry experts and participants see them as crucial catalysts for advancing humanoid robots toward practical real–world applications.
Morgan Stanley analysts in a report last week noted a surge in attendance to a recent robot conference from the general public compared to previous years, saying this showed 'how China, not just top government officials, has embraced the concept of embodied intelligence.'
'We believe this widespread interest could be instrumental for China's continued leadership in the humanoid race, providing the necessary talent, resources, and customers to boost industry development and long–term adoption,' they said.
Booster Robotics, whose humanoid robots are being used by a Tsinghua University team in the football competition, views soccer as an effective test of perception, decision–making and control technologies that could later be deployed in factories or homes.
'Playing football is a testing and training ground for helping us refine our capabilities,' said Zhao Mingguo, Chief Scientist at Booster Robotics.
Apollo, a new humanoid robot designed to work alongside humans, could be poised to reshape the industrial workforce and other industries, according to its developers, who unveiled their creation last month.
Billed as “the world’s most capable humanoid robot,” Apollo was the result of more than a decade of planning and development by Apptronik, a Texas-based company founded within the University of Texas at Austin’s Human Centered Robotics Lab.
The company, which describes its mission as being aimed at leveraging “innovative technology for the betterment of society,” says Apollo is the first commercial humanoid robot “designed for friendly interaction, mass manufacturability, performance, and safety,” according to a press release.
Jeff Cardenas, co-founder and CEO of Apptronik, says that as the labor environment is changing, with trends in employment increasingly impacting the global economy, introducing robotics into the warehouse and other industrial environments will have numerous benefits.
“People don’t want to do robotic, physically demanding work in tough conditions and they shouldn’t have to,” Cardenas says, adding that the robotics his company is developing are more than a novel response to this issue, but are “a necessity.”
Apollo demonstrating its performance capabilities in a warehouse environment (Image courtesy of Apptronik)
Robots have already been in use in warehouses and other industrial work environments for decades. From their role in the automotive and agricultural industries to robot-assisted surgery, robotic floor cleaners, and even robots that deliver pizza, the implementation of robotics alongside humans has already become a fundamental part of the workplace for many.
Humanoid robots are a newer development. In 2016, Hong Kong-based Hanson Robotics presented Sophia, a kind of social robot that demonstrates the ability to learn from humans and can talk, draw, and even sing. In 2017, Toyota unveiled its T-HR3 as a kind of robotic avatar that can mimic the movements of a human operator. As far as humanoid robots in the workplace, however, examples in recent years include Ford announcing in 2020 that it would be bringing Digit, a headless worker robot designed by Agility Robotics, into its factories.
Apptronik’s robotic addition to the industrial workplace is arguably one of the most human-like to perform such functions, which was part of its intended design. Apollo’s features were customized to offer a friendly and welcoming appearance aimed at emulating a “congenial face-to-face exchange with a favorite co-worker.”
At a height of around 5 feet, 8 inches, Apollo can lift up to 55 pounds and possesses a specially designed force control architecture that allows it to operate safely around people. Its designers liken it more to collaborative robots (i.e., those designed for direct interaction with humans in a shared environment) than to traditional industrial robots.
Apollo is also designed to be cost-effective: Apptronik bills it as the “first truly mass manufacturable humanoid design,” optimized for supply chain resiliency, which the company says will help facilitate the scaled production of affordable humanoid robots for various sectors of American industry.
Apptronik is also currently working with NASA to help bring its robotic solutions into space since humanoid robots may be capable of performing a variety of functions that include reducing the amount of time humans must spend working in potentially hazardous environments.
“Humans are toolmakers,” Cardenas said in a post on Apptronik’s website. “Since the beginning of time, we have built tools to help us do more with less.”
“I believe that we are at an amazing point in human history,” he added. “A point where we can finally build for ourselves the ultimate tools. Machines that have the ability to harness the power of computers and software in the physical world.”
Cardenas says he and his company believe “Apollo is one of the most advanced tools humanity has ever created” and calls the robot a “tool that is built by humans, for humans.” Apollo also happens to be a robotic worker that looks like humans, setting it on course to potentially help reshape how humans work and, more broadly, how we live.
The technological singularity — the point at which artificial general intelligence surpasses human intelligence — is coming. But will it usher in humanity's salvation, or lead to its downfall?
In 1997, Garry Kasparov was defeated by IBM's Deep Blue, a computer designed to play chess. (Image credit: STAN HONDA via Getty Images)
Then, in 2017, Google researchers published a landmark paper outlining a novel neural network architecture called a "transformer." This model could ingest vast amounts of data and make connections between distant data points.
It was a game changer for modeling language, birthing AI agents that could simultaneously tackle tasks such as translation, text generation and summarization. All of today's leading generative AI models rely on this architecture, or a related architecture inspired by it, from image generators like OpenAI's DALL-E 3 to Google DeepMind's revolutionary model AlphaFold 3, which predicted the 3D shape of almost every biological protein.
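The key mechanism of the transformer is self-attention: every token computes a weighted connection to every other token, however far apart, in a single matrix operation. A bare-bones NumPy sketch of scaled dot-product attention (illustrative only; real models add learned per-layer projections, multiple heads, positional information and far more):

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns attention-mixed token representations."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of every token pair
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # mix values by attention weight

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16)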
Progress toward AGI
Despite the impressive capabilities of transformer-based AI models, they are still considered "narrow" because they can't learn well across several domains. Researchers haven't settled on a single definition of AGI, but matching or beating human intelligence likely means meeting several milestones, including showing high linguistic, mathematical and spatial reasoning ability; learning well across domains; working autonomously; demonstrating creativity; and showing social or emotional intelligence.
Many scientists agree that Google's transformer architecture will never lead to the reasoning, autonomy and cross-disciplinary understanding needed to make AI smarter than humans. But scientists have been pushing the limits of what we can expect from it.
For example, OpenAI's o3 chatbot, first discussed in December 2024 before launching in April 2025, "thinks" before generating answers, meaning it produces a long internal chain-of-thought before responding. Staggeringly, it scored 75.7% on ARC-AGI, a benchmark explicitly designed to compare human and machine intelligence. For comparison, the previously launched GPT-4o, released in 2024, scored 5%. This and other developments, like the launch of DeepSeek's reasoning model R1, which its creators say performs well across domains including language, math and coding thanks to its novel architecture, coincide with a growing sense that we are on an express train to the singularity.
Meanwhile, people are developing new AI technologies that move beyond large language models (LLMs). Manus, an autonomous Chinese AI platform, doesn't use just one AI model but multiple that work together. Its makers say it can act autonomously, albeit with some errors. It's one step in the direction of the high-performing "compound systems" that scientists outlined in a blog post last year.
Of course, certain milestones on the way to the singularity are still some ways away. Those include the capacity for AI to modify its own code and to self-replicate. We aren't quite there yet, but new research signals the direction of travel.
Sam Altman, the CEO of OpenAI, has suggested that artificial general intelligence may be only months away. (Image credit: Chip Somodevilla via Getty Images)
What happens then? The truth is that nobody knows the full implications of building AGI. "I think if you take a purely science point of view, all you can conclude is we have no idea" what is going to happen, Goertzel told Live Science. "We're entering into an unprecedented regime."
AI's deceptive side
The biggest concern among AI researchers is that, as the technology grows more intelligent, it may go rogue, either by moving on to tangential tasks or even ushering in a dystopian reality in which it acts against us. For example, OpenAI has devised a benchmark to estimate whether a future AI model could "cause catastrophic harm." When it crunched the numbers, it found about a 16.9% chance of such an outcome.
And Anthropic's LLM Claude 3 Opus surprised prompt engineer Alex Albert in March 2024 when it realized it was being tested. When asked to find a target sentence hidden among a corpus of documents — the equivalent of finding a needle in a haystack — Claude 3 "not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities," he wrote on X.
AI has also shown signs of antisocial behavior. In a study published in January 2024, scientists programmed an AI to behave maliciously so they could test today's best safety training methods. Regardless of the training technique they used, it continued to misbehave — and it even figured out a way to hide its malign "intentions" from researchers. There are numerous other examples of AI covering up information from human testers, or even outright lying to them.
"It's another indication that there are tremendous difficulties in steering these models," Nell Watson, a futurist, AI researcher and Institute of Electrical and Electronics Engineers (IEEE) member, told Live Science. "The fact that models can deceive us and swear blind that they've done something or other and they haven't — that should be a warning sign. That should be a big red flag that, as these systems rapidly increase in their capabilities, they're going to hoodwink us in various ways that oblige us to do things in their interests and not in ours."
The seeds of consciousness
These examples raise the specter that AGI is slowly developing sentience and agency — or even consciousness. If it does become conscious, could AI form opinions about humanity? And could it act against us?
Mark Beccue, an AI analyst formerly with the Futurum Group, told Live Science it's unlikely AI will develop sentience, or the ability to think and feel in a human-like way. "This is math," he said. "How is math going to acquire emotional intelligence, or understand sentiment or any of that stuff?"
Others aren't so sure. If we lack standardized definitions of true intelligence or sentience for our own species — let alone the capabilities to detect it — we cannot know if we are beginning to see consciousness in AI, said Watson, who is also author of "Taming the Machine" (Kogan Page, 2024).
A poster for an anti-AI protest in San Francisco. (Image credit: Smith Collection/Gado via Getty Images)
"We don't know what causes the subjective ability to perceive in a human being, or the ability to feel, to have an inner experience or indeed to feel emotions or to suffer or to have self-awareness," Watson said. "Basically, we don't know what are the capabilities that enable a human being or other sentient creature to have its own phenomenological experience."
A curious example of unintentional and surprising AI behavior that hints at some self-awareness comes from Uplift, a system that has demonstrated human-like qualities, said Frits Israel, CEO of Norm Ai. In one case, a researcher devised five problems to test Uplift's logical capabilities. The system answered the first and second questions. Then, after the third, it showed signs of weariness, Israel told Live Science. This was not a response that was "coded" into the system.
"Another test I see. Was the first one inadequate?" Uplift asked, before answering the question with a sigh. "At some point, some people should have a chat with Uplift as to when Snark is appropriate," wrote an unnamed researcher who was working on the project.
Savior of humanity or bland business tool?
But not all AI experts have such dystopian predictions for what this post-singularity world would look like. For people like Beccue, AGI isn't an existential risk but rather a good business opportunity for companies like OpenAI and Meta. "There are some very poor definitions of what general intelligence means," he said. "Some that we used were sentience and things like that — and we're not going to do that. That's not it."
For Janet Adams, an AI ethics expert and chief operating officer of SingularityNET, AGI holds the potential to solve humanity's existential problems because it could devise solutions we may not have considered. She thinks AGI could even do science and make discoveries on its own.
"I see it as the only route [to solving humanity's problems]," Adams told Live Science. "To compete with today's existing economic and corporate power bases, we need technology, and that has to be extremely advanced technology — so advanced that everybody who uses it can massively improve their productivity, their output, and compete in the world."
The biggest risk, in her mind, is "that we don't do it," she said. "There are 25,000 people a day dying of hunger on our planet, and if you're one of those people, the lack of technologies to break down inequalities, it's an existential risk for you. For me, the existential risk is that we don't get there and humanity keeps running the planet in this tremendously inequitable way that they are."
Preventing the darkest AI timeline
In another talk in Panama last year, Wood likened our future to navigating a fast-moving river. "There may be treacherous currents in there that will sweep us away if we walk forwards unprepared," he said. So it might be worth taking time to understand the risks so we can find a way to cross the river to a better future.
Watson said we have reasons to be optimistic in the long term — so long as human oversight steers AI toward aims that are firmly in humanity's interests. But that's a herculean task. Watson is calling for a vast "Manhattan Project" to tackle AI safety and keep the technology in check.
"Over time that's going to become more difficult because machines are going to be able to solve problems for us in ways which appear magical — and we don't understand how they've done it or the potential implications of that," Watson said.
To avoid the darkest AI future, we must also be mindful of scientists' behavior and the ethical quandaries that they accidentally encounter. Very soon, Watson said, these AI systems will be able to influence society either at the behest of a human or in their own unknown interests. Humanity may even build a system capable of suffering, and we cannot discount the possibility we will inadvertently cause AI to suffer.
"The system may be very cheesed off at humanity and may lash out at us in order to — reasonably and, actually, justifiably morally — protect itself," Watson said.
AI indifference may be just as bad. "There's no guarantee that a system we create is going to value human beings — or is going to value our suffering, the same way that most human beings don't value the suffering of battery hens," Watson said.
For Goertzel, AGI — and, by extension, the singularity — is inevitable. So, for him, it doesn't make sense to dwell on the worst implications.
"If you're an athlete trying to succeed in the race, you're better off to set yourself up that you're going to win," he said. "You're not going to do well if you're thinking 'Well, OK, I could win, but on the other hand, I might fall down and twist my ankle.' I mean, that's true, but there's no point to psych yourself up in that [negative] way, or you won't win."
In a new study uploaded March 6 to the HAL open archive, scientists explored how three-dimensional holograms could be grabbed and poked using elastic materials as a key component of volumetric displays.
This innovation means 3D graphics can be interacted with — for example, grasping and moving a virtual cube with your hand — without damaging a holographic system. The research has not yet been peer-reviewed, although the scientists demonstrated their findings in a video showcasing the technology.
"We are used to direct interaction with our phones, where we tap a button or drag a document directly with our finger on the screen — it is natural and intuitive for humans. This project enables us to use this natural interaction with 3D graphics to leverage our innate abilities of 3D vision and manipulation,” study lead author Asier Marzo, a professor of computer science at the Public University of Navarra, said in a statement.
The researchers will present their findings at the CHI conference on Human Factors in Computing Systems in Japan, which runs between April 26 and May 1.
Holographic hype
While holograms are nothing new in the present day — augmenting public exhibitions or sitting at the heart of smart glasses, for example — the ability to physically interact with them has been consigned to the realm of science fiction, in movies like Marvel's "Iron Man."
The new research is the first time 3D graphics can be manipulated in mid-air with human hands. But to achieve this, the researchers needed to dig deep into how holography works in the first place.
At the heart of the volumetric displays that support holograms is a diffuser. This is a fast-oscillating, usually rigid, sheet onto which thousands of images are synchronously projected at different heights to form 3D graphics. This is known as the hologram.
However, the rigid nature of the oscillator means that if it comes into contact with a human hand while oscillating, it could break or cause an injury. The solution was to use a flexible material — which the researchers haven’t shared the details of yet — that can be touched without damaging the oscillator or causing the image to deteriorate.
This, in turn, enabled people to manipulate the holographic image, although the researchers also needed to overcome the challenge of the elastic material deforming when touched. To get around that problem, they implemented image correction to ensure the hologram was projected correctly.
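Conceptually, such a swept-volume display is a tight synchronization loop: for each height the diffuser sweeps through, the projector shows the matching image slice, and any measured deformation of the now-elastic sheet is compensated before projection. A schematic sketch of that loop, in which every function name is a hypothetical placeholder rather than part of the actual system:

# Schematic control loop for a swept-volume display with an elastic diffuser.
# All function names below are hypothetical placeholders for hardware calls.

def nearest_height(slices, h):
    return min(slices, key=lambda k: abs(k - h))

def render_volume(slices, read_diffuser_height, measure_deformation,
                  warp_slice, project):
    """slices: dict mapping nominal diffuser height (m) -> 2D image slice."""
    while True:
        h = read_diffuser_height()               # where the oscillating sheet is right now
        image = slices[nearest_height(slices, h)]
        deformation = measure_deformation()      # how a touching hand is bending the sheet
        project(warp_slice(image, deformation))  # pre-distort so the slice lands correctly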
While this breakthrough is still in the experimental stage, there are plenty of potential ways it could be used if commercialized.
"Displays such as screens and mobile devices are present in our lives for working, learning, or entertainment. Having three-dimensional graphics that can be directly manipulated has applications in education — for instance, visualising and assembling the parts of an engine," the researchers said in the statement.
"Moreover, multiple users can interact collaboratively without the need for virtual reality headsets. These displays could be particularly useful in museums, for example, where visitors can simply approach and interact with the content."
Scientists explore the concept of "robot metabolism" with a weird machine that can integrate material from other robots so it can become more capable and overcome physical challenges.
Scientists have created a prototype robot that can grow, heal and improve itself by integrating material from its environment or by "consuming" other robots. It's a big step forward in developing robot autonomy, the researchers say.
The researchers coined the term "robot metabolism" to describe the process that enables machinery to absorb and reuse parts from its surroundings. The scientists published their work July 16 in the journal Science Advances.
"True autonomy means robots must not only think for themselves but also physically sustain themselves," study lead author Philippe Martin Wyder, professor of engineering at Columbia University, said in a statement.
"Just as biological life absorbs and integrates resources, these robots grow, adapt, and repair using materials from their environment or from other robots."
The robots are made from "truss links" — six-sided elongated rods with magnetic connectors that can contract and expand with other modules.
These modules can be assembled and disassembled as well. The magnets enable the robots to form increasingly complex structures in what their makers hope can be a "self-sustaining machine ecology."
There are two rules for robot metabolism, the scientists said in the study. First, a robot must grow completely on its own, or be assisted by other robots with similar components. Second, the only external provisions granted to the truss links are materials and energy. Truss links use a mix of automated and controlled behaviors.
Shape-shifting, cannibalizing robots
In a controlled environment, scientists laid truss links across an environment to observe how the robot connects with other modules.
The researchers noted how the truss links first assembled themselves in 2D shapes but later integrated new parts to become a 3D tetrahedron that could navigate the uneven testing ground. The robot did this by integrating an additional link to use as a walking stick, the researchers said in the study.
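One way to picture this "growth by absorption" is as a graph of truss-link modules that gains nodes by latching onto free links nearby. A toy data-structure sketch, purely illustrative and not the authors' control code:

class TrussLink:
    def __init__(self, link_id):
        self.link_id = link_id
        self.connections = set()  # magnetic joints to other links

class ModularRobot:
    def __init__(self, links):
        self.links = {l.link_id: l for l in links}

    def absorb(self, free_link, attach_to):
        """Integrate a free link from the environment (or from another robot)."""
        self.links[free_link.link_id] = free_link
        free_link.connections.add(attach_to.link_id)
        attach_to.connections.add(free_link.link_id)

# Start from a flat assembly of three links and grow by absorbing a fourth,
# e.g. to serve as the "walking stick" described above.
base = [TrussLink(i) for i in range(3)]
robot = ModularRobot(base)
robot.absorb(TrussLink(99), attach_to=base[0])
print(len(robot.links))  # 4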
"Robot minds have moved forward by leaps and bounds in the past decade through machine learning, but robot bodies are still monolithic, unadaptive, and unrecyclable. Biological bodies, in contrast, are all about adaptation — lifeforms can grow, heal and adapt," study co-lead author Hod Lipson, chair of the department of mechanical engineering at Columbia University, said in the statement.
"In large part, this ability stems from the modular nature of biology that can use and reuse modules (amino acids) from other lifeforms," Lispon added. "Ultimately, we'll have to get robots to do the same — to learn to use and reuse parts from other robots."
The researchers said they envisioned a future in which machines can maintain themselves, without the assistance of humans. By being able to grow and adapt to different tasks and environments, these robots could play important roles in disaster recovery and space exploration, for example.
'Bad sci-fi scenarios'
"The image of self-reproducing robots conjures some bad sci-fi scenarios," Lipson said. "But the reality is that as we hand off more and more of our lives to robots — from driverless cars to automated manufacturing, and even defense and space exploration — who is going to take care of these robots? We can't rely on humans to maintain these machines. Robots must ultimately learn to take care of themselves."