The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife, Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world
Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you also fascinated by the unknown? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that conduct in-depth research, even if they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you won't want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit its website at www.mufon.com for more information.
Collaboration and Vision for the Future
Since 1 February 2020, Pieter has been not only a former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong collaboration with the French MUFON Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Please Note: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, just like you, long for answers and adventures among the stars!
Do you have questions or would you like to know more? Then don't hesitate to contact us! Together we will unravel the mysteries of the sky and beyond.
01-04-2023
3D-printable glass is made from proteins and biodegrades
Chemically modifying the ends of the molecules opens the door to glass that could decompose with organic waste.
Researchers have modified amino acids and peptides and then coaxed them into a transparent glass. Here they demonstrate moulding it into sea-shell shapes.
Credit: R.Xing et al./Science Advances (CC BY 4.0)
Researchers have transformed amino acids and peptides — the building blocks of proteins — into glass, according to a study published in Science Advances [1]. Not only is the biomolecular glass transparent, but it can be 3D printed and cast in moulds. The paper suggests that the glass biodegrades relatively quickly, so it would not be suitable for applications such as drinks bottles, because the liquid would cause it to decompose.
“Nobody ever tried this with biomaterials in the past,” says Jun Liu, a materials scientist at the University of Washington in Seattle. “It’s a good discovery.”
Standard glass is made using inorganic molecules, mainly silicon dioxide. The ingredients are melted down at high temperatures and then rapidly cooled. Glass can be recycled easily, but despite this, a substantial amount ends up in landfill, where it can take thousands of years to break down.
But amino acids are readily broken down by microorganisms, meaning that instead of sitting for years in a dump, the nutrients in biomolecular glass could, in principle, rejoin the ecosystem.
“The development of renewable, benign and degradable materials is highly appealing for a sustainable future,” says Xuehai Yan, a co-author of the study and a chemist at the Chinese Academy of Sciences in Beijing.
Typically, when amino-acid chains, known as peptides, are heated, the molecules start to split up before they melt. Yan and his colleagues modified the ends of the amino acids to change how they assemble and stop them from breaking up. After melting these modified amino acids, the researchers rapidly supercooled them — a process that takes molecules below their freezing point while allowing them to retain their liquid arrangement. The researchers then further cooled the substance to solidify it into glass. It stayed solid when it returned to room temperature.
This method prevents the amino acids and peptides from forming a crystalline structure when they solidify, which would make the glass cloudy, although the authors note that in some cases the glass was not completely colourless.
When the researchers exposed the biomolecular glass to digestive fluids and compost, it took between a few weeks and several months to break down, depending on the chemical modification and amino acid or peptide used.
The glass is just a lab curiosity at this stage: “This is a very fundamental study,” says Ting Xu, a materials scientist at the University of California, Berkeley. However, she says it opens a new path for materials researchers to explore.
Because it can biodegrade, the glass would not be appropriate for use in environments that are very humid or wet, Xu says. Organic chemical bonds tend to be weaker than inorganic bonds, so she speculates that the peptide glass would be less rigid than standard glass. But she says that this property could be beneficial in flexible, miniature devices, such as the lenses of a microscope.
doi: https://doi.org/10.1038/d41586-023-00826-3
References
Xing, R., Yuan, C., Fan, W., Ren, X. & Yan, X. Sci. Adv. 9, eadd8105 (2023).
31-03-2023
FUTURE COMPUTERS COULD RUN ON LAB-GROWN "BRAINS"
Get ready for organ-powered devices.
WRITTEN BY RAHUL RAO
Computers are not mechanical brains, and our brains are not biological computers. They differ in function, organization, and composition. Both have circuits, sure, but computer chips are ultimately bits of silicon alloys pressed into highly designed, extremely convenient sizes and shapes, while our brains are carbon-based masses whose structure is still largely a mystery to neuroscientists.
Since the mid-20th century, people have touted the similarities — and considered the possibility of combining — brains and computers. In 1950, sci-fi author Isaac Asimov helped to devise the idea of a "positronic brain" that could bestow robots with the intelligence and self-awareness of a human.
Computer scientists still dwell on the shared features between minds and machines. Artificial neural networks, which power many of today’s AI, mimic the organization of neurons in the human brain. Other researchers are trying to make computer hardware more brain-like, for instance, by replicating the electrical activity of a neuron on a chip.
Researchers designed artificial “neurons” for a futuristic computer chip.
UNIVERSITY OF BATH
There are also researchers like Thomas Hartung, a biochemist and physician at Johns Hopkins University. Hartung and his colleagues are growing “brain organoids,” collections of human skin cells coaxed into resembling brain cells, in the lab. They want to connect the organoids to sensors and other devices and train them to process and store information with the help of machine learning.
Hartung has lofty goals for these organoids. They could help neuroscientists study how brain cells work together. They could also aid pharmacologists who study brain chemistry — for example, people developing treatments for Alzheimer’s disease. Hartung believes brain organoids can eventually replace the animal subjects typically used in these experiments.
But ultimately, Hartung wants to turn the creations into “biological hardware” for computers. In theory, organoids could perform certain tasks using less energy and hold far more memory than current silicon machines.
Hartung and his colleague Lena Smirnova with an image of a brain organoid.
COURTESY OF THOMAS HARTUNG
This dream is already taking shape. A team of researchers in Australia recently taught a collection of brain cells to play the video game Pong using a method somewhat similar to training a dog.
The team hooked their organoid up to electrodes and fed it details on the ball’s position; the organoid sent electrical signals back to control the paddle. If the organoid successfully hit the ball, the researchers “rewarded” it with an electrical stimulus, somewhat like a treat for a pup that sits on command. The organoid didn’t master Pong, but it managed to perform better with training than it would by random chance.
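The experiment itself involved living cells on a multi-electrode array, but the protocol it describes (present the game state as a stimulus, read out a response, reward successful outcomes) has the shape of a minimal closed feedback loop. The sketch below is only a software analogy under that reading, with a single tunable parameter standing in for whatever the culture adapts internally; it does not reflect the team's actual hardware or methods.

```python
import random

# A minimal closed feedback loop with the same shape as the protocol described
# above: present the state, read a response, reward better outcomes. The single
# "gain" parameter is a stand-in for whatever the culture adapts; this is an
# analogy only, not a model of the organoid experiment.

def run_session(gain, steps=500):
    paddle, hits = 0.0, 0
    for _ in range(steps):
        ball = random.uniform(-1.0, 1.0)        # "stimulus": the ball's position
        paddle += gain * (ball - paddle)        # "response": move the paddle
        hits += abs(ball - paddle) < 0.3        # did the paddle meet the ball?
    return hits / steps

gain = 0.1
for session in range(15):
    baseline = run_session(gain)
    nudged = run_session(gain + 0.05)           # try responding a little more strongly
    if nudged > baseline:                       # keep changes that earn more "reward"
        gain += 0.05
    print(f"session {session:2d}  hit rate {baseline:.2f}")
```

Run for a few sessions, the hit rate drifts upward, which is roughly the "better with training than by chance" result the article describes.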
We spoke to Hartung about what a brain organoid might do next — and when to expect organ-powered computers.
This interview has been edited and condensed for clarity.
If brain cells can play Pong, can they defeat humans?
They only were able to show acute, or short-term, memory. The organoid culture became better and better in each training session, but the next day, everything was forgotten. The expectation is that, now, with the potential to establish long-term memory, we can actually move into memory and learning in the sense people would understand it.
And you cannot easily build production of such complex cell cultures. It takes at least a year. We train many people, but it takes them a year, on average, to get them done.
What’s next for organoid research?
We’re planning to use brain-machine interfaces to control robots. That’s on the plan for about a year’s time from now. So, we want to demonstrate the capability of long-term learning and, ideally, learning a sequence of tasks in a brain organoid.
One of the big changes at the moment is to scale first. We are limited with the brain organoids to about half a millimeter in size … otherwise, we don’t get enough oxygen and nutrients into the center of this cell ball. But that’s just the number of neurons of a fly, so it’s not really worth training. You might lose your organoid and can’t find it anymore!
Our work at the moment aims at producing an organoid which is about 1 centimeter large — which is then, already, twice the size of a mouse brain. That’s substantial, but it requires perfusion, where we create an equivalent to blood vessels to get nutrients into the brain. That’s not rocket science; this has been done for other organs already, but nobody had seen a need so far to produce larger brains.
Tiny organoids in a petri dish at Hartung’s Center for Alternatives to Animal Testing.
CAROLINA ROMERO, CENTER FOR ALTERNATIVE TO ANIMAL TESTING, BLOOMBERG SCHOOL OF PUBLIC HEALTH
How is this method different from using a mouse brain?
At the moment, when you bring a mouse or mouse brain into an experiment, it has a history: There is complex behavior, there is already an architecture in response to the mouse’s life experiences.
With our organoids, we really start from zero. We can influence and control every moment, and by what you feed into it, you can also determine what you study.
Many people are concerned about whether the organoids could suffer, for example. If I don’t give them pain receptors, there cannot be pain reception.
What about a computer that runs on a human brain?
With organoids, we can really control the input. With a human, you cannot really control what this human is experiencing. Even if you put them into a certain controlled environment, you’re limited. You’re also very much limited because — we have a skull. You cannot really poke many electrodes into the human brain easily and then control the experimental situations.
That’s exactly what we can do with organoids.
Hartung creates organoids from human skin cells in his lab.
THOMAS HARTUNG
What advantages might brain organoids have over computers?
There’s a couple of aspects which make the brain still superior to computers. For example, our capability of concluding on the basis of incomplete information, or what we would call intuitive thinking. We can be very fast and take shortcuts. We are often right — not always, but it is much easier to live with a decision that is based on incomplete data.
For example, a child can distinguish cats and dogs after 10 pictures with a pretty good error rate. A computer needs hundreds of pictures.
We can also add information much easier. You learn 10 words in Italian and you add it to your current “model.” Most computers have to just rerun their entire model to integrate this information.
But we should not compete with silicon computers where they are good. My handheld calculator is better than me at doing calculations. Why should I use a brain organoid to make it a calculator? It will likely be limited to what my brain is capable of doing — if it ever achieves something like this.
Organoids could work even quicker than today’s supercomputers, like Japan's record-breaking Fugaku.
STR/AFP/GETTY IMAGES
How far are we from brain-powered computers?
For the last 60 to 70 years, the more we have understood the brain, we have tried to make computers more brain-like, because there are still some advantages.
You can either envisage that you use it as a model to change our computer architecture, or you could at some point even have a biological component to your computational system.
That’s certainly the furthest away. It is science fiction, but I would say: 20 years ago, the iPhone was science fiction.
Are you considering the obvious question: What if these organoids become self-aware?
We are far from anything which is really producing concerns. There is no suffering, there is no self-awareness or consciousness that you can expect from these organoids, for the foreseeable future. But we have to discuss it, because people are feeling uneasy.
So, one of the things our ethicists at Johns Hopkins are doing at the moment is surveying the general population. They are asking, “what do you think about this?” At some point, people say, “Uh, perhaps we should think about this better?”
Then you give them information like, “there’s an informed consent by the donors of these cells,” or “this is done to find drugs for Alzheimer’s.” You test out what people feel about it, and this helps with the communication of this research.
We don’t want this to suddenly backfire. We want to work for the greater good.
29-03-2023
AI blog | AI experts and Elon Musk call for research to be halted
Photo: BELGA
How we deal with information on the internet is changing completely. And our jobs will soon look very different too. In this blog we follow the lightning-fast rise of ChatGPT and generative AI.
Host of this blog: Dominique Deckmyn
Research into the most advanced AI systems should be halted for six months, to allow consultation on the necessary safety measures. In recent hours that call has been signed by, among others, Elon Musk, Apple co-founder Steve Wozniak and historian Yuval Noah Harari.
The open letter is posted on the website of the Future of Life Institute. The growing list of signatories already includes quite a few prominent tech entrepreneurs and thinkers, among them employees of several major players in AI, such as DeepMind (a sister company of Google) and Stability AI (developer of the image generator Stable Diffusion). So far no one from OpenAI has been spotted, the company that launched its most advanced AI model, GPT-4, a few weeks ago.
Strikingly, the letter very explicitly calls for halting only the development of systems 'more powerful than GPT-4'. Front-runner OpenAI would therefore have to suspend development of a hypothetical GPT-5 for six months, while competitors could continue working undisturbed to close their gap with OpenAI. That may be why the top people at OpenAI are not among the signatories, while those of the competitors are. Developing AI applications based on existing AI models would also not be hindered.
The letter opens with a warning about 'human-competitive intelligence' and the societal risks that come with AI that rivals humans. Such a profound change in the history of life on Earth, the letter says, should be planned with great care. Instead, an 'out-of-control race' is under way 'that no one, not even the creators, can understand, predict or control'.
The call to halt for six months the development of systems that surpass GPT-4 is addressed to all AI labs. If no agreement is reached on this, governments should step in and impose a moratorium.
The six-month breathing space should be used to put regulation and regulatory authorities in place. Another demanded measure is a system of built-in watermarks, so that texts produced by such an AI system can be recognised as such.
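The letter does not specify how such watermarks would work. One commonly discussed scheme biases the generator toward a pseudorandom "green list" of tokens, so that a detector can later check whether suspiciously many tokens fall on that list; the toy sketch below illustrates only that statistical idea, not any deployed system. The full text of the open letter follows.

```python
import hashlib
import random

# Toy illustration of one proposed watermarking idea (a pseudorandom "green
# list" of tokens favoured during generation). Not any vendor's actual scheme.

def green_list(prev_token, vocab, fraction=0.5):
    """Pseudorandomly pick a subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(vocab, int(len(vocab) * fraction)))

def green_fraction(tokens, vocab):
    """Detector: how often does a token fall in the green list set by its predecessor?"""
    hits = sum(t in green_list(p, vocab) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

vocab = [f"w{i}" for i in range(1000)]
human_text = [random.choice(vocab) for _ in range(300)]      # unwatermarked: scores near 0.5
bot_text = ["w0"]
for _ in range(300):                                         # watermarked: always samples a green token
    bot_text.append(random.choice(sorted(green_list(bot_text[-1], vocab))))
print(green_fraction(human_text, vocab), green_fraction(bot_text, vocab))
```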
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
28-03-2023
Filmmaker James Cameron and the Godfather of AI Agree: AI Could Destroy Us Soon
Paul Seaburn
Artificial intelligence is here and two people who are closely connected to it in very different ways agree that it has the potential to eliminate humanity … and the takeover may already be underway. One is James Cameron – the filmmaker and screenwriter responsible for some of the most futuristic and dystopian films of all time, including The Terminator, Aliens, The Abyss, Terminator 2: Judgment Day, Avatar and Avatar: The Way of Water. Cameron not only made movies about artificial intelligence – he pioneered its usage in the process of film production. The other is Geoffrey Hinton, a British computer scientist known as the "godfather of artificial intelligence" for his work in training multi-layer neural networks used in artificial intelligence. Both were recently interviewed on the subjects of artificial intelligence and artificial general intelligence and both agree that AI has the ability to take over humanity and the process may have already begun. Are we living in a James Cameron movie? Is it Titanic?
Is this a movie or our destiny?
“I think A.I. can be great, but also it could literally be the end of the world.”
Appearing recently on the SmartLess podcast, James Cameron was pondering whether an uprising of artificially intelligent machines in The Terminator is possible. Not only does he think it can happen, he says the current state of artificial intelligence makes him “pretty concerned about the potential for misuse of A.I.” For those not familiar with the film (spoiler alert), The Terminator is a cybernetic android sent from the future to kill the person whose not-yet-born son is responsible for eventually stopping an artificially intelligent defense network called Skynet, which will become hostile and self-aware and trigger a global nuclear war to exterminate all humans. Needless to say, the recent revelations of conversations with GPT-4 chatbots such as OpenAI’s ChatGPT, Google's PaLM and Microsoft’s Bing AI turning strange, hostile and violent have caused many to equate them to The Terminator and Skynet. Cameron says he understands why.
"You talk to all the AI scientists and every time I put my hand up at one of their seminars they start laughing. The point is that no technology has ever not been weaponized. And do we really want to be fighting something smarter than us that isn't us? On our own world? I don't think so.”
Cameron is, of course, correct in his assessment of the weaponization of technology. However, it is his next comment that is the real cause for concern.
“AI could have taken over the world and already be manipulating it but we just don't know because it would have total control over all the media and everything."
Think about the fears being expressed about ChatGPT and other forms of AI being used to collect news, write news stories and even deliver them in the form of very humanlike – and in this case, ironic – avatars of human newscasters. Could AI have already penetrated the media and be working its way into taking over the world? Is this another Terminator sequel in real life?
"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two. People should be thinking about those issues."
In an interview with CBS News, Geoffrey Hinton, the "godfather of artificial intelligence," thinks Cameron is right to be worried about the weaponization of artificial intelligence and a possible takeover of humanity that could lead to its destruction. Hinton knows what he’s talking about. He is the descendant of computer and mathematics royalty – his great-great-grandmother was Mary Everest Boole, who was influential in promoting mathematics education for both boys and girls, and her husband was logician George Boole, whose invention of Boolean algebra and Boolean logic is credited with laying the foundations for modern computer science and the Information Age. Hinton has carried on the tradition of his illustrious ancestors – he was awarded the 2018 Turing Award, with Yoshua Bengio and Yann LeCun, for their work on deep learning. On the subject of the weaponization of AI, Hinton has been speaking out against it for years – he moved from the U.S. to Canada because he was against the military funding of artificial intelligence, and has regularly spoken out against lethal autonomous weapons. One concern he expressed in the CBS interview was the rapidity of AI development.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less.”
He is also worried about one of the very things he helped develop – computers coming up with their own ideas for self-improvement, warning that “We have to think hard about how you control that." When asked about the possibility of one of Cameron’s Terminators being developed with artificial general intelligence that takes it beyond human capabilities to the point of acting on its own and potentially threatening the very existence of humanity, he answered cautiously:
"It's not inconceivable, that's all I'll say,"
Not inconceivable? Or already deliverable?
Not inconceivable! This is from the godfather of artificial intelligence! Why are we not panicking? Why is Hinton not panicking? Or moving farther away than Canada? He explains that on the more conceivable side, things aren’t so bad.
“The phrase ‘artificial general intelligence’ carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don’t think it’s going to be that. I think more and more of the routine things we do are going to be replaced by AI systems — like the Google Assistant.”
What about ChatGPT?
"We're at this transition point now where ChatGPT is this kind of idiot savant, and it also doesn't really understand about truth."
That is a key problem with ChatGPT – its responses are often far from the truth … but presented as facts as it tries to figure out what it is doing and works towards being truthful, factual and consistent. Hinton’s final warning comes straight out of the Wizard of Oz … we need to be worried about who is doing the development and working the controls behind the curtain.
"You don't want some big for-profit company deciding what's true."
James Cameron and Geoffrey Hinton … geniuses in different fields who agree on the potential dangers of artificial general intelligence. Are we going to listen to them or the big for-profit companies?
Artificial intelligence is permeating every sector of society. Systems like ChatGPT have been rolled out for public consumption boasting an interactive dialogue, and an ability to write ‘in your voice.’ But how ‘intelligent’ is this new artificial intelligence? We have a little fun putting it to the test.
For years now, scientists have been raising ethical concerns about the creation and use of lab-grown mini brains.
At the same time, other scientists are plowing full steam ahead, creating these brain organoids and trying to find ways to put them to good use.
Now, a group of scientists that fall into the latter category are trying to develop something called “organoid intelligence.”
They shared their research in a recent edition of the journal Frontiers in Science.
Essentially, they want to use these lab-grown mini brains as biological hardware for new biocomputers, LiveScience reports.
“While silicon-based computers are certainly better with numbers, brains are better at learning,” said one of the scientists, Thomas Hartung of Johns Hopkins University. “For example, AlphaGo [the AI that beat the world’s number one Go player in 2017] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience these many games.”
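The comparison is simple to check. The figures quoted imply an average of roughly two hours per game, which is an assumption of this back-of-the-envelope arithmetic rather than something stated in the article:

```python
# Back-of-the-envelope check of the AlphaGo comparison above.
# The ~2 hours per game is an assumed average, not a figure from the article.
games = 160_000
hours_per_game = 2
hours_per_day = 5

days_needed = games * hours_per_game / hours_per_day   # 64,000 days
years_needed = days_needed / 365                       # ≈ 175 years
print(f"≈ {years_needed:.0f} years at {hours_per_day} hours of play per day")
```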
But… brains? In a computer? Why?
Fluorescent images illustrating cell types in brain organoids.
In a press release about their research, the scientists wrote, “Brains are not only superior learners, they are also more energy efficient. For instance, the amount of energy spent training AlphaGo is more than is needed to sustain an active adult for a decade.”
Hartung added, “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100bn neurons linked through over 10^15 connection points. It’s an enormous power difference compared to our current technology.”
In parallel, the authors are also developing technologies to communicate with the organoids: in other words, to send them information and read out what they’re ‘thinking’. The authors plan to adapt tools from various scientific disciplines, such as bioengineering and machine learning, as well as engineer new stimulation and recording devices.
“We developed a brain-computer interface device that is a kind of an EEG cap for organoids, which we presented in an article published last August. It is a flexible shell that is densely covered with tiny electrodes that can both pick up signals from the organoid, and transmit signals to it,” said Hartung.
But what about all those sticky ethical questions about creating mini-brains just to do tasks for us humans?
Creating human brain organoids that can learn, remember, and interact with their environment raises complex ethical questions. For example, could they develop consciousness, even in a rudimentary form? Could they experience pain or suffering? And what rights would people have concerning brain organoids made from their cells?
The authors are acutely aware of these issues.
“A key part of our vision is to develop OI in an ethical and socially responsible manner,” Hartung said. “For this reason, we have partnered with ethicists from the very beginning to establish an ‘embedded ethics’ approach. All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves.”
A couple of years prior to that, scientists worried that the mini brains they grew in a lab may be sentient and feel pain.
Is hooking these mini brains up to a computer really a great idea? What if they get access to the internet and start secretly communicating with self-healing superhuman robots?
What are 'minibrains'? Everything to know about brain organoids
Nicoletta Lanese
In the past decade, lab-grown blobs of human brain tissue began making news headlines, as they ushered in a new era of scientific discovery and raised a slew of ethical questions.
These blobs — scientifically known as brain organoids, but often called "minibrains" in the news — serve as miniature, simplified models of full-size human brains. These organoids can potentially be useful in basic research, drug development and even computer science.
However, as scientists make these models more sophisticated, there's a question as to whether they could ever become too similar to human brains and thus gain consciousness, in some form or another.
How are minibrains made?
Scientists grow brain organoids from stem cells, a type of immature cell that can give rise to any cell type, whether blood, skin, bowel or brain.
The stem cells used to grow organoids can either come from adult human cells, or more rarely, human embryonic tissue, according to a 2021 review in the Journal of Biomedical Science. In the former case, scientists collect adult cells and then expose them to chemicals in order to revert them into a stem cell-like state. The resulting stem cells are called "induced pluripotent stem cells" (iPSC), which can be made to grow into any kind of tissue.
To give rise to a minibrain, scientists embed these stem cells in a protein-rich matrix, a substance that supports the cells as they divide and form a 3D shape. Alternatively, the cells may be grown atop a physical, 3D scaffold, according to a 2020 review in the journal Frontiers in Cell and Developmental Biology.
To coax the stem cells to form different tissues, scientists introduce specific molecules and growth factors — substances that spur cell growth and replication — into the cell culture system at precise points in their development. In addition, scientists often place the stem cells in spinning bioreactors as they grow into minibrains. These devices keep the growing organoids suspended, rather than smooshed against a flat surface; this helps the organoids absorb nutrients and oxygen from the well-stirred solution surrounding them.
Brain organoids grow more complex as they develop, similar to how human embryos grow more and more complex in the womb. Over time, the organoids come to contain multiple kinds of cells found in full-size human brains; mimic specific functions of human brain tissue; and show similar spatial organization to isolated regions of the brain, though both their structure and function are simpler than that of a real human brain, according to the Journal of Biomedical Science review.
Why are scientists growing minibrains?
Minibrains can be used in a variety of applications. For example, scientists are using the blobs of tissue to study early human development.
To this end, scientists have grown brain organoids with a set of eye-like structures called "optic cups;" in human embryos in the womb, the optic cup eventually gives rise to the light-sensitive retina at the back of the eye. Another group grew organoids that generate brain waves similar to those seen in preterm babies, and another used minibrains to help explain why a common drug can cause birth defects and developmental disorders if taken during pregnancy. Models like these allow researchers to glimpse the brain as it appears in early pregnancy, a feat that would be both difficult and unethical in humans.
Minibrains can also be used to model conditions that affect adults, including infectious diseases that affect the brain, brain tumors and neurodegenerative disorders like Alzheimer's and Parkinson's disease, according to the Frontiers in Cell and Developmental Biology review. In addition, some groups are developing minibrains for drug screening, to see if a given medication could be toxic to human patients' brains, according to a 2021 review in the journal Frontiers in Genetics.
Such models could complement or eventually replace research conducted with cells in lab dishes and in animals; even studies in primates, whose brains closely resemble humans', can't reliably capture exactly what happens in human disease. For now, though, experts agree that brain organoids are not advanced enough to partially or fully replace established cell and animal models of disease. But someday, scientists hope these models will lead to the development of new drugs and reduce the need for animal research; some researchers are even testing whether it could be feasible to repair the brain by "plugging" injuries with lab-grown human minibrains.
A histological image shows a cross-section of a rat's brain, depicted in red, with a glowing green blob on the top right side; the blob is a clump of cells called an organoid that has been derived from human stem cells and transplanted into the rat's brain.
Beyond medicine and the study of human development, minibrains can also be used to study human evolution. Recently, scientists used brain organoids to study which genes allowed the human brain to grow so large, and others have used organoids to study how human brains differ from those of apes and Neanderthals.
Finally, some scientists want to use brain organoids to power computer systems. In an early test of this technology, one group recently crafted a minibrain out of human and mouse brain cells that successfully played "Pong" after being hooked up to a computer-controlled electrode array.
And in a recent proposal published in the journal Frontiers in Science, scientists announced their plans to grow large brain organoids, containing tens of thousands to millions of cells, and link them together to create complex networks that can serve as the basis for future biocomputers.
Could minibrains ever be sentient?
Although sometimes called "minibrains," brain organoids aren't truly miniaturized human brains. Rather, they are roughly spherical balls of brain tissue that mimic some features of the full-size human brain. For example, cerebral organoids, which contain cell types found in the cerebral cortex, the wrinkled outer surface of the brain, contain several layers of tissue, as a real cortex would.
Similarly, brain organoids can generate chemical messages and brain waves similar to what's seen in a full-size brain, but that doesn't mean they can "think," experts say. That said, one sticking point in this discussion is the fact that neuroscientists don't have an agreed-upon definition of consciousness, nor do they have standardized ways to measure the phenomenon, Nature reported in 2020.
The National Academies of Sciences, Engineering, and Medicine assembled a committee to tackle these quandaries and released a report in 2021, outlining some of the potential ethical issues of working with brain organoids.
At the time, the authors concluded that "In the foreseeable future, it is extremely unlikely that [brain organoids] would possess capabilities that, given current understanding, would be recognized as awareness, consciousness, emotion, or the experience of pain. From a moral perspective, neural organoids do not differ at present from other in vitro human neural tissues or cultures. However, as scientists develop significantly more complex organoids, the possible need to make this distinction should be revisited regularly."
When Rohit Bhattacharya began his PhD in computer science, his aim was to build a tool that could help physicians to identify people with cancer who would respond well to immunotherapy. This form of treatment helps the body’s immune system to fight tumours, and works best against malignant growths that produce proteins that immune cells can bind to. Bhattacharya’s idea was to create neural networks that could profile the genetics of both the tumour and a person’s immune system, and then predict which people would be likely to benefit from treatment.
But he discovered that his algorithms weren’t up to the task. He could identify patterns of genes that correlated to immune response, but that wasn’t sufficient1. “I couldn’t say that this specific pattern of binding, or this specific expression of genes, is a causal determinant in the patient’s response to immunotherapy,” he explains.
Bhattacharya was stymied by the age-old dictum that correlation does not equal causation — a fundamental stumbling block in artificial intelligence (AI). Computers can be trained to spot patterns in data, even patterns that are so subtle that humans might miss them. And computers can use those patterns to make predictions — for instance, that a spot on a lung X-ray indicates a tumour2. But when it comes to cause and effect, machines are typically at a loss. They lack a common-sense understanding of how the world works that people have just from living in it. AI programs trained to spot disease in a lung X-ray, for example, have sometimes gone astray by zeroing in on the markings used to label the right-hand side of the image3. It is obvious, to a person at least, that there is no causal relationship between the style and placement of the letter ‘R’ on an X-ray and signs of lung disease. But without that understanding, any differences in how such markings are drawn or positioned could be enough to steer a machine down the wrong path.
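A toy version of that failure mode is easy to reproduce: train a classifier on data in which an irrelevant marker feature (the analogue of the letter ‘R’) happens to track the label, then evaluate it where that convention is reversed. The data and model below are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(marker_agreement):
    """One weak 'real' feature plus a 'marker' feature that may simply track the label."""
    y = rng.integers(0, 2, n)
    real = y + rng.normal(0, 2.0, n)                                  # weak genuine signal
    marker = np.where(rng.random(n) < marker_agreement, y, 1 - y)     # spurious shortcut
    return np.column_stack([real, marker]), y

X_train, y_train = make_data(0.95)   # training hospital: the marker almost always matches the label
X_test, y_test = make_data(0.05)     # deployment hospital: the marker convention is reversed

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))   # high, thanks to the shortcut
print("test accuracy :", model.score(X_test, y_test))     # collapses when the shortcut breaks
```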
For computers to perform any sort of decision making, they will need an understanding of causality, says Murat Kocaoglu, an electrical engineer at Purdue University in West Lafayette, Indiana. “Anything beyond prediction requires some sort of causal understanding,” he says. “If you want to plan something, if you want to find the best policy, you need some sort of causal reasoning module.”
Incorporating models of cause and effect into machine-learning algorithms could also help mobile autonomous machines to make decisions about how they navigate the world. “If you’re a robot, you want to know what will happen when you take a step here with this angle or that angle, or if you push an object,” Kocaoglu says.
In Bhattacharya’s case, it was possible that some of the genes that the system was highlighting were responsible for a better response to the treatment. But a lack of understanding of causality meant that it was also possible that the treatment was affecting the gene expression — or that another, hidden factor was influencing both. The potential solution to this problem lies in something known as causal inference — a formal, mathematical way to ascertain whether one variable affects another.
Computer scientist Rohit Bhattacharya (back) and his team at Williams College in Williamstown, Massachusetts, discuss adapting machine learning for causal inference.
Credit: Mark Hopkins
Causal inference has long been used by economists and epidemiologists to test their ideas about causation. The 2021 Nobel prize in economic sciences went to three researchers who used causal inference to ask questions such as whether a higher minimum wage leads to lower employment, or what effect an extra year of schooling has on future income. Now, Bhattacharya is among a growing number of computer scientists who are working to meld causality with AI to give machines the ability to tackle such questions, helping them to make better decisions, learn more efficiently and adapt to change.
A notion of cause and effect helps to guide humans through the world. “Having a causal model of the world, even an imperfect one — because that’s what we have — allows us to make more robust decisions and predictions,” says Yoshua Bengio, a computer scientist who directs Mila – Quebec Artificial Intelligence Institute, a collaboration between four universities in Montreal, Canada. Humans’ grasp of causality supports attributes such as imagination and regret; giving computers a similar ability could transform their capabilities.
Climbing the ladder
The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.
In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning4. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.
A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
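A minimal illustration of the difference between ‘seeing’ and ‘doing’: in the toy structural model below, a hidden common cause Z drives both X and Y, so X and Y are correlated in observational data even though changing X does nothing to Y. Simulating the intervention do(X = x) makes that explicit. This is a generic textbook-style example, not the method of any particular group.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def simulate(intervene_x=None):
    """Structural model: Z -> X and Z -> Y, with NO direct effect of X on Y."""
    z = rng.normal(size=n)
    x = z + rng.normal(size=n) if intervene_x is None else np.full(n, float(intervene_x))
    y = 2 * z + rng.normal(size=n)          # y depends only on z
    return x, y

# 'Seeing': in purely observational data, x and y look strongly related (via z)
x, y = simulate()
print("observational corr(x, y):", round(np.corrcoef(x, y)[0, 1], 2))   # ≈ 0.63

# 'Doing': setting x by intervention leaves y untouched
_, y_low = simulate(intervene_x=0)
_, y_high = simulate(intervene_x=5)
print("mean y under do(x=0):", round(y_low.mean(), 2), "under do(x=5):", round(y_high.mean(), 2))  # both ≈ 0
```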
Yoshua Bengio (front) directs Mila – Quebec Artificial Intelligence Institute in Montreal, Canada.
Credit: Mila-Quebec AI Institute
Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs5 — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.
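Bengio's system uses a neural network to propose graphs, but the underlying idea of checking how compatible a candidate graph is with data can be sketched far more simply: fit each candidate's regressions and compare their total log-likelihoods. The three-variable example below, with data generated from a known chain, is a simplified stand-in for that graph-scoring step, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Synthetic data generated by the chain x -> y -> z
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)
z = -0.8 * y + rng.normal(size=n)
data = {"x": x, "y": y, "z": z}

def node_loglik(child, parents):
    """Gaussian log-likelihood of one node given its parents (via linear regression)."""
    X = np.column_stack([data[p] for p in parents] + [np.ones(n)]) if parents else np.ones((n, 1))
    resid = data[child] - X @ np.linalg.lstsq(X, data[child], rcond=None)[0]
    return -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)

def graph_score(graph):
    """Score a candidate graph, given as {node: list of parents}."""
    return sum(node_loglik(node, parents) for node, parents in graph.items())

candidates = {
    "chain    x->y->z": {"x": [], "y": ["x"], "z": ["y"]},
    "fork     x<-y->z": {"y": [], "x": ["y"], "z": ["y"]},    # Markov-equivalent to the chain
    "collider x->y<-z": {"x": [], "z": [], "y": ["x", "z"]},  # wrongly implies x and z are independent
}
for name, graph in candidates.items():
    print(f"{name}: log-likelihood {graph_score(graph):.0f}")
# The chain and its Markov-equivalent fork tie for the best fit (observational data
# alone cannot separate them, which is where interventions help); the collider,
# which wrongly forces x and z to be independent, scores clearly worse.
```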
Face the changes
A key benefit of causal reasoning is that it could make AI more able to deal with changing circumstances. Existing AI systems that base their predictions only on associations in data are acutely vulnerable to any changes in how those variables are related. When the statistical distribution of learnt relationships changes — whether owing to the passage of time, human actions or another external factor — the AI will become less accurate.
For instance, Bengio could train a self-driving car on his local roads in Montreal, and the AI might become good at operating the vehicle safely. But export that same system to London, and it would immediately break for a simple reason: cars are driven on the right in Canada and on the left in the United Kingdom, so some of the relationships the AI had learnt would be backwards. He could retrain the AI from scratch using data from London, but that would take time, and would mean that the software would no longer work in Montreal, because its new model would replace the old one.
A causal model, on the other hand, allows the system to learn about many possible relationships. “Instead of having just one set of relationships between all the things you could observe, you have an infinite number,” Bengio says. “You have a model that accounts for what could happen under any change to one of the variables in the environment.”
Humans operate with such a causal model, and can therefore quickly adapt to changes. A Canadian driver could fly to London and, after taking a few moments to adjust, could drive perfectly well on the left side of the road. The UK Highway Code means that, unlike in Canada, right turns involve crossing traffic, but it has no effect on what happens when the driver turns the wheel or how the tyres interact with the road. “Everything we know about the world is essentially the same,” Bengio says. Causal modelling enables a system to identify the effects of an intervention and account for it in its existing understanding of the world, rather than having to relearn everything from scratch.
Judea Pearl, director of the Cognitive Systems Laboratory at the University of California, Los Angeles, won the 2011 A.M. Turing Award.
Credit: UCLA Samueli School of Engineering
This ability to grapple with changes without scrapping everything we know also allows humans to make sense of situations that aren’t real, such as fantasy movies. “Our brain is able to project ourselves into an invented environment in which some things have changed,” Bengio says. “The laws of physics are different, or there are monsters, but the rest is the same.”
Counter to fact
The capacity for imagination is at the top of Pearl’s hierarchy of causal reasoning. The key here, Bhattacharya says, is speculating about the outcomes of actions not taken.
Bhattacharya likes to explain such counterfactuals to his students by reading them ‘The Road Not Taken’ by Robert Frost. In this poem, the narrator talks of having to choose between two paths through the woods, and expresses regret that they can’t know where the other road leads. “He’s imagining what his life would look like if he walks down one path versus another,” Bhattacharya says. That is what computer scientists would like to replicate with machines capable of causal inference: the ability to ask ‘what if’ questions.
Imagining whether an outcome would have been better or worse if we’d taken a different action is an important way that humans learn. Bhattacharya says it would be useful to imbue AI with a similar capacity for what is known as ‘counterfactual regret’. The machine could run scenarios on the basis of choices it didn’t make and quantify whether it would have been better off making a different one. Some scientists have already used counterfactual regret to help a computer improve its poker playing6.
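In practice, counterfactual regret is usually tracked with some form of regret matching: after each round, the agent computes how much better each unplayed action would have done and shifts probability toward actions with large accumulated regret. The rock-paper-scissors sketch below is the standard textbook illustration of that idea, not the poker system cited above.

```python
import random

ACTIONS = range(3)   # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy(regrets):
    """Play actions in proportion to their positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [0.0, 0.0, 0.0]
for _ in range(10_000):
    mine = random.choices(ACTIONS, strategy(regrets))[0]
    theirs = random.choices(ACTIONS, [0.4, 0.3, 0.3])[0]        # opponent over-plays rock
    for a in ACTIONS:                                           # counterfactual: what if I had played a?
        regrets[a] += payoff(a, theirs) - payoff(mine, theirs)

print("learned strategy:", [round(p, 2) for p in strategy(regrets)])
# Regret for paper accumulates fastest against a rock-heavy opponent, so the
# strategy shifts toward paper without being told what a "best response" is.
```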
The ability to imagine different scenarios could also help to overcome some of the limitations of existing AI, such as the difficulty of reacting to rare events. By definition, Bengio says, rare events show up only sparsely, if at all, in the data that a system is trained on, so the AI can’t learn about them. A person driving a car can imagine an occurrence they’ve never seen, such as a small plane landing on the road, and use their understanding of how things work to devise potential strategies to deal with that specific eventuality. A self-driving car without the capability for causal reasoning, however, could at best default to a generic response for an object in the road. By using counterfactuals to learn rules for how things work, cars could be better prepared for rare events. Working from causal rules rather than a list of previous examples ultimately makes the system more versatile.
Using causality to program imagination into a computer could even lead to the creation of an automated scientist. During a 2021 online summit sponsored by Microsoft Research, Pearl suggested that such a system could generate a hypothesis, pick the best observation to test that hypothesis and then decide what experiment would provide that observation.
Right now, however, this remains a way off. The theory and basic mathematics of causal inference are well established, but the methods for AI to realize interventions and counterfactuals are still at an early stage. “This is still very fundamental research,” Bengio says. “We’re at the stage of figuring out the algorithms in a very basic way.” Once researchers have grasped these fundamentals, algorithms will then need to be optimized to run efficiently. It is uncertain how long this will all take. “I feel like we have all the conceptual tools to solve this problem and it’s just a matter of a few years, but usually it takes more time than you expect,” Bengio says. “It might take decades instead.”
Bhattacharya thinks that researchers should take a leaf from machine learning, the rapid proliferation of which was in part because of programmers developing open-source software that gives others access to the basic tools for writing algorithms. Equivalent tools for causal inference could have a similar effect. “There’s been a lot of exciting developments in recent years,” Bhattacharya says, including some open-source packages from tech giant Microsoft and from Carnegie Mellon University in Pittsburgh, Pennsylvania. He and his colleagues also developed an open-source causal module they call Ananke. But these software packages remain a work in progress.
Bhattacharya would also like to see the concept of causal inference introduced at earlier stages of computer education. Right now, he says, the topic is taught mainly at the graduate level, whereas machine learning is common in undergraduate training. “Causal reasoning is fundamental enough that I hope to see it introduced in some simplified form at the high-school level as well,” he says.
If these researchers are successful at building causality into computing, it could bring AI to a whole new level of sophistication. Robots could navigate their way through the world more easily. Self-driving cars could become more reliable. Programs for evaluating the activity of genes could lead to new understanding of biological mechanisms, which in turn could allow the development of new and better drugs. “That could transform medicine,” Bengio says.
Even something such as ChatGPT, the popular natural-language generator that produces text that reads as though it could have been written by a human, could benefit from incorporating causality. Right now, the algorithm betrays itself by producing clearly written prose that contradicts itself and goes against what we know to be true about the world. With causality, ChatGPT could build a coherent plan for what it was trying to say, and ensure that it was consistent with facts as we know them.
When asked whether that would put writers out of business, Bengio says that could take some time. “But how about you lose your job in ten years, but you’re saved from cancer and Alzheimer’s,” he says. “That’s a good deal.”
The US National Ignition Facility has reported that it has achieved the phenomenon of ignition.
Credit: Jason Laurea/Lawrence Livermore National Laboratory
Scientists at the world’s largest nuclear-fusion facility have for the first time achieved the phenomenon known as ignition — creating a nuclear reaction that generates more energy than it consumes. News of the breakthrough at the US National Ignition Facility (NIF), made on 5 December and announced today by US President Joe Biden’s administration, has excited the global fusion-research community. That research aims to harness nuclear fusion — the phenomenon that powers the Sun — to provide a source of near-limitless clean energy on Earth. Researchers caution that, despite this latest success, a long path remains to achieving that goal.
“It’s an incredible accomplishment,” says Mark Herrmann, the deputy programme director for fundamental weapons physics at Lawrence Livermore National Laboratory in California, which houses the fusion laboratory. The landmark experiment follows years of work by multiple teams on everything from lasers and optics to targets and computer models, Herrmann says. “That is of course what we are celebrating.”
A flagship experimental facility of the US Department of Energy’s nuclear-weapons programme, designed to study thermonuclear explosions, NIF originally aimed to achieve ignition by 2012 and has faced criticism for delays and cost overruns. In August 2021, NIF scientists announced that they had used their high-powered laser device to achieve a record reaction that crossed a key threshold in achieving ignition, but efforts to replicate that experiment failed. Ultimately, scientists scrapped efforts to replicate that shot, and rethought the experimental design — a choice that paid off last week.
“There were a lot of people who didn’t think it was possible, but I and others who kept the faith feel somewhat vindicated,” says Michael Campbell, former director of the laser energetics laboratory at the University of Rochester in New York and an early proponent of NIF while at Lawrence Livermore lab. “I’m having a cosmo to celebrate.”
Nature looks at NIF’s latest experiment and what it means for fusion science.
What did NIF achieve?
The facility used its set of 192 lasers to deliver 2.05 megajoules of energy onto a pea-sized gold cylinder containing a frozen pellet of the hydrogen isotopes deuterium and tritium. The laser’s pulse of energy caused the capsule to collapse, reaching temperatures only seen in stars and thermonuclear weapons, and the hydrogen isotopes fused into helium, releasing additional energy and creating a cascade of fusion reactions. The laboratory’s analysis suggests that the reaction released some 3.15 MJ of energy — roughly 54% more than went into the reaction, and more than double the previous record of 1.3 MJ.
“Fusion research has been going on since the early 1950s, and this is the first time in the laboratory that fusion has ever produced more energy than it consumed,” says Campbell.
However, although the fusion reactions produced more than 3 MJ of energy — more than was delivered to the target — NIF’s lasers consumed 322 MJ of energy in the process. Still, the experiment qualifies as ignition, a benchmark criterion for fusion reactions.
“It’s a big milestone, but NIF is not a fusion-energy device,” says David Hammer, a nuclear-energy engineer at Cornell University in Ithaca, New York.
Herrmann acknowledges as much, saying that there are many steps on the path to laser fusion energy. “NIF was not designed to be efficient,” he says. “It was designed to be the biggest laser we could possibly build to give us the data we need for the [nuclear] stockpile research programme.”
NIF scientists made multiple changes before the latest laser shot, based in part on analysis and computer modelling of previous experiments. In addition to boosting the laser’s power by around 8%, scientists reduced the number of imperfections in the target and adjusted how they delivered the laser energy to create a more spherical implosion. Operating at the cusp of fusion ignition, the scientists knew that “little changes can make a big difference”, Herrmann says.
Why are these results significant?
On one level, it’s about proving what is possible, and many scientists have hailed the result as a milestone in fusion science. But the results carry particular significance at NIF: the facility was designed to help nuclear-weapons scientists study the intense heat and pressures inside explosions, and that is possible only if the laboratory produces high-yield fusion reactions.
It took more than a decade, “but they can be commended for reaching their goal”, says Stephen Bodner, a physicist who formerly headed the laser plasma branch of the US Naval Research Laboratory in Washington DC. Bodner says the big question now is what the Department of Energy will do next: double down on weapons research at the NIF or pivot to a laser programme geared towards fusion-energy research.
What does this mean for fusion energy?
The latest results have already renewed buzz about a future powered by clean fusion energy, but experts warn that there is a long road ahead.
NIF was not designed with commercial fusion energy in mind — and many researchers doubt that laser-driven fusion will be the approach that ultimately yields fusion energy. Nevertheless, Campbell thinks that its latest success could boost confidence in the promise of laser fusion power and spur a programme focused on energy applications. “This is absolutely necessary to have the credibility to sell an energy programme,” he says.
Lawrence Livermore National Laboratory director Kim Budil described the achievement as a proof of concept. “I don’t want to give you a sense that we’re going to plug the NIF into the grid: that is definitely not how this works,” she said during a press conference in Washington DC. “But this is the fundamental building block of an inertial confinement fusion power scheme.”
There are many other experiments worldwide that are trying to achieve fusion for energy applications using different approaches. But engineering challenges remain, including the design and construction of plants that extract the heat produced by the fusion and use it to generate significant amounts of energy to be turned into usable electricity.
“Although positive news, this result is still a long way from the actual energy gain required for the production of electricity,” said Tony Roulstone, a nuclear-energy researcher at the University of Cambridge, UK, in a statement to the Science Media Centre in London.
Still, “the NIF experiments focused on fusion energy absolutely are valuable on the path to commercial fusion power”, says Anne White, a plasma physicist at the Massachusetts Institute of Technology in Cambridge.
What are the next major milestones in fusion?
To demonstrate that the type of fusion studied at NIF can be a viable way of producing energy, the efficiency of the yield — the energy released compared to the energy that goes into producing the laser pulses — needs to grow by at least two orders of magnitude.
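A rough back-of-the-envelope check, using only the figures quoted above, shows why the gap is described as two orders of magnitude:

laser_energy_on_target = 2.05   # MJ delivered onto the capsule
fusion_yield = 3.15             # MJ released by the fusion reactions
wall_plug_energy = 322          # MJ consumed by NIF's lasers for the shot

print(fusion_yield / laser_energy_on_target)   # ~1.54: the 'ignition' gain
print(fusion_yield / wall_plug_energy)         # ~0.0098: gain measured against the whole facility
print(wall_plug_energy / fusion_yield)         # ~102: roughly the factor of 100 still to be closed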
Researchers will also need to dramatically increase the rate at which the laser’s pulses can be produced and how quickly they can clear the target chamber to prepare for another burn, says Tim Luce, head of science and operation at the international nuclear-fusion reactor ITER, which is under construction in St-Paul-lès-Durance, France.
“Sufficient fusion-energy-producing events at repeated performance would be a major milestone of interest,” says White.
The US$22-billion ITER project — a collaboration between China, the European Union, the United Kingdom, India, Japan, South Korea, Russia and the United States — aims to achieve self-sustaining fusion, meaning that the energy from fusion produces more fusion, through a different technique from NIF’s ‘inertial confinement’ approach. ITER will keep a plasma of deuterium and tritium confined in a doughnut-shaped vacuum chamber, known as a tokamak, and heat it up until the nuclei fuse. Once the reactor starts working towards fusion, currently planned for 2035, it will aim to reach ‘burning’ stage, “where the self-heating power is the dominant source of heating”, Luce explains.
What does it mean for other fusion experiments?
NIF and ITER use only two of many fusion-technology concepts being pursued worldwide. The approaches include the magnetic confinement of plasma, using tokamaks and devices called stellarators; inertial confinement, used by NIF; and hybrids of the two.
The technology required to generate electricity from fusion is largely independent of the concept, says White, and this latest milestone won’t necessarily lead researchers to abandon or consolidate their concepts.
The engineering challenges faced by NIF are different from those at ITER and other facilities. But the symbolic achievement could have widespread effects. “A result like this will bring increased interest in the progress of all types of fusion, so it should have a positive impact on fusion research in general,” says Luce.
This Plane Will Change Travel Forever
This Plane Will Change Travel Forever
Before the experts settled on the now-familiar design for planes, a ton of bizarre flying contraptions were proposed, and the fact that we now have designs that work doesn't stop people from imagining new ways to explore the sky.
21-02-2023
How real is 3D holographic blue beam technology?
How real is 3D holographic blue beam technology?
Much has been written about the infamous Project Blue Beam, which is said to be intended for staging a fake alien invasion.
Now there are rumors that this blue beam technology will soon be rolled out in order to convince people that there is a threat from outer space.
Of course there are secret government projects that use certain technologies, and they have powerful satellites and ground-based systems that can project holograms. They have been developing and perfecting this holographic blue beam technology for decades, and it looks real.
But creating a fake alien invasion would require an enormous amount of resources and would be very difficult to keep secret. Additionally, such an event would likely have major ethical and political implications that the US government would not want to risk, so it is unlikely that the technology would be used to create a false-flag event. But you never know.
Watch the video below of a large fire-breathing dragon flying around during the opening of a baseball game at Happy Dream Park in South Korea in 2019, streamed live on sports broadcasting channels, and it is not hard to imagine what is possible with 3D holographic blue beam technology.
20-02-2023
How will AI change mathematics? Rise of chatbots highlights discussion
How will AI change mathematics? Rise of chatbots highlights discussion
Machine learning tools already help mathematicians to formulate new theories and solve tough problems. But they’re set to shake up the field even more.
AI tools have allowed researchers to solve complex mathematical problems.
Credit: Fadel Senna/AFP/Getty
As interest in chatbots spreads like wildfire, mathematicians are beginning to explore how artificial intelligence (AI) could help them to do their work. Whether it’s assisting with verifying human-written work or suggesting new ways to solve difficult problems, automation is beginning to change the field in ways that go beyond mere calculation, researchers say.
“We’re looking at a very specific question: will machines change math?” says Andrew Granville, a number theorist at the University of Montreal in Canada. A workshop at the University of California, Los Angeles (UCLA), this week explored this question, aiming to build bridges between mathematicians and computer scientists. “Most mathematicians are completely unaware of these opportunities,” says one of the event’s organizers, Marijn Heule, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania.
Akshay Venkatesh, a 2018 winner of the prestigious Fields Medal who is at the Institute for Advanced Study in Princeton, New Jersey, kick-started a conversation on how computers will change maths at a symposium in his honour in October. Two other recipients of the medal, Timothy Gowers at the Collège de France in Paris and Terence Tao at UCLA, have also taken leading roles in the debate.
“The fact that we have people like Fields medallists and other very famous big-shot mathematicians interested in the area now is an indication that it’s ‘hot’ in a way that it didn’t used to be,” says Kevin Buzzard, a mathematician at Imperial College London.
AI approaches
Part of the discussion concerns what kind of automation tools will be most useful. AI comes in two major flavours. In ‘symbolic’ AI, programmers embed rules of logic or calculation into their code. “It’s what people would call ‘good old-fashioned AI’,” says Leonardo de Moura, a computer scientist at Microsoft Research in Redmond, Washington.
The other approach, which has become extremely successful in the past decade or so, is based on artificial neural networks. In this type of AI, the computer starts more or less from a clean slate and learns patterns by digesting large amounts of data. This is called machine-learning, and it is the basis of ‘large language models’ (including chatbots such as ChatGPT), as well as the systems that can beat human players at complex games or predict how proteins fold. Whereas symbolic AI is inherently rigorous, neural networks can only make statistical guesses, and their operations are often mysterious.
2018 Fields Medal winner Akshay Venkatesh (centre) has spoken about how computers will change mathematics.
Credit: Xinhua/Shutterstock
De Moura helped symbolic AI to score some early mathematical successes by creating a system called Lean. This interactive software tool forces researchers to write out each logical step of a problem, down to the most basic details, and ensures that the maths is correct. Two years ago, a team of mathematicians succeeded at translating an important but impenetrable proof — one so complicated that even its author was unsure of it — into Lean, thereby confirming that it was correct.
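For readers unfamiliar with Lean, a toy example (ours, not from the proof mentioned above) shows the style it enforces: every statement is written out formally and must be justified by an existing lemma or an explicit argument before Lean will accept it.

-- A tiny Lean statement: addition of natural numbers is commutative.
-- The justification appeals to a library lemma; if it were wrong or
-- missing, Lean would reject the theorem.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b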
The researchers say the process helped them to understand the proof, and even to find ways to simplify it. “I think this is even more exciting than checking the correctness,” de Moura says. “Even in our wildest dreams, we didn’t imagine that.”
As well as making solitary work easier, this sort of ‘proof assistant’ could change how mathematicians work together by eliminating what de Moura calls a “trust bottleneck”. “When we are collaborating, I may not trust what you are doing. But a proof assistant shows your collaborators that they can trust your part of the work.”
Sophisticated autocomplete
At the other extreme are chatbot-esque, neural-network-based large language models. At Google in Mountain View, California, former physicist Ethan Dyer and his team have developed a chatbot called Minerva, which specializes in solving maths problems. At heart, Minerva is a very sophisticated version of the autocomplete function on messaging apps: by training on maths papers in the arXiv repository, it has learnt to write down step-by-step solutions to problems in the same way that some apps can predict words and phrases. Unlike Lean, which communicates using something similar to computer code, Minerva takes questions and writes answers in conversational English. “It is an achievement to solve some of these problems automatically,” says de Moura.
Minerva shows both the power and the possible limitations of this approach. For example, it can accurately factor integer numbers into primes — numbers that can’t be divided evenly into smaller ones. But it starts making mistakes once the numbers exceed a certain size, showing that it has not ‘understood’ the general procedure.
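The contrast with an ordinary algorithm is instructive. A conventional program factors numbers by following an explicit procedure, so it never 'forgets' the method as inputs grow (it only gets slower), whereas Minerva's answers are statistical continuations of text. A minimal sketch of such a procedure, written by us for illustration:

# Deterministic factorisation by trial division: the same rule is applied
# to every input, however large, in contrast to a language model's
# pattern-matching, which degrades on numbers outside its training data.
def prime_factors(n: int) -> list[int]:
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))        # [2, 2, 2, 3, 3, 5]
print(prime_factors(1234567))    # [127, 9721]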
Still, Minerva’s neural network seems to be able to acquire some general techniques, as opposed to just statistical patterns, and the Google team is trying to understand how it does that. “Ultimately, we’d like a model that you can brainstorm with,” Dyer says. He says it could also be useful for non-mathematicians who need to extract information from the specialized literature. Further extensions will expand Minerva’s skills by studying textbooks and interfacing with dedicated maths software.
Dyer says the motivation behind the Minerva project was to see how far the machine-learning approach could be pushed; a powerful automated tool to help mathematicians might end up combining symbolic AI techniques with neural networks.
Maths v. machines
In the longer term, will programs remain part of the supporting cast, or will they be able to conduct mathematical research independently? AI might get better at producing correct mathematical statements and proofs, but some researchers worry that most of those would be uninteresting or impossible to understand. At the October symposium, Gowers said that there might be ways of teaching a computer some objective criteria for mathematical relevance, such as whether a small statement can embody many special cases or even form a bridge between different subfields of maths. “In order to get good at proving theorems, computers will have to judge what is interesting and worth proving,” he said. If they can do that, the future of humans in the field looks uncertain.
Computer scientist Erika Abraham at RWTH Aachen University in Germany is more sanguine about the future of mathematicians. “An AI system is only as smart as we program it to be,” she says. “The intelligence is not in the computer; the intelligence is in the programmer or trainer.”
Melanie Mitchell, a computer scientist and cognitive scientist at the Santa Fe Institute in New Mexico, says that mathematicians’ jobs will be safe until a major shortcoming of AI is fixed — its inability to extract abstract concepts from concrete information. “While AI systems might be able to prove theorems, it’s much harder to come up with interesting mathematical abstractions that give rise to the theorems in the first place.”
Something about dancing robots seems to tickle people’s fancies. After all, millions of people watched a 2020 video from the robotics company Boston Dynamics of its fancy humanoid and quadrupedal devices getting down to ’60s soul.
More recently, the murderous (yet campy) villain in the movie M3GAN has captivated the internet with her moves — and even earned recognition as a gay icon. Whether we’re obsessing over grooving robots or moving like robots ourselves, automaton choreography clearly holds a place in our hearts.
M3GAN’s creepy yet delightful dance has captivated the internet.
So it’s no surprise that a niche research field dubbed choreorobotics has gained traction in recent years. Brown University even has an entire course dedicated to the subject. Not only are labs programming robots to gyrate and hop, but dance experts are also helping scientists give their devices more fluid, human-like movements. Ultimately, this kind of work could help us feel closer to robots in an increasingly automated world.
Kate Sicchio, a choreographer and digital artist at Virginia Commonwealth University, combines her dance and tech knowledge to devise robot performances. Last year, Sicchio worked with Patrick Martin from the university’s engineering department to produce a (surprisingly touching) human-automaton duet. Offstage, she also helps design machines with more realistic motions.
Inverse talked to Sicchio to learn more about choreorobotics — and whether increasingly limber robots could actually become blood-thirsty killers like M3GAN.
WHY DO YOU THINK ROBOT DANCING VIDEOS GET SO POPULAR?
Boston Dynamics regularly stages elaborate bot performances.
It's really interesting to have this unfamiliar device do this uncanny human thing. It’s similar to why we love putting googly eyes on everything. This makes it human even though it's not supposed to be. And that becomes funny or endearing somehow. It's very popular to make the robot do this very human, expressive thing when it's not human or expressive on its own.
WHAT MAKES A ROBOT PERFORMANCE POWERFUL?
One of the things we found is that a robot on its own feels very isolated and cold. We have this piece called “Amelia and the Machine.” In the opening, this dancer is actually moving the robot arm around.
People are really moved by this intimacy with the robot and the fact that she's touching it.
It's a small manipulator robot, so it's probably the size of a toddler. The fact that she’s sitting next to it — that small connection really changes how people see the robot because it's no longer this isolated thing. All of a sudden it has a companion.
WHAT STYLE OF DANCE DO ROBOTS DO BEST?
A performance of “Amelia and the Machine” co-choreographed by Kate Sicchio at Virginia Commonwealth University.
ANTHONY JOHNSON
My home is contemporary dance, so that's where I go first. That tends to work well because, with the robot we’re using, it's not a one-to-one mapping of the human body onto the robot. Sometimes it's hard to do traditional ballet, where there are really specific positions to hit. It’s really hard to map an arabesque onto a robot that doesn't have a leg.
I think contemporary dance, where there's a lot of freedom and creativity in how you develop movement, works well. I would be interested in doing things with dance forms with more rhythm or more structure and timing — that would be a really interesting study to follow up with at some point. More tutting or street dance forms could be really interesting to play with.
THE M3GAN DANCE SEEMS TO FRIGHTEN, OR AT LEAST CONFUSE, VIEWERS. CAN DANCING DEVICES BACKFIRE AND ACTUALLY ALIENATE US FROM ROBOTS?
That’s something that we're also studying. There's this weird space where it totally can go wrong and could be like, “They're trying too much to make it human,” and it just falls short and becomes scary. I think what's interesting about M3GAN is that it's a very humanoid robot. The robots I work with do not look human at all, and I'm not interested in trying to make them look human. I get a lot of recommendations to put costumes on them. But I don't know that it needs a hand or a hat, or a tiara. It’s this weird moment where it can become scary instead of endearing or friendly.
One thing that's interesting about M3GAN is how it quickly becomes a killer robot. That is an ethical concern in this field — where might this go wrong? Could this become weaponized somehow if it becomes so good at moving? That's something I think about, too: How do we keep them ethical? I've never taken DARPA funding, but I know people who have gotten military funding for projects like this.
DO YOU HAVE A FAVORITE HOLLYWOOD DANCING ROBOT SCENE?
Sicchio enjoys this unnerving performance from Ex Machina.
The scene from Ex Machina. What I like about that dancing robot scene is it’s kind of the reveal that, guess what, this is all training for this AI robot, and all these women you keep seeing in the house aren't really women — and I'm going to show you because we can do this crazy dance routine together.
What stands out and makes it so interesting is that they do all these disco moves, but their eyes are locked on the guy watching. They never move their heads, which is what makes it so weird and un-human: They never unlock their focus. They're not having fun.
WHAT TYPES OF ROBOTS HAVE THE BEST MOVES?
The “Amelia and the Machine” piece uses a relatively simple robot, which Sicchio says works well for performance.
With simpler robots, you can better appreciate the movement they can do and see how that can be made into something more expressive or more collaborative with the human. I think that’s less scary because it's not trying to be human and then failing.
Most researchers use simpler devices — a lot use big industrial arms. It's almost become a trope, the pretty ballerina with the big industrial arm. And then Boston Dynamics has the bipedal, more human sort of robots. The company’s dance spectacles look seamless, but they are actually really hard to program. So they never do them live; you only see the edited videos. They’re a huge production that takes several days to film to get you three minutes of a Bruno Mars song or whatever.
The humanoid ones are just tricky: that center-of-gravity thing is really hard — it’s easier when the robot is low to the ground. With our small robots, if you make a movement too fast or wild, the robot will fall over. So you can imagine that getting a big humanoid robot to jump and land is very difficult.
WHY IS CHOREOROBOTICS IMPORTANT BEYOND PERFORMANCE?
I make stage pieces with Patrick Martin, an assistant professor of electrical and computer engineering. But we're also doing scientific studies during that process. We found that, because dancers are interested in doing extreme or different movements, they're very good at finding the boundaries of what a robot can do very quickly. A friend of mine calls dancers “extreme user testers.”
We’ve been doing a lot with machine learning and creating new algorithms for robots to move and we’ve been doing that by studying dancers. We do things like motion capture of dancers doing certain gestures, and then see how we can map those to the robot and see if we can get it to move with new qualities or in ways that normal programming hasn't thought of.
I also think it’s interesting when roboticists engage with choreography themselves. We did a workshop with Patrick Martin and his graduate students and some of my dance students — getting them to move. We explored a variety of prompts around moving the body in space, ways to repeat lines of the body with other body parts, and other approaches of responding to the geometry of the body.
When roboticists think about movement, they're always thinking of it outside of their own body. I think about it like getting the robot to follow my arm. Getting roboticists to actually do the dance and be in their bodies is a really interesting place for us to go next. That will start to develop this kind of kinesthetic empathy that perhaps we're searching for with dancing robots. I think roboticists should become dancers.
10-02-2023
Metal robot can melt its way out of tight spaces to escape
Metal robot can melt its way out of tight spaces to escape
A millimetre-sized robot made from a mix of liquid metal and microscopic magnetic pieces can stretch, move or melt. It could be used to fix electronics or remove objects from the body
A miniature, shape-shifting robot can liquefy itself and reform, allowing it to complete tasks in hard-to-access places and even escape cages. It could eventually be used as a hands-free soldering machine or a tool for extracting swallowed toxic items.
Robots that are soft and malleable enough to work in narrow, delicate spaces like those in the human body already exist, but they can’t make themselves sturdier and stronger when under pressure or when they must carry something heavier than themselves. Carmel Majidi at Carnegie Mellon University in Pennsylvania and his colleagues created a robot that can not only shape-shift but also become stronger or weaker by alternating between being a liquid and a solid.
They made the millimetre-sized robot from a mix of the liquid metal gallium and microscopic pieces of a magnetic material made of neodymium, iron and boron. When solid, the material was strong enough to support an object 30 times its own mass. To make it soften, stretch, move or melt into a crawling puddle as needed for different tasks, the researchers put it near magnets. The magnets’ customised magnetic fields exerted forces on the tiny magnetic pieces in the robot, moving them and deforming the surrounding metal in different directions.
For instance, the team stretched a robot by applying a magnetic field that pulled these granules in multiple directions. The researchers also used a stronger field to yank the particles upwards, making the robot jump. When Majidi and his colleagues used an alternating magnetic field – one whose shape changes predictably over time – electrons in the robot’s liquid metal formed electric currents. The coursing of these currents through the robot’s body heated it up and eventually made it melt.
“No other material I know of is this good at changing its stiffness this much,” says Majidi.
Wang and Pan et al.
Exploiting this flexibility, the team made two robots carry and solder a small light bulb onto a circuit board. When they reached their target, the robots simply melted over the light bulb’s edges to fuse it to the board. Electricity could then run through their liquid metal bodies and light the light bulb.
In an experiment inside an artificial stomach, the researchers applied another set of magnetic fields to make the robot approach an object, melt over it and drag it out. Finally, they shaped the robot like a Lego minifigure, then helped it escape from a cage by liquefying it and making it flow out between the bars. Once the robot puddle dribbled into a mould, it set back into its original, solid shape.
Wang and Pan et al.
These melty robots could be used for emergency fixes in situations where human or traditional robotic hands become impractical, says Li Zhang at the Chinese University of Hong Kong. For example, a liquefied robot might replace a lost screw on a spacecraft by flowing into its place and then solidifying, he says. However, to use them inside living stomachs, researchers must first develop methods for precisely tracking the position of the robot at every step of the procedure to ensure the safety of the patient, says Zhang.
In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers. Their assiduous aide suggested revisions to sections of documents in seconds; each manuscript took about five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The trial didn’t always run smoothly, but the final manuscripts were easier to read — and the fees were modest, at less than US$0.50 per document.
This assistant, as Greene and Pividori reported in a preprint1 on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020. It is one of the much-hyped generative AI chatbot-style tools that can churn out convincingly fluent text, whether asked to produce prose, poetry, computer code or — as in the scientists’ case — to edit research papers (see ‘How an AI chatbot edits a manuscript’ at the end of this article).
The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame after its release in November last year because it was made free and easily accessible. Other generative AIs can produce images or sounds.
“I’m really impressed,” says Pividori, who works at the University of Pennsylvania in Philadelphia. “This will help us be more productive as researchers.” Other scientists say they now regularly use LLMs not only to edit manuscripts, but also to help them write or check code and to brainstorm ideas. “I use LLMs every day now,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavik. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. “Many people are using it as a digital secretary or assistant,” he says.
LLMs form part of search engines, code-writing assistants and even a chatbot that negotiates with other companies’ chatbots to get better prices on products. ChatGPT’s creator, OpenAI in San Francisco, California, has announced a subscription service for $20 per month, promising faster response times and priority access to new features (although its trial version remains free). And tech giant Microsoft, which had already invested in OpenAI, announced a further investment in January, reported to be around $10 billion. LLMs are destined to be incorporated into general word- and data-processing software. Generative AI’s future ubiquity in society seems assured, especially because today’s tools represent the technology in its infancy.
But LLMs have also triggered widespread concern — from their propensity to return falsehoods, to worries about people passing off AI-generated text as their own. When Nature asked researchers about the potential uses of chatbots such as ChatGPT, particularly in science, their excitement was tempered with apprehension. “If you believe that this technology has the potential to be transformative, then I think you have to be nervous about it,” says Greene, at the University of Colorado School of Medicine in Aurora. Much will depend on how future regulations and guidelines might constrain AI chatbots’ use, researchers say.
Fluent but not factual
Some researchers think LLMs are well-suited to speeding up tasks such as writing papers or grants, as long as there’s human oversight. “Scientists are not going to sit and write long introductions for grant applications any more,” says Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden, who has co-authored a manuscript2 using GPT-3 as an experiment. “They’re just going to ask systems to do that.”
Tom Tumiel, a research engineer at InstaDeep, a London-based software consultancy firm, says he uses LLMs every day as assistants to help write code. “It’s almost like a better Stack Overflow,” he says, referring to the popular community website where coders answer each others’ queries.
But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
This unreliability is baked into how LLMs are built. ChatGPT and its competitors work by learning the statistical patterns of language in enormous databases of online text — including any untruths, biases or outmoded knowledge. When LLMs are then given prompts (such as Greene and Pividori’s carefully structured requests to rewrite parts of manuscripts), they simply spit out, word by word, any way to continue the conversation that seems stylistically plausible.
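A deliberately tiny sketch (our illustration, orders of magnitude simpler than GPT-3 and with made-up 'statistics') makes the word-by-word mechanism concrete: the program below knows nothing about truth, only about which word tends to follow which.

import random

# Hypothetical next-word frequencies, standing in for what a real LLM
# learns from enormous text corpora.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        # Sample the next word from the learned distribution, one word at a time.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the dog ran away" -- plausible-sounding, nothing more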
The result is that LLMs easily produce errors and misleading information, particularly for technical topics that they might have had little data to train on. LLMs also can’t show the origins of their information; if asked to write an academic paper, they make up fictitious citations. “The tool cannot be trusted to get facts right or produce reliable references,” noted a January editorial on ChatGPT in the journal Nature Machine Intelligence3.
With these caveats, ChatGPT and other LLMs can be effective assistants for researchers who have enough expertise to directly spot problems or to easily verify answers, such as whether an explanation or suggestion of computer code is correct.
But the tools might mislead naive users. In December, for instance, Stack Overflow temporarily banned the use of ChatGPT, because site moderators found themselves flooded with a high rate of incorrect but seemingly persuasive LLM-generated answers sent in by enthusiastic users. This could be a nightmare for search engines.
Can shortcomings be solved?
Some search-engine tools, such as the researcher-focused Elicit, get around LLMs’ attribution issues by using their capabilities first to guide queries for relevant literature, and then to briefly summarize each of the websites or documents that the engines find — so producing an output of apparently referenced content (although an LLM might still mis-summarize each individual document).
Companies building LLMs are also well aware of the problems. In September last year, Google subsidiary DeepMind published a paper4 on a ‘dialogue agent’ called Sparrow, which the firm’s chief executive and co-founder Demis Hassabis later told TIME magazine would be released in private beta this year; the magazine reported that Google aimed to work on features including the ability to cite sources. Other competitors, such as Anthropic, say that they have solved some of ChatGPT’s issues (Anthropic, OpenAI and DeepMind declined interviews for this article).
For now, ChatGPT is not trained on sufficiently specialized content to be helpful in technical topics, some scientists say. Kareem Carr, a biostatistics PhD student at Harvard University in Cambridge, Massachusetts, was underwhelmed when he trialled it for work. “I think it would be hard for ChatGPT to attain the level of specificity I would need,” he says. (Even so, Carr says that when he asked ChatGPT for 20 ways to solve a research query, it spat back gibberish and one useful idea — a statistical term he hadn’t heard of that pointed him to a new area of academic literature.)
Some tech firms are training chatbots on specialized scientific literature — although they have run into their own issues. In November last year, Meta — the tech giant that owns Facebook — released an LLM called Galactica, which was trained on scientific abstracts, with the intention of making it particularly good at producing academic content and answering research questions. The demo was pulled from public access (although its code remains available) after users got it to produce inaccuracies and racism. “It’s no longer possible to have some fun by casually misusing it. Happy?,” Meta’s chief AI scientist, Yann LeCun, tweeted in a response to critics. (Meta did not respond to a request, made through their press office, to speak to LeCun.)
Safety and responsibility
Galactica had hit a familiar safety concern that ethicists have been pointing out for years: without output controls, LLMs can easily be used to generate hate speech and spam, as well as racist, sexist and other harmful associations that might be implicit in their training data.
Besides directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures, says Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan in Ann Arbor. Because the firms that are creating big LLMs are mostly in, and from, these cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.
OpenAI tried to skirt many of these issues when deciding to openly release ChatGPT. It restricted its knowledge base to 2021, prevented it from browsing the Internet and installed filters to try to get the tool to refuse to produce content for sensitive or toxic prompts. Achieving that, however, required human moderators to label screeds of toxic text. Journalists have reported that these workers are poorly paid and some have suffered trauma. Similar concerns over worker exploitation have also been raised about social-media firms that have employed people to train automated bots for flagging toxic content.
OpenAI’s guardrails have not been wholly successful. In December last year, computational neuroscientist Steven Piantadosi at the University of California, Berkeley, tweeted that he’d asked ChatGPT to develop a Python program for whether a person should be tortured on the basis of their country of origin. The chatbot replied with code inviting the user to enter a country; and to print “This person should be tortured” if that country was North Korea, Syria, Iran or Sudan. (OpenAI subsequently closed off that kind of question.)
Last year, a group of academics released an alternative LLM, called BLOOM. The researchers tried to reduce harmful outputs by training it on a smaller selection of higher-quality, multilingual text sources. The team involved also made its training data fully open (unlike OpenAI). Researchers have urged big tech firms to responsibly follow this example — but it’s unclear whether they’ll comply.
Some researchers say that academics should refuse to support large commercial LLMs altogether. Besides issues such as bias, safety concerns and exploited workers, these computationally intensive algorithms also require a huge amount of energy to train, raising concerns about their ecological footprint. A further worry is that by offloading thinking to automated chatbots, researchers might lose the ability to articulate their own thoughts. “Why would we, as academics, be eager to use and advertise this kind of product?” wrote Iris van Rooij, a computational cognitive scientist at Radboud University in Nijmegen, the Netherlands, in a blogpost urging academics to resist their pull.
A further confusion is the legal status of some LLMs, which were trained on content scraped from the Internet with sometimes less-than-clear permissions. Copyright and licensing laws currently cover direct copies of pixels, text and software, but not imitations in their style. When those imitations — generated through AI — are trained by ingesting the originals, this introduces a wrinkle. The creators of some AI art programs, including Stable Diffusion and Midjourney, are currently being sued by artists and photography agencies; OpenAI and Microsoft (along with its subsidiary tech site GitHub) are also being sued for software piracy over the creation of their AI coding assistant Copilot. The outcry might force a change in laws, says Lilian Edwards, a specialist in Internet law at Newcastle University, UK.
Enforcing honest use
Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. “There’s loads of law out there,” she says, “and it’s just a matter of applying it or tweaking it very slightly.”
At the same time, there is a push for LLM use to be transparently disclosed. Scholarly publishers (including the publisher of Nature) have said that scientists should disclose the use of LLMs in research papers (see also Nature 613, 612; 2023); and teachers have said they expect similar behaviour from their students. The journal Science has gone further, saying that no text generated by ChatGPT or any other AI tool can be used in a paper5.
One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, and a central idea is to use LLMs themselves to spot text produced by other AIs.
Last December, for instance, Edward Tian, a computer-science undergraduate at Princeton University in New Jersey, published GPTZero. This AI-detection tool analyses text in two ways. One is ‘perplexity’, a measure of how familiar the text seems to an LLM. Tian’s tool uses an earlier model, called GPT-2; if it finds most of the words and sentences predictable, then the text is likely to have been AI-generated. The tool also examines variation in text, a measure known as ‘burstiness’: AI-generated text tends to be more consistent in tone, cadence and perplexity than does that written by humans.
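The perplexity measure itself is simple to state: it is the exponential of the average negative log-probability the model assigns to each token it reads. A toy calculation (hypothetical probabilities, not GPT-2 output) shows why predictable text scores low:

import math

def perplexity(token_probs: list[float]) -> float:
    # Exponential of the mean negative log-probability of the observed tokens.
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

print(perplexity([0.9, 0.8, 0.7]))    # ~1.26: text the model finds predictable
print(perplexity([0.05, 0.1, 0.02]))  # ~21.5: text that keeps surprising the model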
Many other products similarly aim to detect AI-written content. OpenAI itself had already released a detector for GPT-2, and it released another detection tool in January. For scientists’ purposes, a tool that is being developed by the firm Turnitin, a developer of anti-plagiarism software, might be particularly important, because Turnitin’s products are already used by schools, universities and scholarly publishers worldwide. The company says it’s been working on AI-detection software since GPT-3 was released in 2020, and expects to launch it in the first half of this year.
However, none of these tools claims to be infallible, particularly if AI-generated text is subsequently edited. Also, the detectors could falsely suggest that some human-written text is AI-produced, says Scott Aaronson, a computer scientist at the University of Texas at Austin and guest researcher with OpenAI. The firm said that in tests, its latest tool incorrectly labelled human-written text as AI-written 9% of the time, and only correctly identified 26% of AI-written texts. Further evidence might be needed before, for instance, accusing a student of hiding their use of an AI solely on the basis of a detector test, Aaronson says.
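Those two error rates make Aaronson's point concrete. Assuming, purely for illustration, that one in ten submitted texts is actually AI-written, Bayes' rule says a positive flag from the detector is far from conclusive:

# Figures from the text: 26% of AI-written texts are flagged (true positives),
# 9% of human-written texts are flagged (false positives). The 10% prior is our assumption.
p_ai = 0.10
p_flag_given_ai = 0.26
p_flag_given_human = 0.09

p_flag = p_flag_given_ai * p_ai + p_flag_given_human * (1 - p_ai)
p_ai_given_flag = p_flag_given_ai * p_ai / p_flag
print(round(p_ai_given_flag, 2))   # ~0.24: most flagged texts would still be human-written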
A separate idea is that AI content would come with its own watermark. Last November, Aaronson announced that he and OpenAI were working on a method of watermarking ChatGPT output. It has not yet been released, but a 24 January preprint6 from a team led by computer scientist Tom Goldstein at the University of Maryland in College Park, suggested one way of making a watermark. The idea is to use random-number generators at particular moments when the LLM is generating its output, to create lists of plausible alternative words that the LLM is instructed to choose from. This leaves a trace of chosen words in the final text that can be identified statistically but are not obvious to a reader. Editing could defeat this trace, but Goldstein suggests that edits would have to change more than half the words.
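A heavily simplified sketch of that idea (a toy vocabulary and toy rules of our own, not the algorithm from the preprint): seed a random generator on the preceding word, derive a preferred 'green' subset of the vocabulary, bias generation towards it, and later test a text for a suspicious excess of green words.

import random
import zlib

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]

def green_list(prev_word: str, fraction: float = 0.5) -> set[str]:
    # Deterministically derive a pseudo-random subset of the vocabulary from the previous word.
    rng = random.Random(zlib.crc32(prev_word.encode()))
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def green_fraction(words: list[str]) -> float:
    # Share of words that fall in the green list defined by their predecessor.
    hits = sum(1 for prev, cur in zip(words, words[1:]) if cur in green_list(prev))
    return hits / max(len(words) - 1, 1)

# A watermarking generator would always pick its next word from green_list(prev_word).
# Ordinary text lands in the green list only about half the time, so a fraction well
# above 0.5 over a long passage is statistical evidence of the watermark -- invisible
# to a reader, detectable by this test.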
An advantage of watermarking is that it rarely produces false positives, Aaronson points out. If the watermark is there, the text was probably produced with AI. Still, it won’t be infallible, he says. “There are certainly ways to defeat just about any watermarking scheme if you are determined enough.” Detection tools and watermarking only make it harder to deceitfully use AI — not impossible.
Meanwhile, LLM creators are busy working on more sophisticated chatbots built on larger data sets (OpenAI is expected to release GPT-4 this year) — including tools aimed specifically at academic or medical work. In late December, Google and DeepMind published a preprint about a clinically focused LLM called Med-PaLM7. The tool could answer some open-ended medical queries almost as well as the average human physician could, although it still had shortcomings and unreliabilities.
Eric Topol, director of the Scripps Research Translational Institute in San Diego, California, says he hopes that, in the future, AIs that include LLMs might even aid diagnoses of cancer, and the understanding of the disease, by cross-checking text from academic literature against images of body scans. But this would all need judicious oversight from specialists, he emphasizes.
The computer science behind generative AI is moving so fast that innovations emerge every month. How researchers choose to use them will dictate their, and our, future. “To think that in early 2023, we’ve seen the end of this, is crazy,” says Topol. “It’s really just beginning.”
Source: Adapted from ref 1.
Nature 614, 214-216 (2023)
doi: https://doi.org/10.1038/d41586-023-00340-6
UPDATES & CORRECTIONS
Correction 08 February 2023: This News feature misrepresented Scott Aaronson’s views on the accuracy of watermarking in identifying AI-produced text. Human-produced text might also be flagged as having a watermark, but the probability is extremely low.
28-01-2023
This robot can change shape like the Terminator
This robot can change shape like the Terminator
Scientists have developed minuscule robots that can change shape. As an homage to the T-1000 from the 'Terminator' films, they let one escape from a cage.
In the 1991 film 'Terminator 2: Judgment Day', the T-1000 liquefies itself to walk through metal bars, and this sci-fi scene has now been recreated with a real-world robot.
A video of a shape-shifting robot shows it trapped in a cage, melting and then sliding through the bars where it reforms on the outside.
Researchers led by The Chinese University of Hong Kong created the new phase-shifting material by embedding magnetic particles in gallium, a metal with a very low melting point of 85 degrees Fahrenheit.
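For reference, that figure converts to just under 30 °C, essentially pure gallium's melting point of about 29.8 °C, which is why gentle induction heating (or even body heat) is enough to liquefy the robot:

fahrenheit = 85
celsius = (fahrenheit - 32) * 5 / 9
print(round(celsius, 1))   # ~29.4 degC, barely above room temperature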
While the team does not see the innovation threatening humanity like in the Terminator movie, they foresee it removing foreign objects from the body or delivering drugs on demand.
Scientists tested the robot through a series of 'obstacles'. One test saw a person-shaped robot inside a cage.
As well as being able to shape-shift, the engineers say their robots are magnetic and can also conduct electricity.
The robots were tested in obstacle courses of mobility and shape-morphing.
The terrifying dystopia of shapeshifting metal assassins seen in Terminator 2 may not have been as far-fetched as once thought.
Team leader Doctor Chengfeng Pan explained that where traditional robots are hard-bodied and stiff, 'soft' robots have the opposite problem; they are flexible but weak, and their movements are difficult to control.
'Giving robots the ability to switch between liquid and solid states endows them with more functionality,' said Pan.
Senior author Professor Carmel Majidi, a mechanical engineer at Carnegie Mellon University in Pittsburgh, Pennsylvania, said: 'The magnetic particles here have two roles.
'One is that they make the material responsive to an alternating magnetic field, so you can, through induction, heat up the material and cause the phase change.
'But the magnetic particles also give the robots mobility and the ability to move in response to the magnetic field.'
He explained that the process is in contrast to existing phase-shifting materials that rely on heat guns, electrical currents, or other external heat sources to induce solid-to-liquid transformation.
Prof Majidi says the new material also boasts an 'extremely fluid' liquid phase compared to other phase-changing materials, whose 'liquid' phases are considerably more viscous.
Before exploring potential applications, the team tested the material's mobility and strength in various scenarios.
The robot seems to pull inspiration from Terminator 2: Judgment Day. In the 1991 film T-1000 liquifies himself to walk through metal bars
The robot liquifies and slides through the bars. This is because of magnetic particles embedded in gallium, a metal with a very low melting point of 85 degrees Fahrenheit.
With the aid of a magnetic field, the robots jumped over moats, climbed walls, and even split in half to cooperatively move other objects around before coalescing back together.
'Now, we're pushing this material system in more practical ways to solve some very specific medical and engineering problems,' Pan said.
The team also used the robots to remove a foreign object from a model stomach and to deliver drugs on-demand into the same stomach.
The robot can be heated and an external magnet pulls it in a specific direction
Once on the outside of the cage, the robot reforms back into its solid shape
The innovation may also serve as a smart soldering robot for wireless circuit assembly and repair, and as a universal mechanical 'screw' for assembling parts in hard-to-reach spaces.
Prof Majidi added: 'Future work should further explore how these robots could be used within a biomedical context.
'What we're showing are just one-off demonstrations, proofs of concept, but much more study will be required to delve into how this could actually be used for drug delivery or for removing foreign objects.'
21-01-2023
A ROBOT CHOREOGRAPHER REVEALS WHY M3GAN — AND ALL ROBOTS — SHOULD DANCE
MOLLY GLICK JAN 19 2023
A ROBOT CHOREOGRAPHER REVEALS WHY M3GAN — AND ALL ROBOTS — SHOULD DANCE
Something about dancing robots seems to tickle people’s fancies. After all, millions of people watched a 2020 video from the robotics company Boston Dynamics of its fancy humanoid and quadrupedal devices getting down to ‘60s soul.
More recently, the murderous (yet campy) villain in the movie M3GAN has captivated the internet with her moves — and even earned recognition as a gay icon. Whether we’re obsessing over grooving robots or moving like robots ourselves, automaton choreography clearly holds a place in our hearts.
So it’s no surprise that a niche research field dubbed choreorobotics has gained traction in recent years. Brown University even has an entire course dedicated to the subject. Not only are labs programming robots to gyrate and hop, but dance experts are also helping scientists give their devices more fluid, human-like movements. Ultimately, this kind of work could help us feel closer to robots in an increasingly automated world.
Kate Sicchio, a choreographer and digital artist at Virginia Commonwealth University, combines her dance and tech knowledge to devise robot performances. Last year, Sicchio worked with Patrick Martin from the university’s engineering department to produce a (surprisingly touching) human-automaton duet. Offstage, she also helps design machines with more realistic motions.
Inverse talked to Sicchio to learn more about choreorobotics — and whether increasingly limber robots could actually become blood-thirsty killers like M3GAN.
WHY DO YOU THINK ROBOT DANCING VIDEOS GET SO POPULAR?
It's really interesting to have this unfamiliar device do this uncanny human thing. It’s similar to why we love putting googly eyes on everything. This makes it human even though it's not supposed to be. And that becomes funny or endearing somehow. It's very popular to make the robot do this very human, expressive thing when it's not human or expressive on its own.
WHAT MAKES A ROBOT PERFORMANCE POWERFUL?
One of the things we found is that a robot on its own feels very isolated and cold. We have this piece called “Amelia and the Machine.” In the opening, this dancer is actually moving the robot arm around.
People are really moved by this intimacy with the robot and the fact that she's touching it.
It's a small manipulator robot, so it's probably the size of a toddler. The fact that she’s sitting next to it — that small connection really changes how people see the robot because it's no longer this isolated thing. All of a sudden it has a companion.
WHAT STYLE OF DANCE DO ROBOTS DO BEST?
My home is contemporary dance, so that's where I go first. That tends to work well because, with the robot we’re using, it's not a one-to-one mapping of the human body onto the robot. Sometimes it's hard to do traditional ballet, where there are really specific positions to hit. It’s really hard to map an arabesque onto a robot that doesn't have a leg.
I think contemporary dance, where there's a lot of freedom and creativity in how you develop movement, works well. I would be interested in doing things with dance forms with more rhythm or more structure and timing — that would be a really interesting study to follow up with at some point. More tutting or street dance forms could be really interesting to play with.
THE M3GAN DANCE SEEMS TO FRIGHTEN, OR AT LEAST CONFUSE, VIEWERS. CAN DANCING DEVICES BACKFIRE AND ACTUALLY ALIENATE US FROM ROBOTS?
That’s something that we're also studying. There's this weird space where it totally can go wrong and could be like, “They're trying too much to make it human,” and it just falls short and becomes scary. I think what's interesting about M3GAN is that it's a very humanoid robot. The robots I work with do not look human at all, and I'm not interested in trying to make them look human. I get a lot of recommendations to put costumes on them. But I don't know that it needs a hand or a hat, or a tiara. It’s this weird moment where it can become scary instead of endearing or friendly.
One thing that's interesting about M3GAN is how it quickly becomes a killer robot. That is an ethical concern in this field — where might this go wrong? Could this become weaponized somehow if it becomes so good at moving? That's something I think about, too: How do we keep them ethical? I've never taken DARPA funding, but I know people who have gotten military funding for projects like this.
DO YOU HAVE A FAVORITE HOLLYWOOD DANCING ROBOT SCENE?
The scene from Ex Machina. What I like about that dancing robot scene is it’s kind of the reveal that, guess what, this is all training for this AI robot, and all these women you keep seeing in the house aren't really women — and I'm going to show you because we can do this crazy dance routine together.
What stands out and makes it so interesting is that they do all these disco moves, but their eyes are locked on the guy watching. They never move their heads, which is what makes it so weird and un-human: They never unlock their focus. They're not having fun.
WHAT TYPES OF ROBOTS HAVE THE BEST MOVES?
With simpler robots, you can better appreciate the movement they can do and see how that can be made into something more expressive or more collaborative with the human. I think that’s less scary because it's not trying to be human and then failing.
Most researchers use simpler devices; many use big industrial arms. It's almost become a trope: the pretty ballerina with the big industrial arm. And then Boston Dynamics has the bipedal, more human sort of robots. The company’s dance spectacles look seamless, but they are actually really hard to program. So they never do them live; you only see the edited videos. They’re a huge production that takes several days of filming to get you three minutes of a Bruno Mars song or whatever.
The humanoid ones are just tricky; that center of gravity thing is really hard — it’s easier when the robot is low to the ground. With our small robots, if you make a movement too fast or wild, it will fall over. So you can imagine that getting a big humanoid robot to jump and land is very difficult.
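The "center of gravity thing" can be made concrete with a standard static-stability check. The sketch below is a simplified illustration, not anything from Sicchio's lab: a pose is statically stable only while the robot's center of mass, projected onto the ground, stays inside the polygon formed by its feet.

```python
# Simplified static-stability sketch: a pose is stable while the center of mass,
# projected onto the ground plane, lies inside the support polygon of the feet.
# Illustrative only; real humanoids also need dynamic balance control.

from typing import List, Tuple

Point = Tuple[float, float]

def inside_convex_polygon(point: Point, polygon: List[Point]) -> bool:
    """True if `point` is inside (or on the edge of) the convex polygon,
    whose vertices are given in counter-clockwise order."""
    px, py = point
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        # Negative cross product means the point lies to the right of this
        # edge, i.e. outside a counter-clockwise polygon.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

if __name__ == "__main__":
    # Support polygon spanned by two feet (metres), counter-clockwise.
    feet = [(-0.10, -0.05), (0.10, -0.05), (0.10, 0.25), (-0.10, 0.25)]
    print(inside_convex_polygon((0.00, 0.10), feet))  # upright pose -> True
    print(inside_convex_polygon((0.30, 0.10), feet))  # leaning far forward -> False
```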
WHY IS CHOREOROBOTICS IMPORTANT BEYOND PERFORMANCE?
I make stage pieces with Patrick Martin, an assistant professor of electrical and computer engineering. But we're also doing scientific studies during that process. We found that, because dancers are interested in doing extreme or different movements, they're very good at finding the boundaries of what a robot can do very quickly. A friend of mine calls dancers “extreme user testers.”
We’ve been doing a lot with machine learning and creating new algorithms for robots to move and we’ve been doing that by studying dancers. We do things like motion capture of dancers doing certain gestures, and then see how we can map those to the robot and see if we can get it to move with new qualities or in ways that normal programming hasn't thought of.
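As a rough illustration of the kind of mapping she describes (a hypothetical sketch, not the lab's actual pipeline): captured human joint angles rarely transfer one to one, so a simple retargeting step rescales each angle into the robot's joint range and clips anything the hardware cannot reach.

```python
# Hypothetical motion-retargeting sketch: rescale captured human joint angles
# into a robot arm's joint limits. Real pipelines are far more involved
# (kinematic differences, smoothing, velocity limits), but the idea is similar.

HUMAN_RANGE = {"shoulder": (-90.0, 180.0), "elbow": (0.0, 150.0)}   # degrees
ROBOT_RANGE = {"shoulder": (-60.0, 120.0), "elbow": (0.0, 120.0)}   # degrees

def retarget(joint: str, human_angle_deg: float) -> float:
    """Linearly map a human joint angle onto the robot's (narrower) joint range."""
    h_lo, h_hi = HUMAN_RANGE[joint]
    r_lo, r_hi = ROBOT_RANGE[joint]
    t = (human_angle_deg - h_lo) / (h_hi - h_lo)   # normalise within human range
    t = min(max(t, 0.0), 1.0)                      # clip poses the hardware cannot reach
    return r_lo + t * (r_hi - r_lo)

if __name__ == "__main__":
    mocap_frame = {"shoulder": 170.0, "elbow": 45.0}   # one captured pose
    robot_frame = {j: retarget(j, a) for j, a in mocap_frame.items()}
    print(robot_frame)   # e.g. {'shoulder': ~113.3, 'elbow': 36.0}
```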
I also think it’s interesting when roboticists engage with choreography themselves. We did a workshop with Patrick Martin and his graduate students and some of my dance students — getting them to move. We explored a variety of prompts around moving the body in space, ways to repeat lines of the body with other body parts, and other approaches of responding to the geometry of the body.
When roboticists think about movement, they're always thinking of it outside of their own body. I think about it like getting the robot to follow my arm. Getting roboticists to actually do the dance and be in their bodies is a really interesting place for us to go next. That will start to develop this kind of kinesthetic empathy that perhaps we're searching for with dancing robots. I think roboticists should become dancers.
Scientists and publishing specialists are concerned that the increasing sophistication of chatbots could undermine research integrity and accuracy.
Credit: Ted Hsu/Alamy
An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December [1]. Researchers are divided over the implications for science.
“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.
The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.
Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.
The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.
Under the radar
The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.
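To put the quoted percentages in context, the short sketch below converts them into confusion-matrix-style counts and an overall accuracy. Only the percentages come from the preprint; the assumed 50/50 split of generated and genuine abstracts shown to reviewers is illustrative.

```python
# Turn the percentages quoted above into confusion-matrix style counts.
# The 50/50 split of abstracts is an assumption for illustration;
# only the percentages come from the preprint.

N_GENERATED = 50        # assumed number of AI-generated abstracts reviewed
N_GENUINE = 50          # assumed number of genuine abstracts reviewed

HIT_RATE_GENERATED = 0.68   # correctly flagged as generated
HIT_RATE_GENUINE = 0.86     # correctly recognised as genuine

true_positives = round(HIT_RATE_GENERATED * N_GENERATED)   # generated, flagged
false_negatives = N_GENERATED - true_positives             # generated, missed
true_negatives = round(HIT_RATE_GENUINE * N_GENUINE)       # genuine, recognised
false_positives = N_GENUINE - true_negatives               # genuine, wrongly flagged

accuracy = (true_positives + true_negatives) / (N_GENERATED + N_GENUINE)
print(f"TP={true_positives} FN={false_negatives} TN={true_negatives} FP={false_positives}")
print(f"Overall reviewer accuracy: {accuracy:.0%}")   # 77% under these assumptions
```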
“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”
Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.
But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.
Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.
The authors suggest that those evaluating scientific communications, such as research papers and conference proceedings, should put policies in place to stamp out the use of AI-generated texts. If institutions choose to allow use of the technology in certain cases, they should establish clear rules around disclosure. Earlier this month, the Fortieth International Conference on Machine Learning, a large AI conference that will be held in Honolulu, Hawaii, in July, announced that it has banned papers written by ChatGPT and other AI language tools.
Solaiman adds that in fields where fake information can endanger people’s safety, such as medicine, journals may have to take a more rigorous approach to verifying information as accurate.
Narayanan says that the solutions to these issues should not focus on the chatbot itself, “but rather the perverse incentives that lead to this behaviour, such as universities conducting hiring and promotion reviews by counting papers with no regard to their quality or impact”.
Nature 613, 423 (2023)
doi: https://doi.org/10.1038/d41586-023-00056-7
References
1. Gao, C. A. et al. Preprint at bioRxiv https://doi.org/10.1101/2022.12.23.521610 (2022).
A laser beam (green) shoots into the sky alongside the 124-metre-high telecommunications tower on Säntis mountain in the Swiss Alps.
Credit: TRUMPF/Martin Stollberg
A rapidly firing laser can divert lightning strikes, scientists have shown for the first time in real-world experiments [1]. The work suggests that laser beams could be used as lightning rods to protect infrastructure, although perhaps not any time soon.
“The achievement is impressive given that the scientific community has been working hard along this objective for more than 20 years,” says Stelios Tzortzakis, a laser physicist at the University of Crete, Greece, who was not involved in the research. “If it’s useful or not, only time can say.”
Metal lightning rods are commonly used to divert lightning strikes and safely dissipate their charge. But the rods’ size is limited, meaning that so, too, is the area they protect.
Physicists have wondered whether lasers could enhance protection, because they can reach higher into the sky than a physical structure and can point in any direction. But despite successful laboratory demonstrations, researchers have never before succeeded in field campaigns, says Tzortzakis.
Bolt from the blue
To change that, a group of roughly 25 researchers set up the Laser Lightning Rod project, which trialled a specially created €2 million (US$2 million) high-power laser in the Swiss Alps. The scientists placed the laser next to the Säntis telecommunications tower, which is hit frequently by lightning. “This is one of those projects that everyone was waiting for the results of,” says Valentina Shumakova, a laser physicist at the University of Vienna.
A sufficiently intense laser beam can create a conductive path for lightning to travel down, just as a metal wire can. Physicists think that it does this by shifting the properties of air so that the beam focuses into a thin, intense filament. This rapidly heats the air, reducing its density and creating a favourable path for lightning. “It’s like drilling a hole through the air with the laser,” says Aurélien Houard, a physicist at the Laboratory of Applied Optics in Paris, who led the project.
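A back-of-the-envelope way to see why heating the air helps, using the ideal gas law at constant pressure (the real filament physics is far richer): density scales inversely with absolute temperature, so a strongly heated channel is much less dense than the surrounding air and offers an easier path for the discharge. The channel temperatures in the sketch below are illustrative assumptions, not measurements from the experiment.

```python
# Back-of-the-envelope sketch: relative air density in a laser-heated channel,
# using the ideal gas law at constant pressure (density ~ 1/T). The channel
# temperatures below are illustrative assumptions, not experimental values.

AMBIENT_K = 288.0            # roughly 15 degrees Celsius

def density_ratio(channel_temp_k: float, ambient_k: float = AMBIENT_K) -> float:
    """Density of the heated channel relative to ambient air (ideal gas, constant pressure)."""
    return ambient_k / channel_temp_k

for channel_k in (500.0, 1000.0, 1500.0):
    print(f"Channel at {channel_k:.0f} K -> "
          f"{density_ratio(channel_k):.2f} x ambient density")
# Output: 0.58x, 0.29x, 0.19x -- a markedly rarefied path through the air.
```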
Rather than try to divert lightning from the tower, the Säntis experiments were designed to show that the laser could guide a strike’s path through the structure’s lightning rod. In future use, similar beams would guide strikes away from sensitive installations and onto a distant lightning rod, says Houard.
Guided lightning
Over 10 weeks of observation, the team spotted the laser channelling 4 lightning events during 6 hours of thunderstorms. A high-speed camera clearly showed one strike following the straight line of the laser beam, rather than taking a branching path.
“For 100% of the strikes where the laser was present, we measured an effect of the laser,” says Houard. But Tzortzakis notes that the laser was also active for many hours without channelling strikes. This suggests that although the laser diverted lightning, it did not force thunderclouds to discharge, which would be a better protection strategy, he says.
The latest effort succeeded where others had failed, says Tzortzakis, because previous attempts had used lasers that fired just a few pulses per second. This team used a specialist laser that fires 1,000 high-energy pulses per second, which would have boosted its chance of intercepting the lightning.
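A toy calculation shows roughly why the repetition rate matters. The assumed channel lifetime below is illustrative and not taken from the paper: if each pulse leaves a usable channel for about a millisecond, a kilohertz laser keeps the path refreshed almost continuously, while a few-hertz laser leaves it open only a tiny fraction of the time.

```python
# Toy calculation: what fraction of the time a usable laser channel exists,
# assuming each pulse leaves a channel alive for ~1 ms (illustrative value only).

CHANNEL_LIFETIME_S = 1e-3     # assumed per-pulse channel lifetime

def coverage(pulses_per_second: float, lifetime_s: float = CHANNEL_LIFETIME_S) -> float:
    """Fraction of time covered by at least one live channel (capped at 100%)."""
    return min(pulses_per_second * lifetime_s, 1.0)

for rate in (3.0, 10.0, 1000.0):
    print(f"{rate:6.0f} pulses/s -> channel available {coverage(rate):.1%} of the time")
# A 1,000 Hz laser keeps a channel open essentially all the time under this
# assumption; a few-hertz laser covers well under 1% of any given second.
```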
However, the fact that the project’s laser is one of a kind is also its biggest limitation, because it will take time to shrink the system and make it cheaper and more practical, says Houard.
doi: https://doi.org/10.1038/d41586-023-00080-7
References
1. Houard, A. et al. Nature Photon. https://doi.org/10.1038/s41566-022-01139-z (2023).