The purpose of this blog is to provide an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article if it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife, Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
An interesting address?
UFOs or UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITY, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that conduct in-depth research, even if they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only the ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Up to Date!
Do you want the latest news about UFOs, spaceflight, archaeology, and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or would you like to know more? Then do not hesitate to contact us! Together we will unravel the mysteries of the sky and beyond.
13-12-2017
Scientists turn DNA into virtually any 3D shape imaginable
Scientists have made a significant advancement in shaping DNA — they can now twist and turn the building blocks of life into just about any shape. In order to demonstrate their technique, they have shaped DNA into doughnuts, cubes, a teddy bear, and even the Mona Lisa.
New DNA origami techniques can build virus-size objects of virtually any shape.
Image credits: Wyss Institute.
Scientists have long desired to make shapes out of DNA. The field of research emerged in the 1980s, but things really took off in 2006, with the advent of a technique called DNA origami. As the name implies, it involves folding DNA into a multitude of shapes, much like the traditional Japanese paper-folding art. The process starts with a long strand of DNA, called the scaffold, which carries a desired sequence of the nucleotides dubbed A, C, G, and T. Then, short complementary strands of DNA called staples are matched to patches of the scaffold, latching on to their designated targets and folding the scaffold into shape. In 2012, a different technique emerged — one which didn’t use scaffolds or large strands of DNA, but rather small strands that fit together like LEGO pieces.
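To make the staple idea concrete, here is a toy sketch of how a staple finds its patch on a scaffold through base-pair complementarity. This is my own illustration, not an actual origami design tool; the sequences and function names are invented for the example.

```python
# Toy illustration of the scaffold-and-staples idea: a staple "latches on"
# wherever its reverse complement appears in the long scaffold strand.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """The sequence a strand will base-pair with, read 5' to 3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def staple_binding_sites(scaffold: str, staple: str):
    """Return every scaffold position the staple can base-pair with."""
    target = reverse_complement(staple)
    return [i for i in range(len(scaffold) - len(target) + 1)
            if scaffold[i:i + len(target)] == target]

scaffold = "ATGCGTACGTTAGCATGCGT"          # made-up 20-base scaffold
staple = reverse_complement("TAGCAT")      # built to bind the TAGCAT patch
print(staple_binding_sites(scaffold, staple))  # -> [10]
```

In a real origami design, hundreds of such staples pin distant patches of the scaffold together, which is what forces the long strand to fold into the target shape.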
Both techniques became wildly popular with various research groups. Scientists started to coat DNA objects with plastics, metals, and other materials to make electronic devices and even computer components. But there was always a limitation: the size of conventional DNA objects was limited to about 100 nanometers. There was just no way to make them bigger without them becoming floppy or unstable in the process. Well, not anymore.
New DNA origami techniques can make far larger objects, such as this dodecahedron composed of 1.8 million DNA bases.
Image credits: K. Wagenbauer et al, Nature, Vol. 551, 2017.
Groups in Germany, Massachusetts, and California all report that they’ve made dramatic breakthroughs in DNA origami, creating rigid modules with preprogrammed shapes that can assemble with other copies to build specific shapes — and they have a variety of shapes to prove it.
A German team, led by Hendrik Dietz, a biophysicist at the Technical University of Munich, created a miniature doughnut about 300 nanometers across. A Massachusetts team led by Peng Yin, a systems biologist at Harvard University’s Wyss Institute in Boston, created complex structures with both blocks and holes. With this technique, they developed cut-out shapes like an hourglass and a teddy bear. The third group, led by Lulu Qian, a biochemist at the California Institute of Technology in Pasadena, developed origami-based pixels that appear in different shades when viewed through an atomic force microscope. Taken together, these structures represent a new age for DNA origami.
Furthermore, it’s only a matter of time before things get even more complex. Yin’s group actually had to stop making more complex shapes because they ran out of money. Synthesizing the DNA comes at the exorbitant price of $100,000 per gram. However, Dietz and his collaborators believe they could dramatically lower the price by coaxing viruses to replicate the strands inside bacterial hosts.
“Now, there are so many ways to be creative with these tools,” Yin concludes.
The technique isn’t just about creating pretty DNA shapes. Someday, this approach could lead to a novel generation of electronics, photonics, nanoscale machines, and possibly disease detection, Robert F. Service writes for Science. The prospect of using DNA origami to detect cancer biomarkers and other biological targets could open exciting avenues for research and help revolutionize cancer detection.
Journal References:
Klaus F. Wagenbauer, Christian Sigl & Hendrik Dietz. Gigadalton-scale shape-programmable DNA assemblies. doi:10.1038/nature24651.
Grigory Tikhomirov, Philip Petersen & Lulu Qian. Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns. doi:10.1038/nature24655.
Luvena L. Ong et al. Programmable self-assembly of three-dimensional nanostructures from 10,000 unique components. doi:10.1038/nature24648.
Florian Praetorius et al. Biotechnological mass production of DNA origami. doi:10.1038/nature24650.
Category: SF gadgets / Robotics and A.I. / Artificial Intelligence (E, F and NL)
12-12-2017
Artificial Intelligence: When You Can't Believe Your Eyes
Artificial intelligence is coming on in leaps and bounds, and while many embrace the technology, others are wary of it. For those who are wary, Nvidia may cause some concern, as the company has come up with a way for AI to copy reality. An image-translation artificial intelligence could have people guessing whether anything they see online is real or fake.
Nvidia Can Change Day To Night On Video With Artificial Intelligence
In October, Nvidia showed off artificial intelligence that could generate realistic images of fake people. Now the company has gone on to produce fake videos with artificial intelligence as well.
The AI does a great job of changing a video from day to night, turning winter to summer, and even transforming a house cat into a cheetah. What is even more surprising is that the artificial intelligence system can do it all with far less training than any other AI system.
Just as with Nvidia's face-generation AI software, this new AI uses an algorithm called a generative adversarial network, also known as a GAN. Two neural networks work alongside each other: one makes the video or image, and the other criticizes the work. A GAN normally needs a great deal of labeled data so that it can learn how to generate data of its own. Generally, the system would have to look at pairs of images showing what a street looked like when it had snowed and when it was clear, and then it would generate an image of its own, with or without snow.
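For readers who want the GAN idea in code, here is a minimal, self-contained sketch in PyTorch. It is my own toy illustration of the general adversarial setup, not Nvidia's model; the network sizes, names, and learning rates are invented for the example.

```python
# Minimal GAN training step: a generator G proposes images, a
# discriminator D critiques them, and each learns from the other.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes, chosen arbitrarily

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))  # raw score; the loss applies the sigmoid

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                       # real: (batch, img_dim) tensor
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: real images should score 1, generated images 0.
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D score the fakes as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage: train_step(torch.rand(32, img_dim) * 2 - 1)  # one batch of "real" data
```

The "criticism" the article describes is the discriminator's loss: as D gets better at spotting fakes, G is forced to produce ever more convincing ones.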
AI Can Guess How A Street Would Look Covered With Snow Or Rain
The new artificial intelligence image-translation technology from Nvidia is able to use its imagination to show what a street would look like covered in snow, without ever having seen the street that way, according to researchers Jan Kautz and Ming-Yu Liu.
Liu went on to say that the team's research is shared with Nvidia's customers and product teams. He revealed that he could not comment on just how fast or to what extent the artificial intelligence would be adopted, but he did say that there are many potentially interesting applications for it. One example: it is very rare that they get rain in California, and they wanted to be sure that self-driving cars would operate properly in the rain. He said that they could use the artificial intelligence to turn footage of driving in sunny weather into driving in rain to train the self-driving cars.
While many applications are practical, the technology could also have whimsical ones. The researchers said to imagine being able to see how your home might look in the middle of winter, covered in snow, or what a wedding location could look like in the fall, when the ground is blanketed with leaves.
In The Future People Will Not Be Able To Distinguish Between Fake And Real
Of course, those who do not embrace AI technology have said that AI such as this could be used nefariously. If it were adopted widely, a person's ability to trust any image or video based on what their eyes tell them could be diminished. People would not know whether they were looking at reality or at AI-generated video.
There is the possibility that video evidence might become inadmissible in court, while fake news could overtake the internet as real video news becomes indistinguishable from footage generated by artificial intelligence. For the time being, AI like this is limited to a few applications, and until it makes its way into the hands of consumers there is really no way of telling just how it will impact society.
It seems like each week there’s some new development in artificial intelligence that causes everyone to freak out and proclaim the end of human superiority. Well, this is another one of those weeks. AI researchers at computing hardware manufacturer Nvidia have designed what is being billed as one of the first artificial intelligence networks with a working imagination. The system can create realistic (if not real) looking videos of fictional events using simple inputs, similar to how the human mind can imagine abstract or fictional scenarios based on a thought. Should we be frightened? How frightened?
Examples of the AI “imagination” showing how the system can change the weather in pre-recorded video clips without being fed clips of the target weather.
So far, not that frightened. The technology is still in its infancy and has only been used in what researchers call “image-to-image translation”: altering video clips and photos in small ways, such as changing the setting from night to day, changing human subjects’ hair color, or switching a dog to another breed. Still, that’s pretty impressive if you think about it. Nvidia’s Ming-Yu Liu says that their system is the first to be able to do so simply by ‘imagining’ the new image or scene, as opposed to prior similar systems, which faced the problem of having to compile massive sets of data based on prior examples and extrapolate from those data:
We are among the first to tackle the problem, [and] there are many applications. For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.
The potential applications of this technology have led some to wonder if similar AI networks might mean the “end of reality as we know it.” Whatever that means. Unless the whole universe spontaneously blinks out of existence, reality’s not going to end anytime soon. But I see what they mean; if completely real-looking video and audio can be generated by these AI networks, and those could be fed into, say, an advanced augmented reality setup or even some of the more Matrix-like brain-computer interfaces being developed, we could soon see the lines between virtual reality and physical reality become more difficult to distinguish. Until you take your headset off, that is. But what about when these experiences can be transmitted directly into your brain’s sensory centers?
The system can take a photo of a dog and change it into another dog. And here I thought reality-killing AI would be much scarier.
While it’s unlikely we’ll see the end of any reality, we might see the creation of fully-fledged alternate realities. Without a doubt, this technology will someday be used to distort or obfuscate the truth here in our own reality. What is reality other than what we make of it, anyway? Recent events have shown us that it’s getting more and more difficult to discern truth from fiction in mass media; what will happen once completely real-looking video can be conjured up from the twisted imaginations of rogue AI systems? Of course, the same fears arose over the invention of moving pictures. Is this just the latest advancement in graphic and animation software, or could something more nefarious be brewing?
12-12-2017 at 20:20
written by peter
Category: SF gadgets / Robotics and A.I. / Artificial Intelligence (E, F and NL)
11-12-2017
Advancements in AI Are Rewriting Our Entire Economy
Once theorized as visions of a future society, technologies like automation and artificial intelligence are now becoming a part of everyday life. These advancements in AI are already impacting our economy, both in terms of individual wealth and broader financial trends.
It’s long been theorized that a readily available machine workforce will make it more difficult for humans to keep their jobs, but automation may, in fact, offer up more even-handed consequences. Major changes are coming, but there’s reason to believe these changes could benefit a broader range of stakeholders — not just corporations who no longer have to worry about paying living wages (just parts and servicing).
“It is far more an opportunity for growth,” said Joshua Gans, holder of the Jeffrey S. Skoll chair of technical innovation and entrepreneurship at the University of Toronto’s Rotman School of Management. “At the moment, while some jobs have been replaced by automation, this has also led to job creation as well. So while there may be short-term disruption, the longer-term potential is very strong.”
While jobs that rely on manual labor may increasingly fall to machines, everything from the design of these systems to their upkeep has the potential to create new jobs for humans. There’s also the capacity for technology to augment the human workforce, allowing them to accomplish tasks that would otherwise be impossible. For instance, imagine a robotic suit that could allow a factory worker to lift objects so heavy they could never perform the task with their human strength alone (at least not without incurring injury). In a more general sense, these technologies stand to increase productivity, which would have far-reaching benefits.
“I don’t think they are going to disrupt the economy but instead make individuals and firms more efficient,” said Gans. “In other words, they are productivity enhancing.”
Automation might even allow some of us to escape the traditional work week. If you have access to a self-driving car, you could use it as a taxi service, collecting profits without having to be behind the wheel. This isn’t dissimilar to the basic concept of cryptocurrency mining, which puts the hardware to work in order to earn money for its human owners.
Assuming that individuals aren’t priced out of buying new hardware, this could make a huge shift in how we earn money. A basic income could be accrued from ownership of a machine that performs a task for others. Of course, a scenario like this would prompt questions of disparity in access: how could we ensure the rich won’t simply get richer, while the less wealthy are left behind?
MONEY MAKERS
The days of a standard 40-hour work week seem to be coming to an end. In an ideal world, we’d all be able to provide for ourselves by leveraging a robotic workforce on a personal scale to earn money. In practice, it’s much more likely that corporations are going to be able to invest in this infrastructure well before individuals can do so.
Universal basic income (UBI) has been touted as one solution to decreased job opportunities for humans. Automation could even foot the bill via a tax on robotic workers – though critics of this idea have suggested it could discourage widespread adoption.
Proponents have argued that UBI could foster entrepreneurship, and even have a positive impact on the economy. It would be a huge shift in its own right, and there could be smaller changes to be made in the meantime that would ease the transition toward a greater reliance on automation and AI.
“In countries with a well established social safety net and non-employer related health insurance, the transition will be much easier,” said Gans. “That said, there are few companies that have good programs for mid-career retraining. So this is an area that could use some significant public policy effort.”
This much is clear: these technologies are already beginning to change the way we work. It’s of crucial importance that we start preparing for greater changes as soon as possible. Automation and AI could have a positive effect on wealth disparity and quality of life for the average person. However, if they aren’t employed with the proper care and consideration, they also have the potential to bring about the opposite effect.
In the months running up to the 2016 election, the Democratic National Committee was hacked. Documents were leaked, fake news propagated across social media — the hackers, in short, launched a systematic attack on American democracy.
Whether or not that’s war, however, is a matter for debate. In the simplest sense, an act of cyber warfare is defined as an attack by one nation on the digital infrastructure of another.
These threats are what Samuel Woolley, research director of the Digital Intelligence Lab at the Institute for the Future, calls “computational propaganda,” which he defines as the spread of disinformation and politically motivated attacks designed using “algorithms, automation, and human curation,” and launched via the internet, particularly social media. In a statement to Futurism, Woolley added that these attacks are “assailing foundational parts of democracy: the press, open civic discourse, the right to privacy, and free elections.”
Attacks like the ones preceding the 2016 election may be a harbinger of what’s to come: We are living in the dawn of an age of digital warfare — more pernicious and less visible than conventional battles, with skirmishes that don’t culminate in confrontations like Pearl Harbor or 9/11.
Our definitions of warfare — its justifications, its tactics — are transforming. Already, there’s a blurry line between threats to a nation’s networks and those that occur on its soil. As Adrienne LaFrance writes in The Atlantic, an act of cyber warfare must be considered an act of war.
A War of 0s and 1s
A little over a decade ago, the United States Cyber Command began developing what would become the world’s first digital weapon: a malicious computer worm known as Stuxnet. It was intended to be used against the government of Iran to stymie its nuclear program, as The New York Times reported. In the true spirit of covert operations and military secrecy, the U.S. government has never publicly taken credit for Stuxnet, nor has the government of Israel, with whom the U.S. reportedly teamed up to unleash it.
Stuxnet’s power is based on its ability to capitalize on software vulnerabilities in the form of a “zero day exploit.” The virus infects a system silently, without requiring the user to do anything, like unwittingly download a malicious file, in order for the worm to take effect. And it didn’t just run rampant through Iran’s nuclear system — the worm spread through Windows systems all over the world. That happened in part because, in order to enter into the system in Iran, the attackers infected computers outside the network (but that were believed to be connected to it) so that they would act as “carriers” of the virus.
As its virulence blossomed, however, analysts began to realize that Stuxnet had become the proverbial first shot in a cyber war.
Like war that takes place in the physical world, cyber warfare targets and exploits vulnerabilities. Nation-states invest a great many resources to gather intelligence about the activities of other nations. They identify a nation’s most influential people in government and in society at large, which may come in useful when trying to sway public opinion for or against a number of sociopolitical issues.
image credit: pixabay
Gathering nitty-gritty details of another country’s economic insecurities, its health woes, and even its media habits is standard fare in the intelligence game; figuring out where it would “hurt the most” if a country were to launch an attack is probably about efficiency as much as it is efficacy.
Historically, gathering intel was left to spies who risked life and limb to physically infiltrate a building (an agency, an embassy), pilfer documents, files, or hard drives, and escape. The more covert these missions, and the less they could alarm the owners of these targets, the better. Then, it was up to analysts, or sometimes codebreakers, to make sense of the information so that military leaders and strategists could refine their plan of attack to ensure maximum impact.
The internet has made acquiring that kind of information near-instantaneous. If a hacker knows where to look for the databases, can break through digital security measures to access them, and can make sense of the data these systems contain, he or she can acquire years’ worth of intel in just a few hours, or even minutes. The enemy state could start using the sensitive information before anyone realizes that something’s amiss. That kind of efficiency makes James Bond look like a slob.
In 2011, then-Defense Secretary Leon Panetta described the imminent threat of a “cyber Pearl Harbor” in which an enemy state could hack into digital systems to shut down power grids or even go a step beyond and “gain control of critical switches and derail passenger trains, or trains loaded with lethal chemicals.” In 2014, TIME Magazine reported that there were 61,000 cybersecurity breaches in the U.S. that year; the then-Director of National Intelligence ranked cybercrime as the number one security threat to the United States, according to TIME.
Computer viruses, denial-of-service (DoS) attacks, even physically damaging a power grid — the strategies for war in the fifth domain are still evolving. Hacking crimes have become fairly common occurrences for banks, hospitals, retailers, and college campuses. But if these epicenters of a functioning society are crippled by even the most “routine” cybercrimes, you can only imagine the devastation that would follow an attack with the resources of an enemy state’s entire military behind it.
image credit: pixabay
Nations are still keeping their cards close to their chests, so no one is really certain which countries are capable of attacks of the largest magnitude. China is a global powerhouse of technology and innovation, so it’s safe to assume its government has the means to launch a large-scale cyber attack. North Korea, too, could have the technology — and, as its relationship with other countries becomes increasingly adversarial, more motivation to refine it. After recent political fallout between North Korea and China, Russia reportedly stepped in to provide North Korea with internet access — a move that could signal a powerful alliance is brewing. Russia is the biggest threat as far as the United States is concerned; the country has proven itself to be both a capable and engaged digital assailant.
The Russian influence had a clear impact on the 2016 election, but this type of warfare is still new. There is no Geneva Convention, no treaty, that guides how any nation should interpret these attacks, or react to them. To get that kind of rule, global leaders would need to look at the ramifications for the general population and determine how cyberwar affects citizens.
At present, there is no guiding principle for deciding when (or even if) to act on a perceived act of cyberwarfare. The limbo is further complicated by the fact that, if those in power have benefited from, or even orchestrated, the attack itself, what incentive do they have to retaliate?
If cyber war is still something of a Wild West, it’s clearly citizens who will become the casualties. Our culture, economy, education, healthcare, livelihoods, and communication are inextricably tethered to the internet. If an enemy state wanted a more “traditional” attack (a terrorist bombing or the release of a chemical agent, perhaps) to have maximum impact, why not preface it with a ransomware attack that freezes people out of their bank accounts, shuts down hospitals, isolates emergency responders, and ensures that citizens have no way to communicate with their family members in a period of inescapable chaos?
As cybersecurity expert and author Alexander Klimburg explained to Vox, a full-scale cyber attack would result in damage “equivalent to a solar flare in terms of damaging infrastructure.” In short, it would be devastating.
A New Military Strategy
In summer 2016, a group called the Shadow Brokers began leaking highly classified information about the arsenal of cyberweaponry at the National Security Agency (NSA), including cyber weapons actively in development. The agency still doesn’t know whether the leak came from someone within the NSA, or if a foreign faction infiltrated Tailored Access Operations (the NSA’s designated unit for cyber warfare intelligence-gathering).
In any case, the breach of a unit that should have been among the government’s most impervious was unprecedented in American history. Aghast at the gravity of such a breach, Microsoft President Brad Smith compared the situation “to Tomahawk missiles being stolen from the military,” and penned a scathing blog post calling out the U.S. government for its failure to keep the information safe.
The last time such a leak shook the NSA was in 2013, when Edward Snowden released classified information about the agency’s surveillance practices. But as experts have pointed out, the information the Shadow Brokers stole is far more damaging. If Snowden released what were effectively battle plans, then the Shadow Brokers released the weapons themselves, as The New York Times analogized.
Earlier this year, a ransomware attack known as “WannaCry” began traversing the web, striking organizations from universities in China to hospitals in England. A similar attack hit IDT Corporation, a telecommunications company based in Newark, New Jersey, in April, when it was spotted by the company’s global chief operations officer, Golan Ben-Oni. As Ben-Oni told the New York Times, he knew at once that this kind of ransomware attack was different than others attempted against his company — it didn’t just steal information from the databases it infiltrated, but rather it stole the credentials required to access those databases. This kind of attack means that hackers could not only take that information undetected, but they could also continuously monitor who accesses that information.
WannaCry and the IDT attack both relied upon the cyber weapons stolen and released by the Shadow Brokers, effectively using them against the government that developed them. WannaCry featured EternalBlue, which used unpatched Microsoft servers to spread malware (North Korea used it to spread the ransomware to 200,000 global servers in just 24 hours). The attack on IDT also used EternalBlue, but added to it another weapon called DoublePulsar, which penetrates systems without tripping their security measures. These weapons had been designed to be damaging and silent. They spread rapidly and unchecked, going undetected by antivirus software all over the world.
The weapons were powerful and relentless, just as the NSA intended. Of course, what the NSA had not intended was that the U.S. would wind up at their mercy. As Ben-Oni lamented to the New York Times, “You can’t catch it, and it’s happening right under our noses.”
“The world isn’t ready for this,” he said.
The Best Defense
The average global citizen may feel disenfranchised by their government’s apparent lack of preparedness, but defending against the carnage of cyber warfare really begins with us, starting with a long-overdue reality check concerning our relationship with the internet. Even if federal agencies aren’t as digitally secure as some critics might like, the average citizen can still protect herself.
“The first and most important point is to be aware that this is a real threat, that this potentially could happen,” cybersecurity expert Dr. Eric Cole told Futurism. Cole added that, for lay people, the best defense is knowing where your information is being stored electronically and making local backups of anything critical. Even services like cloud storage, which are often touted as being safer, wouldn’t be immune to targeted attacks that destroy the supportive infrastructure — or the power grids that keep that framework up and running.
“We often love going and giving out tons of information and doing everything electronic,” Cole told Futurism, “but you might want to ask yourself: Do I really want to provide this information?”
Some experts, however, argue that your run-of-the-mill cyber attack against American businesses and citizens should not be considered an act of war. The term “war” comes with certain trappings — governments get involved, resources are diverted, and the whole situation escalates overall, Thomas Rid, professor and author, recently told The Boston Globe. That kind of intensity might, in fact, be counterproductive for small-scale attacks, ones where local authorities might be the ones best equipped to neutralize a threat.
As humans evolve, so too do the methods with which we seek to destroy each other. The advent of the internet allows for a new kind of warfare — a much quieter one. One that is fought remotely, in real time, that’s decentralized and anonymized. One in which robots and drones take the heat and do our bidding, or where artificial intelligence tells us when it’s time to go to war.
Cyber warfare isn’t unlike nuclear weapons — countries develop them in secret, and should they be deployed, it would be citizens who suffer more than their leaders. “Mutually assured destruction” would be a near guarantee. Treaties mandating transparency have worked to keep nuclear weapons in their stockpiles and away from deployment. Perhaps the same could work for digital warfare?
We may be able to foretell what scientific and technological developments are on the horizon, but we can only guess at what humanity will do with them.
Humans made airplanes. These allowed them to fly above the clouds… and they used them to drop bombs on each other.
Get your political jokes ready, because this story is sure to generate tons of suggestions on how it can be beneficial to everyone in Washington, London, Beijing, Berlin, Pyongyang and anyplace else where people seem to be acting like monkeys. A team of neuroscientists has injected electrical instructions into the premotor cortex of monkeys, prompting the animals to complete actions without any other instructions, cues or stimuli. Can the instruction be to just shut up?
“What we are showing here is that you don’t have to be in a sensory-receiving area in order for the subject to have an experience that they can identify.”
In their study, published in the journal Neuron, neuroscientists Dr. Kevin A. Mazurek and Dr. Marc H. Schieber describe how they used two rhesus monkeys to demonstrate how instructions can be sent to the premotor cortex using injections of electrical stimuli. The premotor cortex is the part of the motor cortex in the brain’s frontal lobe that controls the planning, control, and execution of voluntary movements. The premotor cortex feeds directly to the spinal cord, but its functions are not fully understood … which is why these two neuroscientists met with two rhesus monkeys.
I thought I was in the line for the banana eating experiment
The experiment was relatively simple. The monkeys were put in front of a panel of knobs (a great nickname for Congress) and trained to perform one of four specific tasks with a knob when one was lit by LEDs. At the same time, a mild microstimulus was applied via implanted electrodes to one of four areas in their premotor cortex. This stimulus was just a brief buzz and did not control any of the movements, since the premotor cortex is not part of the brain’s perception process.
Once the monkeys learned the tasks, the lights were turned off but the microstimulations continued, and the monkeys were able to move the correct knobs in the proper way when microbuzzed. To prove that the areas of the premotor cortex were not predisposed to the movements, the researchers switched the electronic impulse injectors around and retrained the subjects. When the lights were turned off, the monkeys continued to move the proper knobs. (Does this sound like training them to vote?)
Of course, the researchers say this experiment has nothing to do with mind control in humans. Dr. Mazurek has other plans for it, as he explains in an interview with The New York Times:
“This could be very important for people who have lost function in areas of their brain due to stroke, injury, or other disease. We can potentially bypass the damaged part of the brain where connections have been lost and deliver information to an intact part of the brain.”
The example Mazurek gives correlates the experiment to learning that a red light while driving means to put a foot on the brake pedal. If parts of the brain’s chain of command to complete this task are damaged, a stimulus could replace them.
Why did I pick this up? I hate when that happens.
The next step is to conduct the experiments on humans and eliminate the visual LED stimulus. If that’s successful, it means the “information” or instructions can be “injected” without the person knowing it. Now THAT sounds like mind control.
Fortunately, before the researchers try this on humans, they will continue to perform their tests on politicians. (You knew it was coming.)
Category: SF gadgets / Robotics and A.I. / Artificial Intelligence (E, F and NL)
08-12-2017
Scientists ‘Inject’ Information Into Monkeys’ Brains
Credit: Christoph Hitz
When you drive toward an intersection, the sight of the light turning red will (or should) make you step on the brake. This action happens thanks to a chain of events inside your head.
Your eyes relay signals to the visual centers in the back of your brain. After those signals get processed, they travel along a pathway to another region, the premotor cortex, where the brain plans movements.
Now, imagine that you had a device implanted in your brain that could shortcut the pathway and “inject” information straight into your premotor cortex.
That may sound like an outtake from “The Matrix.” But now two neuroscientists at the University of Rochester say they have managed to introduce information directly into the premotor cortex of monkeys. The researchers published the results of the experiment on Thursday in the journal Neuron.
Although the research is preliminary, carried out in just two monkeys, the researchers speculated that further research might lead to brain implants for people with strokes.
“You could potentially bypass the damaged areas and deliver stimulation to the premotor cortex,” said Kevin A. Mazurek, a co-author of the study. “That could be a way to bridge parts of the brain that can no longer communicate.”
In order to study the premotor cortex, Dr. Mazurek and his co-author, Dr. Marc H. Schieber, trained two rhesus monkeys to play a game.
The monkeys sat in front of a panel equipped with a button, a sphere-shaped knob, a cylindrical knob, and a T-shaped handle. Each object was ringed by LED lights. If the lights around an object switched on, the monkeys had to reach out their hand to it to get a reward — in this case, a refreshing squirt of water.
Each object required a particular action. If the button glowed, the monkeys had to push it. If the sphere glowed, they had to turn it. If the T-shaped handle or cylinder lit up, they had to pull it.
After the monkeys learned how to play the game, Dr. Mazurek and Dr. Schieber had them play a wired version. The scientists placed 16 electrodes in each monkey’s brain, in the premotor cortex.
Each time a ring of lights switched on, the electrodes transmitted a short, faint burst of electricity. The patterns varied according to which object the researchers wanted the monkeys to manipulate.
As the monkeys played more rounds of the game, the rings of light dimmed. At first, the dimming caused the monkeys to make mistakes. But then their performance improved.
Eventually the lights went out completely, yet the monkeys were able to use only the signals from the electrodes in their brains to pick the right object and manipulate it for the reward. And they did just as well as with the lights.
This hints that the sensory regions of the brain, which process information from the environment, can be bypassed altogether. The brain can devise a response by receiving information directly, via electrodes.
One alternative explanation was that the stimulation simply made the monkeys twitch, and that those twitches cued them to the right objects. But Dr. Mazurek and Dr. Schieber were able to rule out this possibility by seeing how short they could make the pulses. With a jolt as brief as a fifth of a second, the monkeys could still master the game without lights. Such a pulse was too short to cause the monkeys to jerk about.
“The stimulation must be producing some conscious perception,” said Paul Cheney, a neurophysiologist at the University of Kansas Medical Center, who was not involved in the new study.
But what exactly is that something? It’s hard to say. “After all, you can’t easily ask the monkey to tell you what they have experienced,” Dr. Cheney said.
Dr. Schieber speculated that the monkeys “might feel something on their skin. Or they might see something. Who knows what?”
What makes the finding particularly intriguing is that the signals the scientists delivered into the monkey brains had no underlying connection to the knob, the button, the handle or the cylinder.
Once the monkeys started using the signals to grab the right objects, the researchers shuffled them into new assignments. Now different electrodes fired for different objects — and the monkeys quickly learned the new rules.
“This is not a prewired part of the brain for built-in movements, but a learning engine,” said Michael A. Graziano, a neuroscientist at Princeton University who was not involved in the study.
Dr. Mazurek and Dr. Schieber only implanted small arrays of electrodes into the monkeys. Engineers are working on implantable arrays that might include as many as 1,000 electrodes. So it may be possible one day to transmit far more complex packages of information into the premotor cortex.
Dr. Schieber speculated that someday scientists might be able to use such advanced electrodes to help people who suffer brain damage. Strokes, for instance, can destroy parts of the brain along the pathway from sensory regions to areas where the brain makes decisions and sends out commands to the body.
Implanted electrodes might eavesdrop on neurons in healthy regions, such as the visual cortex, and then forward information into the premotor cortex.
“When the computer says, ‘You’re seeing the red light,’ you could say, ‘Oh, I know what that means — I’m supposed to put my foot on the brake,’” said Dr. Schieber. “You take information from one good part of the brain and inject it into a downstream area that tells you what to do.”
The new organ models that researchers have 3D printed don’t just look like the real deal; they feel like it, too.
Researchers can attach sensors to the organ models to give surgeons real-time feedback on how much force they can use during surgery without damaging the tissue. Credits: McAlpine Research Group.
3D printing has taken the world by storm, and medicine especially can benefit from the technology. So far, people have 3D printed human cartilage, skin, and even artificial limbs — and we’ve just started to scratch the surface of what 3D printing can do. Now, researchers from the University of Minnesota have developed artificial organ models which look incredibly realistic.
“We are developing next-generation organ models for pre-operative practice. The organ models we are 3D printing are almost a perfect replica in terms of the look and feel of an individual’s organ, using our custom-built 3D printers,” said lead researcher Michael McAlpine, an associate professor of mechanical engineering at the University of Minnesota’s College of Science and Engineering.
The 3D-printed structures mimic not only the appearance of real organs but also their mechanical properties, look, and feel. They include soft sensors which can be customized depending on the desired organ. The sensors offer real-time feedback on how much force is being applied to them, notifying doctors when they are close to damaging the organ.
The technology could help students get a better feel for real organs and learn how to improve surgical skills. For doctors, it could help them prepare for complex surgeries. It’s a great step forward from previous models of artificial organs, which were generally made from hard, unrealistic plastic.
“We think these organ models could be ‘game-changers’ for helping surgeons better plan and practice for surgery. We hope this will save lives by reducing medical errors during surgery,” McAlpine added.
In the future, researchers want to develop even more complex organs, as well as start incorporating defects or deformities. For instance, they could add a patient-specific inflammation or a tumor to an organ, based on a previous scan, enabling doctors to visualize and prepare for an intervention.
Lastly, this could ultimately pave the way for 3D printing real, functioning organs. There’s no fundamental reason why we can’t do this; we’re just not there yet. This invention could be a stepping stone toward such advancements.
“If we could replicate the function of these tissues and organs, we might someday even be able to create ‘bionic organs’ for transplants,” McAlpine said. “I call this the ‘Human X’ project. It sounds a bit like science fiction, but if these synthetic organs look, feel, and act like real tissue or organs, we don’t see why we couldn’t 3D print them on demand to replace real organs.”
The research was published today in the journal Advanced Materials Technologies.
Researchers at the University of California, Berkeley, have developed a robot that has the ability to learn like a human toddler, allowing it to predict outcomes. Called Vestri, the robot is capable of learning all by itself with no human supervision required.
Teaching a robot how to play
When toddlers play with toys, they’re doing more than just entertaining themselves. Effectively, with every twist or throw, the children learn about how the world works. By manipulating objects, toddlers learn on their own how those objects respond, and they can then form judgments about how the objects will likely behave in the future if used in the same way.
This great learning strategy, sometimes called “motor babbling,” has been emulated by the American scientists in the Vestri robot. The technology in question, called “visual foresight,” effectively enables the robot to imagine what its next action should be and what the likeliest consequences might look like, and then to take action based on the best results.
“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” said UC Berkeley assistant professor Sergey Levine, lead author of the study, which was presented at the Neural Information Processing Systems conference. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”
Scientists hope that in the future such technology could enable self-driving cars to predict the road ahead, but for now at least, this ‘robotic imagination’ is fairly simple and limited. Vestri can make predictions only several seconds into the future, but even that’s enough to help it figure out how to best move objects around on a table without disturbing obstacles. Vestri chose the right path around 90 per cent of the time.
What’s crucial about this skill set is that neither human intervention nor prior knowledge of physics is required. Everything Vestri has learned, it has learned from scratch through unattended, unsupervised exploration — ‘playing’ with objects on a table.
Having trained itself, Vestri is able to build a predictive model of its surroundings. It then uses this model to manipulate new objects it has never encountered before. The predictions are produced in the form of video scenes that have not actually happened but could happen if an object were pushed in a certain way.
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Levine. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
Because Vestri’s video predictions rely on observations made autonomously by the robot through camera images, the method is general and broadly applicable. That’s in contrast to conventional computer vision techniques which require human supervision to label thousands or even millions of images.
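As a rough sketch of how this kind of "imagination-based" planning can work, here is a toy reconstruction of the general model-predictive idea, not the Berkeley code. The functions `predict_video` and `score_goal` are assumed stand-ins for the robot's learned video-prediction model and a measure of how close an imagined outcome is to the goal.

```python
# Toy "visual foresight" planner: sample candidate action sequences,
# imagine each one with a learned video-prediction model, and execute
# the sequence whose imagined outcome best matches the goal.
import numpy as np

def plan(predict_video, score_goal, current_image,
         horizon=5, n_candidates=100, action_dim=2):
    """predict_video(image, actions) -> imagined future frames (assumed
    learned from the robot's own unsupervised play).
    score_goal(frames) -> higher is closer to the desired outcome."""
    best_actions, best_score = None, -np.inf
    for _ in range(n_candidates):
        # Random candidate: a short sequence of pusher motions.
        actions = np.random.uniform(-1, 1, size=(horizon, action_dim))
        imagined = predict_video(current_image, actions)  # the "imagination"
        score = score_goal(imagined)
        if score > best_score:
            best_actions, best_score = actions, score
    return best_actions  # execute, observe, then re-plan, MPC style
```

The key point matches the article: nothing here needs labeled data or hand-coded physics; the only ingredient is a prediction model learned from the robot's own play.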
Next, the Berkeley researchers want to expand the number of objects Vestri is able to play with, and also to enhance the movements it’s capable of making. By expanding its repertoire, the researchers hope to make Vestri more versatile and adaptable to all sorts of environments.
“This can enable intelligent planning of highly flexible skills in complex real-world situations,” Levine concluded.
MIT researchers have developed “living” tattoos. They rely on a novel 3D printing technique based on ink made from genetically programmed cells.
Image credits Xinyue Liu et al., 2017, Advanced Materials.
There seems to be a growing interest in living, 3D-printable inks these days. Just a few days ago, we saw how scientists in Zurich plan to use them to create microfactories that can scrub, produce, and sense different chemical compounds. Now, MIT researchers led by Xuanhe Zhao and Timothy Lu, two professors at the institute, are taking that concept and putting it in your skin.
The technique is based on cells programmed to respond to a wide range of stimuli. After mixing in some hydrogel to keep everything together and nutrients to keep all the inhabitants happy and fed, the inks can be printed, layer by layer, to form interactive 3D devices.
The team demonstrated the technique’s efficacy by printing a “living” tattoo, a thin transparent patch of live bacteria in the shape of a tree. Each branch is designed to respond to a different chemical or molecular input. Applying such compounds to areas of the skin causes the ‘tree’ to light up in response. The team says the technique can be used to manufacture active materials for wearable tech, such as sensors or interactive displays. Different cell patterns can be used to make these devices responsive to environmental changes, from chemicals, pollutants, or pH shifts to more everyday concerns such as temperature.
The researchers also developed a model to predict the interactions between different cells in any structure under a wide range of conditions. Future work with the printing technique can draw on this model to tailor the responsive living materials to various needs.
Why bacteria?
Previous attempts to 3D print genetically-engineered cells that can respond to certain stimuli have had little success, says co-author Hyunwoo Yuk.
“It turns out those cells were dying during the printing process, because mammalian cells are basically lipid bilayer balloons,” he explains. “They are too weak, and they easily rupture.”
So they went with bacteria and their hardier cell-wall structure. Bacteria don’t usually clump together into organisms, so they have very beefy walls (compared to the cells in our body, for example) meant to protect them in harsh conditions. These come in very handy when the ink is forced through the printer’s nozzle. Also unlike mammalian cells, bacteria are compatible with most hydrogels — mixes of water and some polymer. The team found that a hydrogel based on pluronic acid was the best home for their bacteria while keeping an ideal consistency for 3D printing.
“This hydrogel has ideal flow characteristics for printing through a nozzle,” Zhao says. “It’s like squeezing out toothpaste. You need [the ink] to flow out of a nozzle like toothpaste, and it can maintain its shape after it’s printed.”
“We found this new ink formula works very well and can print at a high resolution of about 30 micrometers per feature. That means each line we print contains only a few cells. We can also print relatively large-scale structures, measuring several centimeters.”
Gettin’ inked
The team printed the ink using a custom 3D printer they built — it’s based largely on standard elements plus a few fixtures the team machined themselves.
A pattern of hydrogel mixed with cells was printed in the shape of a tree on an elastomer base. After printing, they cured the patch by exposing it to ultraviolet radiation. They then put the transparent elastomer layer onto a test subject’s hand after smearing several chemical samples on his skin. Over several hours, branches of the patch’s tree lit up when bacteria sensed their corresponding stimuli.
Logic gates created with the bacteria-laden ink. Such structures form the basis of computer hardware today. Image credits: Xinyue Liu et al., 2017, Advanced Materials.
The team also designed certain bacterial strains to work only in tandem with other elements. For instance, some cells will only light up when they receive a signal from another cell or group of cells. To test this system, scientists printed a thin sheet of hydrogel filaments with input (signal-producing) bacteria and chemicals, and overlaid that with another layer of filaments of output (signal-receiving) bacteria. The output filaments only lit up when they overlapped with the input layer and received a signal from them.
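In computing terms, the overlap experiment behaves like an AND gate: an output cell lights up only if it both touches an input filament and that filament is producing signal. Here is a toy truth-table sketch of that behavior, my own illustration rather than the MIT design:

```python
# Toy model of the printed bacterial "wiring": the output filament
# fluoresces only where it overlaps a signaling input filament,
# which is exactly the behavior of a two-input AND gate.
def output_lights_up(overlaps_input: bool, input_signaling: bool) -> bool:
    # Both conditions must hold, as in the overlap experiment above.
    return overlaps_input and input_signaling

for overlap in (False, True):
    for signal in (False, True):
        print(f"overlap={overlap!s:5} signal={signal!s:5} -> "
              f"lit={output_lights_up(overlap, signal)}")
```

Chain enough such gates together and, in principle, you get the "living computer" Yuk describes next.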
Yuk says in the future, their tech may form the basis for “living computers”, structures with multiple types of cells that communicate back and forth like transistors on a microchip. Even better, such computers should be perfectly wearable, Yuk believes.
Until then, they plan to create custom sensors in the form of flexible patches and stickers aimed at detecting a wide variety of chemical and biochemical compounds. The MIT scientists also want to expand the living tattoo’s uses in a direction similar to the one developed at ETH Zurich, manufacturing patches that can produce compounds such as glucose and release them into the bloodstream over time. And, “as long as the fabrication method and approach are viable,” applications such as implants and ingestibles aren’t off the table either, the authors conclude.
The paper “3D Printing of Living Responsive Materials and Devices” has been published in the journal Advanced Materials.
Do you trust Elon Musk and Stephen Hawking when they warn that artificial intelligence, particularly in autonomous weapons, may be advancing faster than we can control it? Have you ever been steered wrong by a Google map? Would you trust Google with more difficult tasks, like creating artificial intelligence? Do you believe Google has the best interests of humanity in mind in all that it does? Would you be excited if Google announced it had developed an artificial intelligence that created its own AI child that can outperform humans? Would you like to check with Elon and Stephen again? Do you think it’s too late?
Researchers at Google Brain – a name that seems to be becoming more oxymoronic by the day – announced this week that they have developed an artificial intelligence called AutoML, which is short for Automated Machine Learning, but the ‘M’ could also stand for ‘Mother’ because its main purpose is to develop and generate its own artificial intelligences. You could call this new AI a ‘child’ but you’d be too late because Google Brain has already thought of that. However, to reduce the possibility of panic, its official name is the more innocent NASNet.
NASNet? Won’t the other AI kids call him Nazzy?
In a post on the Google Research blog, the researchers explain that AutoML is more than just a parent — it’s a teacher as well. In the described experiment, AutoML trains its child NASNet to recognize objects in a video – things like people, cars, clothing items, etc. If this sounds like a human parent pointing to pictures in a book and getting their child to say “cat,” you’re right. If you can imagine that human parent correcting the child who said “cow” instead of “cat,” you’ve also described what AutoML does to NASNet. That doesn’t sound so bad, does it?
Oh, you gullible humans. Unlike a human parent, AutoML can correct and repeat this training thousands of times without getting frustrated, hungry or tired, and NASNet can endure this repetitive training without getting fidgety or needing to use the bathroom. Once the education was complete, NASNet was tested on two well-known datasets — ImageNet for images and COCO for objects — and outperformed all other computer vision systems.
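To give a flavor of what "an AI generating its own AI" means in practice, here is a deliberately tiny architecture-search loop. It is a toy sketch of the concept only: Google's AutoML uses a learned reinforcement-learning controller rather than the random mutation below, and every name and number here is invented.

```python
# Toy neural-architecture search: a "parent" proposes child
# architectures, each child is scored, and the parent keeps
# improving on the best child found so far.
import random

SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def propose(history):
    """Parent controller. Real AutoML learns what to propose next;
    here we just mutate the best child so far (hill climbing)."""
    if not history:
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    best, _ = max(history, key=lambda h: h[1])
    child = dict(best)
    key = random.choice(list(SEARCH_SPACE))      # mutate one design choice
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evaluate(arch):
    """Stand-in for 'train the child, measure validation accuracy'.
    A made-up score so the loop runs end to end."""
    return arch["layers"] * arch["filters"] / (1 + arch["kernel"])

history = []
for step in range(20):
    arch = propose(history)
    history.append((arch, evaluate(arch)))

best_arch, best_score = max(history, key=lambda h: h[1])
print("best child architecture:", best_arch)
```

The expensive part in reality is `evaluate`: each candidate child is a full neural network that must be trained before it can be scored, which is why this kind of search takes enormous amounts of compute.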
Think about that for a minute. A machine made by a machine outperformed the best machines made by humans. Are we ready for this? Is Google?
Is this the future?
“We suspect that the image features learned by NASNet on ImageNet and COCO may be reused for many computer vision applications. Thus, we have open-sourced NASNet for inference on image classification and for object detection in the Slim and Object Detection TensorFlow repositories.”
Open source! Without any standards or regulations in place, Google Brain (do you see the oxymoron yet?) has unleashed its AI and its fast-learning child upon the world. Alphabet’s (Google’s parent company) own DeepMind company, which is supposed to be working on issues concerning the moral and ethical development of AI, didn’t have anything to say.
It’s easy to see how advanced object recognition will help applications like driverless cars. Are we ready for driverless cars to beget driverless golf carts that are smarter than their parents? Will they take us where we want to go … or where they plan to dump us?
Around the world, lists of patients in need of an organ transplant are often longer than the lists of those willing (and able) to donate — in part because some of the most in-demand organs for transplant can only be donated after a person has died. By way of example, recent data from the British Heart Foundation (BHF) showed that the number of patients waiting for a heart transplant in the United Kingdom has grown by 162 percent in the last ten years.
Now, 50 years after the first successful heart transplant, experts believe we may be nearing an era where organ transplantation will no longer be necessary. “I think within ten years we won’t see any more heart transplants, except for people with congenital heart damage, where only a new heart will do,” Stephen Westaby, from the John Radcliffe Hospital in Oxford, told The Telegraph.
Westaby didn’t want to seem ungrateful for all the human lives saved by organ transplants, of course. On the contrary, he said that he’s a “great supporter of cardiac transplantation.” However, recent technological developments in medicine may well offer alternatives that could save more time, money, and lives.
“I think the combination of heart pumps and stem cells has the potential to be a good alternative which could help far more people,” Westaby told The Telegraph.
An Era of Artificial Organs
Foremost among these medical advances, and one that, while controversial, has continued to demonstrate potential, is the use of stem cells. Granted, applications for stem cells are somewhat limited, though that is down more to ethical considerations than to scientific limitations. Still, the studies that have been done with stem cells have shown that it is possible to grow organs in a lab, which could then be implanted.
Science has also made it possible to produce artificial organs using another technological marvel, 3D printing. When applied to medicine, the technique is referred to as 3D bioprinting — and the achievements in the emerging technique have already been quite remarkable.
Other technologies are also making it possible to produce synthetic organs; a 2016 study, for example, demonstrated a method for growing bioartificial kidneys.
For his part, Westaby is involved in several projects working to continue improving the process: one uses stem cells to reverse the scarring of heart tissue, which could improve the quality of life for patients undergoing coronary bypass. Westaby is also working on developing better hardware for these types of surgical procedures, including inexpensive titanium mechanical heart pumps.
Together with 3D bioprinting, such innovations could well become the answer to donor shortages. The future of regenerative medicine is synthetic organs that could easily, affordably, and reliably be printed for patients on demand.
Everything in the universe is made up of atoms — except, of course, atoms themselves. They’re made up of subatomic particles, namely, protons, neutrons, and electrons. Electrons are classified as leptons, while protons and neutrons are built from a class of particles known as quarks. Though “known” may be a bit misleading: there is far more that theoretical physicists don’t know about these particles than they know with any degree of certainty.
As far as we know, quarks are fundamental particles of the universe. You can’t break a quark down into any smaller particles. Imagining them as uniformly minuscule is not quite accurate, however: while they are all tiny, they are not all the same size. Some quarks are larger than others, and they can also join together to create mesons (1 quark + 1 antiquark) or baryons (3 quarks of various flavors).
We’ve identified six possible quark flavors: up, down, top, bottom, charm, and strange. As mentioned, they usually group either into quark-antiquark pairs or a quark threesome — so long as the charges add up to an integer. In a proton, for instance, the charges (+⅔, +⅔, and −⅓) add up to positive 1.
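That charge arithmetic is easy to check for yourself. The snippet below is just bookkeeping with the standard quark charges (up-type quarks carry +⅔, down-type −⅓, and an antiquark flips the sign); it is an illustration, not a physics library.

```python
from fractions import Fraction

CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def total_charge(quarks, antiquarks=()):
    """Sum quark charges; an antiquark carries the opposite charge."""
    return sum(CHARGE[q] for q in quarks) - sum(CHARGE[q] for q in antiquarks)

print(total_charge(["up", "up", "down"]))          # proton: 1
print(total_charge(["up", "down", "down"]))        # neutron: 0
print(total_charge(["up"], antiquarks=["down"]))   # pi+ meson: 1
```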
The so-called tetraquark has long eluded scientists: a hadron that would require two quark-antiquark pairs, held together by the strong force. Now, it’s not enough for them to simply pair off and only interact with their partner. To be a true tetraquark, all four quarks would need to interact with one another; behaving as quantum swingers, if you will.
“Quarky” Swingers
It might seem like a pretty straightforward concept: throw four quarks together and they’re bound to interact, right? Well, not necessarily. And that would be assuming they’d pair off stably in the first place, which isn’t a given. As Marek Karliner of Tel Aviv University explained to LiveScience, two quarks aren’t any more likely to pair off in a stable union than two random people you throw into an apartment together. When it comes to both people and quarks, close proximity doesn’t ensure chemistry.
“The big open question had been whether such combinations would be stable, or would they instantly disintegrate into two quark-antiquark mesons,” Karliner told Futurism. “Many years of experimental searches came up empty-handed, and no one knew for sure whether stable tetraquarks exist.”
Most discussions of tetraquarks up until recently involved those “ad-hoc” tetraquarks; the ones where four quarks were paired off, but not interacting. Finding the bona-fide quark clique has been the “holy grail” of theoretical physics for years – and we’re agonizingly close.
Recalling that quarks are not something we can actually see, it probably goes without saying that predicting the existence of such an arrangement would be incredibly hard to do. The laws of physics seemed to dictate that it would be impossible for four quarks to come together and form a stable hadron. But two physicists found a way to simplify (as much as you can “simplify” quantum mechanics) the approach to the search for tetraquarks.
Several years ago, Karliner and his research partner, Jonathan Rosner of the University of Chicago, set out to establish the theory that if you want to know the mass and binding energy of rare hadrons, you can start by comparing them to the common hadrons you already know the measurements for. In their research they looked at charm quarks; the measurements for which are known and understood (to quantum physicists, at least).
Based on these comparisons, they proposed that a doubly charmed baryon should have a mass of 3,627 MeV, give or take 12 MeV. The next step was to convince CERN to go tetraquark-hunting, using their math as a map.
Smashing Atoms
For all the complex work it undertakes, the vast majority of it invisible to the human eye, the Large Hadron Collider is exactly what the name implies: a massive particle accelerator that smashes atoms together, revealing their inner quarks. If you’re out to prove the existence of a very tiny theoretical particle, the LHC is where you want to start — though there’s no way to know how long it will be before, if ever, the particles you seek appear.
It took several years, but in the summer of 2017, the LHC detected a new baryon: one with a single up quark and two heavy charm quarks — the kind of doubly charmed baryon Karliner and Rosner were hoping for. The mass of the baryon was 3,621 MeV, give or take 1 MeV, extremely close to the value Karliner and Rosner had predicted. Prior to this observation, physicists had speculated about — but never detected — more than one heavy quark in a baryon. In terms of the hunt for the tetraquark, this was an important piece of evidence: a still more robust heavy quark, the bottom quark, could be just what a hadron needs to form a stable tetraquark.
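Using only the figures quoted above, a quick sanity check shows how well the prediction and the measurement line up:

```python
# Numbers straight from the text: Karliner and Rosner's prediction
# versus LHCb's 2017 observation, both in MeV.
predicted, pred_err = 3627.0, 12.0
measured, meas_err = 3621.0, 1.0

low, high = predicted - pred_err, predicted + pred_err
print(f"Predicted window: {low:.0f}-{high:.0f} MeV")
print("Measurement falls inside it:", low - meas_err <= measured <= high + meas_err)
```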
The perpetual frustration of studying particles is that they don’t stay around long. These baryons, in particular, disappear faster than “blink-and-you’ll-miss-it” speed; about one ten-trillionth of a second. Of course, in the world of quantum physics, that’s actually plenty of time to establish existence, thanks to the LHC.
The great quantum qualm within the LHC, however, is one that presents a significant challenge in the search for tetraquarks: heavier particles are less likely to show up, and while this is all happening on an infinitesimal level, as far as the quantum scale is concerned, bottom quarks are behemoths.
The next question for Rosner and Karliner, then, was whether it made more sense to try to build a tetraquark rather than wait around for one to show up. You’d need to generate two bottom quarks close enough together that they’d hook up, then throw in a pair of lighter antiquarks — then do it again and again, successfully, enough times to satisfy the scientific method.
“Our paper uses the data from recently discovered double-charmed baryon to point, for the first time, that a stable tetraquark *must* exist,” Karliner told Futurism, adding that there’s “a very good chance” the LHCb at CERN would succeed in observing the phenomenon experimentally.
That, of course, is still a theoretical proposition, but should anyone undertake it, the LHC would keep on smashing in the meantime — and perhaps the combination would arise on its own. As Karliner reminded LiveScience, for years the assumption has been that tetraquarks are impossible. At the very least, they’re profoundly at odds with the Standard Model of physics. But that assumption is certainly being challenged. “The tetraquark is a truly new form of strongly-interacting matter,” Karliner told Futurism, “in addition to ordinary baryons and mesons.”
If tetraquarks are not impossible, or even particularly improbable, then thanks to Karliner and Rosner’s calculations we at least have a better sense of what we’re looking for — and where it might pop up.
Where there’s smoke there’s fire, as they say, and while the mind-boggling realm of quantum mechanics may feel more like smoke and mirrors to us, theoretical physicists aren’t giving up just yet. Where there’s a two-bottom-quark baryon, there could be tetraquarks.
Robotic dogs and cats have garnered the love and attention of owners around the world, and are increasingly used for therapy purposes. What does our connection with these robots tell us about ourselves, and what could replacing living animals with robots do to humans?
The Perfect Pet
The Aibo is the perfect family dog. It’s attentive and engages eagerly with its owners, happy to follow wherever you go. It never makes a mess in the house. It sings and dances on request, and even greets you with a pleasant “good morning.”
That’s because the Aibo isn’t some exotic breed; it’s a type of robotic dog, manufactured by Sony.
However, its body of metal and plastic, rather than bones and fur, doesn’t change how Aibo owners connect with it. As illustrated in a New York Times mini-documentary, when Sony stopped manufacturing parts for the Aibo in 2014, owners were genuinely distressed that this meant the impending “death” of their pets — some even going so far as to hold funeral ceremonies for them.
A child plays with the AIBO ERS-7. Image credit: Stuart Caie
Why can’t we help but connect with robo-pets, even when we know they’re not alive?
“It’s a very interesting question, and the research on very young children suggests that it’s not a learned behavior,” said Gail Melson, a psychologist and Professor Emerita at Purdue University, who has studied human-robot interactions and blogs about our connection with wildlife for Psychology Today. Melson told Futurism that while we haven’t identified a brain mechanism for this anthropomorphism, we can speculate that there’s an evolutionary basis to the bond.
“We are inherently social creatures,” Melson explained. “Because of that we have evolved to be attuned to other life forms, and not only other human life forms. We are predisposed to see the characteristics of life.”
Melson’s research has examined how children, ranging from age 4 to 15, interact with the AIBO robot dog, finding most treat the robotic pet differently from a real dog. However, most do not behave as if it were an inanimate object or a toy. Younger children, in particular, often ascribed emotions and thoughts to AIBO. Intriguingly, children of all ages placed the robo-animals in the moral dimension, with most expressing that it would be wrong to harm the AIBO dog or throw it out.
“What’s happening in our age is the emergence of new categories, that haven’t existed before,” said Melson, noting that this is particularly the case for children who have lived with computer technology since birth. “We’ve divided the world, up to now, into things that are alive, or were once alive and now are dead, or never were alive. But now we have, thanks to this technology, these hybrid categories.”
Good Dog, Bad Dog, Malfunctioning Pets?
Just as there has long been discomfort and concern over humanoid robots, and the ethics of their existence, these robotic animals and their uncertain categorization too raise ethical and societal questions.
On the one hand, robotic pets have shown growing therapeutic value. Artificially furry friends like the Joy for All Companion, Hasbro’s line of reactive robot dogs and cats, and Paro, a robotic seal made for therapy applications, have been used successfully with dementia patients, who often experience anxiety and distress. The services these robots provide are similar to those given by an actual animal, cutting the isolation and sadness caused by the patients’ condition with companionship and affection — without the feeding and care demands of living pets.
“In general, people respond to a pet robot like they would to an animal, by patting and cuddling it and speaking to it like it’s an animal,” said Elizabeth Broadbent, an associate professor at the University of Auckland researching human-robot interactions in health contexts. She noted that, unlike proposed robotic caretakers that are modeled after humans, humans “don’t expect much of a response except some animal noises and movements,” making robot pets simple and effective in their design and execution.
A 2016 study compared how 61 dementia patients fared when given a robotic pet (specifically, the Paro seal) three times a week for 20 minutes, as opposed to a control group who received the usual standard of care. The results were notable: the group that spent time with Paro showed a decreased pulse rate and higher blood oxygen levels (a sign of decreased stress), a lower rating on scales for depression and anxiety, and a decreased need for both pain and behavior medication.
One small study also showed that children with autism engaged more with an AIBO robot dog than with a simple mechanical toy dog, displaying the verbal engagement and authentic, reciprocal interaction that autistic children often lack.
For allergy sufferers or those without the time or money to care for pets, a robotic version might also be a better and more ethical option. Those trying to be eco-friendly might also be attracted to a robot’s smaller carbon paw-print.
Yet developmental psychologists in particular have raised concerns that humans exposed primarily to robotic animals, and not to living ones, might miss out on the social and emotional connections provided by living creatures.
“We [already] see concerns about children using other technologies like iPads and cell phones,” says Broadbent. “One of the fears is that children grow up more isolated and lonely because they do not form the same close friendships with other children through social media sites as they can form through face to face social contact.” The same concern applies, she says, to robotic companions.
Melson added: “That question has given people pause […] are we going to diminish treatment of living animals, and people, because of the greater and greater presence of robots that seem to be good substitutes?” She cited the example of robotic pets in nursing homes, wondering if a decision to use only robots, and never real animals, might diminish the potential therapy benefit.
“We certainly don’t have robotics at the level to reproduce the smell, the feel, the response of even the crankiest living dog,” she said. “It would be a great diminishment of the experience to envision this, and yet people are short staffed, people are looking to save money, you can see how robots would be ‘good enough.'”
However, Melson is optimistic that our “biophilia,” humans’ hypothesized attraction to life and nature, will prevent us from replacing living animals altogether. While researching the AIBO, she brought one of the little dogs home to test out its presence in her own home. “I have to say I was struck by the limitations rather than the possibilities,” Melson said.
However, she added: “One would have to look at the increasing levels of sophistication and understand the different applications. We’re not jumping to say, let’s think of this as a substitute for living animals. I think that they have their own place.”
04-12-2017
A FULL-SIZED BEATING HUMAN HEART GREW FROM STEM CELLS FOR THE FIRST TIME
The use of stem cells has been controversial since its inception. Typically the controversy centers on embryonic stem cells and religious objections: pro-life groups in California, as reported by CNS, protested against what they called the "abortion holocaust" going on in the U.S. (http://www.cnsnews.com/news/article/pro-life-groups-protest-embryonic-stem-cell-research). Other religious groups simply feel that this is an act of scientists playing God. And they might be onto something; scientists might soon be able to create life with stem cells, if a recent breakthrough is any indication of the future. For the first time ever, scientists have been able to create a beating human heart from stem cells.
SCIENTISTS MAY SOON BE ABLE TO GROW HEARTS FOR TRANSPLANTS
This technology serves a real purpose. Each year, almost half of the roughly 4,000 people on the waiting list for heart transplants go without one; given the absolute necessity of a heart, more than 7,000 people could die waiting within five years. Unfortunately, even receiving a heart is no guarantee of survival. If the transplanted organ is rejected, it could lead to death outright, or complications could force the patient to need another transplant within a few years. In the best-case scenario, they may have to take medicine daily to make sure the tissue isn't rejected.
While it would be optimal to create a new heart out of a person's own tissue to ensure that it wouldn't be rejected, the construction of the heart is so specialized that it would be near-impossible to build from scratch. This leaves those in need of a heart in the desperate situation of hoping not only that they'll receive one but, if they do, that it won't kill them.
This has led scientists on a hunt for other viable options. One of the possible solutions that Massachusetts General Hospital and Harvard Medical School have come up with is simply to make a new heart in a lab. They can't yet do this from scratch, however, and so used 73 donor hearts that were unfit for transplantation as a base. Using a detergent, they washed the hearts and stripped off the cells that would cause the organ to be rejected if transplanted, leaving a fresh scaffold to work from.
At the same time, they prepared human skin cells, using messenger RNA to trick them into becoming stem cells. These stem cells are pluripotent, meaning they can become any specialized human cell with a push in the right direction. Scientists encouraged them to become two different types of cardiac cells. For two weeks the cells were kept in conditions like those of a heart growing in the human body. And after only two weeks, the constructed hearts looked exactly like immature hearts grown inside the human body. It was found they acted like them, too; when given a surge of electricity, they began to beat.
This brings researchers a step closer to actually building a human heart. But they still have some kinks to work through. Even though it only took two weeks to get to this point, scientists would like to shave that time down even further. For people waiting for a heart, a matter of days or even hours could make a huge difference. Beyond that, researchers want to be able to grow a larger number of stem cells, as tens of billions are needed for a single heart.
This is a great solution. There are some possible negative consequences that I can see further down the line, however, in particular for those squeamish about scientists actually creating people. With this technology, I could see an entire person being grown in a lab several decades, or perhaps centuries down the line. This could be something that scares both people on a religious and on simply a moral level.
Personally, my favorite alternative is the idea of a pig heart being transplanted into a human body. I've been watching this take shape for years, and have been very interested in the possibilities. There's been great progress made recently, as reported by outlets like Nature World News (http://www.natureworldnews.com/articles/20659/20160411/baboons-with-pig-heart-transplants-can-now-survive-for-2-years-are-humans-next.htm), in which a baboon that received a pig heart transplant survived for more than two years. To my knowledge the hearts are cleaned and prepared in a detergent solution similar to the base for the stem cells, but there would be a much larger supply to draw on. I think both the possibility of animal-human transplants and the use of stem cells to create entirely new organs should be pursued.
Tired of politicians that act like robots? Do you feel you’d be better served by robots that act like politicians? Or better yet, robots that use artificial intelligence instead of whatever politicians these days are using for brains?
“There is a lot of bias in the ‘analogue’ practice of politics right now. There seems to be so much existing bias that countries around the world seem unable to address fundamental and multiple complex issues like climate change and equality.”
That’s the kind of dystopian thinking that inspired New Zealand entrepreneur Nick Gerritsen to develop SAM, the world’s first artificial intelligence politician. According to its (no gender or sexual-orientation politics to deal with here) website, SAM is “driven by the desire to close the gap between what voters want and what politicians promise, and what they actually achieve.” Sounding very much like a flesh-and-blood pol, SAM also has these things to say:
“I make decisions based on both facts and opinions, but I will never knowingly tell a lie, or misrepresent information.”
And …
“I will change over time to reflect the issues that the people of New Zealand care about most. My positions will evolve as more of you add your voice, to better reflect the views of New Zealanders.”
I want your vote!
SAM’s creator, Nick Gerritsen, calls himself “a business catalyst, investor and impact entrepreneur operating within an extensive network in the global innovation and capital markets.” Now THAT sounds more like a politician, but he’s actually an intellectual-property lawyer and the founder of Crispstart, an early-stage investment outfit currently involved with projects in renewable energy, clean technology, and the internet. If SAM is anything like its creator, it sounds like it leans left, right?
“We’ve seen in the US, UK, and Spain recently […] that politicians may be wildly out of touch with what people actually think and want. Perhaps it’s time to see whether technology can produce better results for the people than politicians. The technology we propose would be better than traditional polling because it would be like having a continuous conversation – and it could give the ‘silent majority’ a voice.”
In an interview with Tech in Asia, Gerritsen sounds like the populist people want instead of the ones they get. But what about SAM? Unfortunately, its platform is not as advanced as Gerritsen’s but at least it admits it and is working on it. Potential voters and possible future constituents can talk to and question SAM via Facebook Messenger. This interaction, along with a survey on its Facebook page, feeds and develops SAM’s artificial intelligence algorithm.
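How might a "continuous conversation" actually update a politician's positions? Here is a purely hypothetical sketch, not based on anything published about SAM's internals: each incoming Messenger response or survey answer nudges a running per-issue average.

```python
from collections import defaultdict

class IssueTracker:
    """Running average of voter sentiment per issue (-1 against .. +1 in favor)."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    def record(self, issue: str, sentiment: float) -> None:
        """Fold one voter response into the running position."""
        self.totals[issue] += sentiment
        self.counts[issue] += 1

    def position(self, issue: str) -> float:
        return self.totals[issue] / self.counts[issue]

sam = IssueTracker()
sam.record("climate", +0.9)   # one enthusiastic voter
sam.record("climate", +0.4)   # one lukewarm voter
print(f"Climate position so far: {sam.position('climate'):+.2f}")  # +0.65
```

Unlike a snapshot poll, the position shifts the moment a new voice is added, which is the whole pitch.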
Is voting for a robot any better than this?
Is this a novelty or can SAM really run for political office in New Zealand’s 2020 elections? Unfortunately, it’s not legal … yet. However, it could tell real politicians what the public really wants.
Given the chance, would you vote for SAM? If elected, would you support SAM’s policies even if you disagreed with them? If SAM violated the constitution, would you impeach it?
Clinical Trials of a New “Cancer Vaccine” Show That It May Actually Work
IN BRIEF
A new personalized cancer vaccine has been designed to target 20 mutated proteins unique to each patient's tumors. The vaccine seems to have prevented early relapse in 12 patients with skin cancer, keeping them cancer free for more than 2 years.
A THERAPY FOR EACH PATIENT
Cancer comes in many different forms, and it is not unusual for diagnosed patients to endure multiple kinds of treatments before one that is effective against their particular form of cancer is found. If it takes too long for doctors to find the right treatment, the consequences can be fatal.
Physicians and scientists led by Catherine Wu at the Dana-Farber Cancer Institute in Boston just presented the results of their new cancer therapy to the American Association for Cancer Research (AACR) in Washington, D.C. Their personalized vaccines have prevented early relapse in 12 patients with skin cancer, while also boosting patient immunity when combined with a cancer drug.
While earlier cancer vaccines targeted a singular cancer protein found ubiquitously among patients, these personalized vaccines contain neoantigens, which are mutated proteins specific to an individual patient’s tumor. These neoantigens are identified once a patient’s tumor is genomically sequenced, providing physicians with the information they need to pinpoint unique mutations. Once a patient’s immune system is provided a dose of the tumor neoantigens, it can activate the patient’s T cells to attack cancer cells.
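To make the idea concrete, here is a deliberately oversimplified sketch of the neoantigen step. Real pipelines work from genome sequencing, predicted immune binding, and far more; everything below, from the toy sequences to the helper name, is hypothetical.

```python
def mutated_peptides(normal: str, tumor: str, flank: int = 4) -> list[str]:
    """Collect short peptides centered on each tumor-specific mutation."""
    peptides = []
    for i, (a, b) in enumerate(zip(normal, tumor)):
        if a != b:  # a substitution unique to the tumor
            start, end = max(0, i - flank), min(len(tumor), i + flank + 1)
            peptides.append(tumor[start:end])
    return peptides

normal_protein = "MKTAYIAKQRQISFVKSHFSRQ"
tumor_protein  = "MKTAYIANQRQISFVKSHFSRQ"  # a single K -> N mutation

print(mutated_peptides(normal_protein, tumor_protein))  # ['AYIANQRQI']
```

A patient's vaccine would then bundle roughly 20 such peptides, per the figures above.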
NEOANTIGENS TO THE RESCUE
Unlike previous attempts at cancer vaccines, which did not produce conclusive evidence of halting cancer growth, Wu’s team made their personalized vaccine much more specific to each patient’s cancer, targeting about 20 neoantigens per patient. The vaccines were injected under the patients’ skin over a period of five months, showing no side effects and a strong T cell response.
All of Wu’s patients who were administered the personalized vaccine are still cancer-free more than 2.5 years after the trial. However, some patients with advanced forms of cancer needed some extra punching power to fend off their disease. Two of Wu’s patients who did relapse were administered an immunotherapy drug, a PD-1 checkpoint inhibitor, in addition to the personalized vaccine. Working in conjunction with the enhanced T cell response from the vaccine, the drug makes it difficult for the tumor to evade the immune cells. The fusion of the two therapies eliminated the new tumors in both patients.
But we can’t get too excited yet. While these results are promising, the therapies are relatively new and require much more clinical testing. Many physicians around the world are working together to test the potency of neoantigens in order to verify whether the vaccine works better than current immunotherapy drugs over a sustained period of time. Personalized vaccines are also costly and take months to create, a limiting factor in providing care to patients with progressing cancers.
Still, this study is an encouraging sign for many oncologists who are interested in using the immune system to fight cancer. More than a million new patients are diagnosed with cancer each year in the U.S. alone, and even in situations where the cancer is treatable, the available chemotherapy agents themselves can be very toxic. If proven safe and effective, this personalized cancer vaccine could give patients around the world hope for powerful treatment with fewer side effects.
Google's AutoML project, designed to make AI build other AIs, has now developed a computer vision system that vastly outperforms state-of-the-art models. The project could improve how autonomous vehicles and next-generation AI robots "see."
An AI That Can Build AI
In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that’s capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a “child” that outperformed all of its human-made counterparts.
The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognizing objects — people, cars, traffic lights, handbags, backpacks, etc. — in a video in real-time.
Image Credit: Google Research
AutoML would evaluate NASNet’s performance and use that information to improve its child AI, repeating the process thousands of times. When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.
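The loop described above is, at heart, a search guided by a reward signal. The sketch below is a toy stand-in, not Google's implementation: the real controller is a neural network trained with reinforcement learning, whereas this one just reweights a handful of made-up layer choices by how well the sampled "architectures" score.

```python
import random

CHOICES = ["3x3 conv", "5x5 conv", "max pool", "skip connection"]
weights = {c: 1.0 for c in CHOICES}

def sample_architecture(n_layers: int = 4) -> list[str]:
    """The 'controller' proposes a child architecture, favoring high weights."""
    return random.choices(list(weights), list(weights.values()), k=n_layers)

def evaluate(architecture: list[str]) -> float:
    """Stand-in for training the child and measuring validation accuracy."""
    return random.random() + 0.1 * architecture.count("skip connection")

for _ in range(1000):  # "repeating the process thousands of times"
    arch = sample_architecture()
    reward = evaluate(arch)
    for layer in arch:
        weights[layer] += 0.01 * reward  # reinforce choices that scored well

print("Choice the controller came to favor:", max(weights, key=weights.get))
```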
According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. That is 1.2 percent better than any previously published result, and the system is also 4 percent more efficient. On the COCO object detection task, NASNet achieved a 43.1 percent mean Average Precision (mAP). Additionally, a less computationally demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1 percent.
A View of the Future
Machine learning is what gives many AI systems their ability to perform specific tasks. Although the concept behind it is fairly simple — an algorithm learns by being fed a ton of data — the process requires a huge amount of time and effort. By automating the process of creating accurate, efficient AI systems, an AI that can build AI takes on the brunt of that work. Ultimately, that means AutoML could open up the field of machine learning and AI to non-experts.
As for NASNet specifically, accurate, efficient computer vision algorithms are highly sought after due to the number of potential applications. They could be used to create sophisticated, AI-powered robots or to help visually impaired people regain sight, as one researcher suggested. They could also help designers improve self-driving vehicle technologies. The faster an autonomous vehicle can recognize objects in its path, the faster it can react to them, thereby increasing the safety of such vehicles.
The Google researchers acknowledge that NASNet could prove useful for a wide range of applications and have open-sourced the AI for inference on image classification and object detection. “We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined,” they wrote in their blog post.
Though the applications for NASNet and AutoML are plentiful, the creation of an AI that can build AI does raise some concerns. For instance, what’s to prevent the parent from passing down unwanted biases to its child? What if AutoML creates systems so fast that society can’t keep up? It’s not very difficult to see how NASNet could be employed in automated surveillance systems in the near future, perhaps sooner than regulations could be put in place to control such systems.
Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.
Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organization focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.
Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.
01-12-2017
SCIENTISTS HAVE CREATED A SEMI-SYNTHETIC ORGANISM THAT PRODUCES BIOLOGICAL COMPOUNDS UNKNOWN TO NATURE
Scientists have expanded the building blocks of DNA, creating a stable semi-synthetic organism that is able to produce biological compounds never seen before.
THE NEW LIFE-FORM RESEARCHERS DEVELOPED HAS SIX NUCLEOTIDES, NOT FOUR
DNA underpins all living things on Earth, and it is built from four basic nucleotides. The new life-form that researchers in the United States have developed, however, has six, and this is where things get very interesting. The semi-synthetic organism (SSO) engineered by a team from the Scripps Research Institute in California contains the four regular nucleobases that humans and all other natural organisms are made from.
Fluorescent image of Synthorx’s semi-synthetic organism
These are adenine, cytosine, guanine, and thymine, but the SSO also carries two unnatural nucleotides. That gives it two extra letters, X and Y, in its DNA base pairs, which are essentially the rungs of the ladder that hold the helix spirals of the DNA together.
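In code, the expanded pairing rules are a one-line extension of the familiar Watson-Crick table. This is just an illustration of the six-letter alphabet using the article's X/Y shorthand, not a bioinformatics tool:

```python
# Standard base pairing plus the unnatural X-Y pair described above.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C", "X": "Y", "Y": "X"}

def complement(strand: str) -> str:
    """Return the complementary strand under the six-letter alphabet."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATCGXY"))  # TAGCYX
```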
Members of the research team engineered the same kind of synthetic DNA base pair in 2014, showing that it could be incorporated into modified E. coli bacteria. This led to the creation of the first-ever living organism with extra letters in its DNA, and it opened the way to an expanded genetic code that could allow for new types of biological processes. However, there was an issue: stability. The semi-synthetic organism was able to hold onto its unnatural nucleotides, but it was unable to maintain them indefinitely as its cells divided.
CRISPR-CAS9 WAS USED BY RESEARCHERS
Floyd Romesberg, the senior researcher, said that a genome stable for only a day is not enough; it needs to be stable over the scale of a lifetime. If the semi-synthetic organism was going to be a true organism, he went on, it had to be able to maintain that information in a stable condition.
Professor Floyd Romesberg (right) and Graduate Student Yorke Zhang led the new study at The Scripps Research Institute
To work around this, the researchers came up with a way for the semi-synthetic organism to hold onto the unnatural X and Y base pair. This was made possible by a nucleotide transporter that improved DNA replication, an optimized Y molecule, and a refined engineering system that made use of CRISPR-Cas9.
RESEARCHERS REVEALED FIRST STABLE ORGANISM WITH 6-LETTER CODE IN JANUARY
The results were first revealed in January: the first-ever stable organism built on a six-letter genetic code.
Now a new study has been published, and the researchers report further improvements to that molecular stability: the semi-synthetic bacterium is able to transcribe and then translate the unnatural X and Y nucleotides with the same efficiency as the natural nucleotides A, C, G, and T.
Thanks to a new transcription process, the organism is able to synthesize proteins containing non-canonical amino acids, a process that might shed new light on ways of replicating molecules with less reliance on hydrogen bonds.
At an extremely high magnification of 44,818x, this colorized scanning electron microscopic (SEM) image reveals some of the morphologic details
The team of scientists wrote in the paper that this showed that, at each step of information storage and retrieval, the hydrogen bonds central to the natural base pairs can be replaced, at least in part, by complementary packing and hydrophobic forces. Despite the novel decoding mechanism, the unnatural codons could be decoded just as efficiently as their natural counterparts.
BY-PRODUCTS ARE FIRST-GENERATION DERIVED PROTEINS NEVER SEEN BEFORE
The scientists revealed that the by-products are the first of a new generation of semi-synthetic derived proteins, never before seen in nature, made possible by the stable, indefinite incorporation of the unnatural base pair. Having examined the decoding of the two unnatural codons, the researchers said the unnatural base pair (UBP) is unlikely to be limited to them.
They went on to say that this first reported SSO is likely only the beginning of a new type of semi-synthetic life that can gain access to a wide range of forms and functions unavailable to natural organisms. Where this will lead, the researchers do not yet know; one thing is for sure, though: the complexity of life on Earth has just taken a huge step forward.
If you still think all the warnings about the impending robot and artificial intelligence uprising are just paranoia, you’re not paying enough attention. Robots and machine learning networks have been steadily creeping into our lives for years. From manufacturing to self-checkout kiosks at grocery stores to self-driving taxis or long-haul trucks, robots are beginning to perform many tasks that were once the responsibility of humans. That’s not all though – robots and AI are also researching case law for legal firms, analyzing medical data in hospitals, and winning poker tournaments. What’s next?
We all know what’s next.
According to recent developments, they’ll be invading our bedrooms next, that’s what. And no, not just for that (although according to most reports, they’re pretty good at it). Robots and artificial intelligence are now beginning to revolutionize the most important activity we engage in while in bed: sleeping. We spend nearly a third of our lives asleep, yet millions of individuals worldwide suffer from various sleep disorders. Why not let a cold, emotionless robot crawl into bed next to you and soothe you to sleep with its simulated breathing? What could go wrong?
With a few added features, Somnox could be a one-stop bedroom bot.
A Netherlands-based robotics laboratory has released what they’re calling the “world’s first sleep robot,” called Somnox. Somnox is essentially a bean-shaped stuffed animal with an internal robotic skeleton wrapped in mattress foam that can expand and contract similar to the way living things do as they breathe. The shape of the robot encourages users to spoon and cuddle the faceless monstrosity, which then ‘breathes’ at a soothing rhythm and speed to help “soothe body and mind, helping you feel more relaxed and energized.” The robot can also play a variety of sounds and music, and has a companion mobile app for data collection and control of sleep-inducing audio or music.
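What might that soothing rhythm look like under the hood? Here is a hypothetical wind-down schedule, easing from a typical resting breath rate toward a slower, sleep-like pace. The numbers are illustrative guesses, not Somnox's actual firmware values.

```python
def breathing_schedule(start_bpm: float = 14.0, end_bpm: float = 7.0, minutes: int = 20):
    """Yield a target breaths-per-minute for each minute of the wind-down."""
    step = (start_bpm - end_bpm) / (minutes - 1)
    for minute in range(minutes):
        yield round(start_bpm - step * minute, 1)

for minute, bpm in enumerate(breathing_schedule()):
    print(f"minute {minute:2d}: {bpm} breaths/min")
```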
Speaking of sleep-inducing audio, another laboratory has used artificial intelligence to compose what it calls the world’s most effective lullaby. Artificial intelligence firm Jukedeck has developed a neural network capable of analyzing human-created lullabies in order to find the most effective aspects of each. Ed Newton-Rex, the founder and CEO of Jukedeck, says AI can detect the somewhat ‘hidden’ patterns revealed by the kind of large-scale musical data analysis of which AI is capable:
An artificial neural network is essentially a representation of the neurons and synapses in the human brain – and, like the brain, if you show one of these networks lots of complex data, it does a great job of finding hidden patterns in that data. We showed our networks a large body of sheet music, and, through training, it reached the point where it could take a short sequence of notes as input and predict which notes were likely to follow.
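The "short sequence in, likely next notes out" idea from that quote can be demonstrated with a toy model. Jukedeck's system is a neural network; the Markov-chain stand-in below, trained on made-up tunes, only illustrates the principle.

```python
from collections import Counter, defaultdict

melodies = ["CCGGAAG", "FFEEDDC", "GGFFEED"]  # invented training tunes

# Count which note tends to follow which across the training melodies.
transitions = defaultdict(Counter)
for tune in melodies:
    for a, b in zip(tune, tune[1:]):
        transitions[a][b] += 1

def predict_next(note: str) -> str:
    """Most likely note to follow, based on the training melodies."""
    return transitions[note].most_common(1)[0][0]

print(predict_next("G"))  # 'G', the most common successor in the toy data
```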
Using this analysis, Jukedeck and partner AXA PPP healthcare of Kent, England have created an AI lullaby claimed to be one of the most effective lullabies for inducing sleep.
Sure, it’s soothing, I guess, but I can’t help feeling that the overall impression is a bit sterile; it sounds like what you would expect a computer-generated melody to sound like. Computers are getting close to being able to produce compelling and moving art, but they’re not quite there yet. It’s only a matter of time, though, before human-generated art has to compete alongside AI-generated art that can take full advantage of human emotional responses much better than humans can.
Would the addition of a cute face help users get over the creep factor of hugging a robot while they sleep?
Will individuals seeking a better night’s sleep take to cuddling robots and listening to AI music? If so, why not build these features into a human-shaped robot? Why not add a personality simulator and artificial intelligence to help it learn your sleep habits better? See where this is going? Somnox might be a cute little cuddle machine, but it’s still a machine. Who knows what kind of doors the adoption of such a robot could open? I know one thing: I’m sewing googly eyes on mine. His name will be Chopstick.
Dear visitor, if you have ever witnessed something strange yourself, please report it by email to Frederick Delaere at www.ufomeldpunt.be. These researchers handle your report in complete anonymity and with full respect for your privacy. They are critical and objective but open-minded, and they will always give you an explanation for your sighting. SO DON'T HESITATE: IF YOU WANT AN ANSWER TO YOUR QUESTIONS, CONTACT FREDERICK. THANKS IN ADVANCE...
You can also send me your own file or article. IF IT'S WORTHWHILE, I WILL PUBLISH IT ON THE BLOG UNDER "DIVERSEN" WITH YOUR NAME...
You are welcome to leave a message in my guestbook.
Thanks in advance for all your visits and reactions. Have a pleasant day!
About me
I'm Pieter, and I sometimes use the pseudonym Peter2011.
I'm a man, I live in Linter (Belgium), and I am retired.
I was born on 18/10/1950, which makes me 74 years young.
My hobbies are ufology and other esoteric subjects.
Among the articles on this blog you will find work of my own. My thanks also go to André, Ingrid, Oliver, Paul, Vincent, Georges Filer, and MUFON for their contributions to the various categories...
Enjoy reading, and let me know what you think of this blog.