The purpose of this blog is to create an open, international, independent, and free forum where every UFO researcher can publish the results of his or her research. The languages used for this blog are Dutch, English, and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse an addition or an article if it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that conduct in-depth research, even if they are sometimes critical or skeptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the excellent website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, the MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter is not only a former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which allows us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match our group's rich history and expertise. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology, and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or would you like to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
24-03-2019
If the Space Force Won’t Fight Aliens, Who the Hell Will?
by Kyle Mizokami
Late last week, military news site Task & Purpose confirmed a disturbing fact: the newly created U.S. Space Force has no intention of fighting aliens. Despite the recent uptick of military UFO sightings, the Pentagon appears uninterested (at least officially) in the possibility of hostile aliens. But if an alien invasion does take place, which arm of the Pentagon would respond? The answer: probably all of them.
During a recent Pentagon roundtable, Task & Purpose’s Pentagon reporter Jeff Schogol asked if the Space Force “is concerned about threats posed by extraterrestrial intelligence.” The official answer he got back? “No.”
Schogol’s question was asked with tongue firmly planted in cheek, but the revelation last year that U.S. Navy fighter jets encountered alleged UFO craft in 2004 and again in 2015—in both instances appearing on radar and leaving behind video evidence—makes one wonder.
If the unidentified flying objects described by Navy pilots, as well as military and civilian personnel for the past seventy years, are really of extraterrestrial origin and unfriendly, how would the Pentagon deal with them?
If UFOs suddenly descended from the skies, toasting the Statue of Liberty, the Great Mall of America, and the Golden Gate Bridge with death rays, the Pentagon would need to convene some sort of study group to quickly determine what kind of threat it was dealing with. If that happens, forget the Air Force.
Ironically, the service that would most likely take the lead is the U.S. Navy.
Why the Navy? Aliens would likely come from vast distances, traveling light years in long distance voyages, to smash puny humans. The U.S. Navy is unique among the services in planning similar, though much, much shorter voyages. Both submarines and UFOs deal with pressure—in the case of submarines the pressure is on the outside, while in space the pressure is on the inside of the vehicle. From an operational and technical standpoint, aliens and sailors have a few things in common.
Would all of this firepower matter in a fight with aliens?
Image: U.S. Navy (Getty)
There are other reasons the Navy might take the lead. Seventy-one percent of the Earth’s surface is covered by water, and if aliens operated from the water (remember, the 2004 sighting included reports of a 737-sized object on the surface of the ocean) the Navy is unique in having manned aircraft, surface ships, and submarines prowling above, on, and below the surface of the ocean. The Navy could also sail to the most remote locations in the world’s oceans, establishing a military presence for weeks or months, to investigate and monitor for enemy activity.
The Air Force could operate against aliens, but the service’s fighters and bombers could only remain on station for mere minutes or hours before returning to base. Against a terrestrial threat this isn’t really a big deal, but against an alien threat we know nothing about—and according to the 2004 incident, theoretically capable of traveling extraordinary distances in a blink of an eye—such a force will be less useful.
If humans could lure aliens into a set-piece battle the Air Force could bring a lot of firepower, but how one lures aliens into battle is anyone’s guess. In the meantime the Space Force, nestled under control of the Air Force, would contribute to the alien war by maintaining the U.S. military’s network of position, navigation, and timing/GPS satellites, communication satellites, and other space-based assets.
US Army Abrams tanks and Bradley fighting vehicles exercising in Estonia, 2017.
Photo: Sean Gallup (Getty)
The Army would be the service responsible if aliens attempted a landing in the United States, or presumably one of our allies. The Army’s 10 combat divisions would spring into action, attempting to destroy the aliens with fire and maneuver. It would be in many ways similar to countering an airborne landing, with the Army attempting to destroy the alien’s landing zone and prevent the flow of alien reinforcements. The Marines could also get in on the alien fighting, particularly overseas in Asia, Europe, or even the Middle East—though one would like to think aliens would be smart enough to avoid that region and the prospect of their own 18-year war altogether.
Of course, all of this is contingent on the U.S. military being on par with alien technology... which, frankly, is extremely unlikely. The universe is billions of years old, and other races could easily have a head start of a million years or more on us. And certainly, any species capable of interstellar flight is far more technologically advanced.
Consider that a handful of 21st century tanks could crush an army from the 11th century, or even the 19th century for that matter. Even a difference of a thousand years would be ample enough to ensure humanity’s defeat from even a minor alien expedition/hunting trip/bachelor party.
The entire U.S. military could have the same effectiveness against aliens as cavemen—or in this case cosplayers pretending to be cavemen at Comic-Con—would have against the U.S. military.
Image: Daniel Zuchnick (Getty)
If aliens do exist, ultimately it may not matter if they are hostile or not. Our destruction at their hands would be about as inevitable as destruction from an extinction-level meteor impact. They could even be friendly, the combination of advanced, destructive technology and violent tendencies leading to intelligent life self-screening itself from interstellar travel. (That would be bad news for humanity.) The “UFOs” people are seeing could even be top secret U.S. government craft. The aliens could be us. In the end, maybe it doesn’t matter if the Pentagon has a plan to fight aliens after all.
Photographs are an amazing thing. We now take them for granted, but have you ever stopped to think about how incredible they truly are? They manage to freeze one moment in time forevermore, a peek at a split second that we can never get back, but which remains eternally etched upon that picture as if it never left. We have come a long way since the first attempts to capture images on film, and it is strange to think that less than 200 years ago the idea of taking a picture of any kind seemed like magic. How many scenes and images throughout human history have been lost, dying with those who last saw them, before we had the capability of preserving them for all to see? Vast swaths of history have been visually lost to us, from a time before cameras and Instagram. Looking at old photographs can be a surreal experience, a step through time, and here we will take a look at some major pioneering firsts in the world of pictures, a peek through the ages to another era.
When it comes to fascinating photographs of the past, perhaps it is best to start at the very beginning, with the first one ever taken, or at least the oldest surviving one. This particular picture was taken around 1826 by inventor Joseph Nicéphore Niépce, using a special, revolutionary method (for the time) that involved a pewter plate covered in an asphalt derivative. The process is thought to have taken several days of exposure, and the result is a view out of a window at Saint-Loup-de-Varennes, France, in a time long forgotten but forever preserved in this image.
The oldest surviving photograph
Moving on to other early firsts, we have the 1838 photograph taken by Louis Daguerre at the Boulevard du Temple in Paris, France, near the Place de la République, in what is thought to be the first photo ever taken of a human being. At the time, the process required at least 10 minutes of exposure, meaning that moving people did not show up, and for quite a long time this was considered just an ordinary landscape photo, until someone noticed that a human figure can be seen in the bottom left. It is believed that the unidentified man had been standing still long enough to show up because he was having his shoes shined. Another, blurrier figure of a person can also be seen, although not nearly as clearly. Daguerre was actually the inventor of the device he used to take it, called a “daguerreotype,” which utilized silver plates and mercury fumes and was used to take many of the earliest photos.
First photograph of a person. You can see him in the lower left.
From the following year, in 1839 we have what is considered to be the world’s first selfie, taken by a student in Pennsylvania named Robert Cornelius, who would also be instrumental in further refining and developing the daguerreotype. He tested it out by taking a photo of himself as he stood in front of a store front in Philadelphia, standing completely still for an estimated 10 to 15 minutes to capture this historic shot.
First selfie
Speaking of firsts, there is also the first photograph ever taken of a woman, a portrait taken by a Dr. John W. Draper of his sister, Dorothy Catherine Draper, at his New York studio in 1840. The photo looks like a pretty normal old-timey pic, but it is important to note that the subject had to keep completely still without even blinking for over a minute to achieve this.
First photograph of a woman
The oldest surviving photograph of a president was taken by daguerreotype in 1843, and shows the then former president John Quincy Adams approximately 14 years after his presidency had ended. The photograph was taken by Philip Hass, and although Adams was no longer in office at the time it is remarkable nonetheless.
First photo of an American president
The year 1845 saw more breakthroughs in photography when the French physicists Louis Fizeau and Leon Foucault managed to take the first ever photos of the sun. It is interesting to note that just 5 years before these same men had also been the first to take a photograph of the moon, from a rooftop observatory in New York. Also note that sunspots can even be observed in this photograph.
First photo of the sun
Another unique daguerreotype photograph is what is believed to be the first photo ever taken of New York City. The picture in question was taken in 1848 at Manhattan’s Upper West Side, and you can see that at the time it wasn’t nearly the big bustling metropolis we see today. There was another even older picture of New York, but it has been lost over the years, and so this is effectively the oldest.
First photo of New York City
There is also a very intriguing image taken in 1853 by a man named Solomon Nunes Carvalho at Big Timbers, Colorado. This photo would become part of the U.S. Library of Congress, and is thought to be the very first photograph taken of a Native American village. Most people only have the image of these places in their heads from Western movies and Cowboys and Indians shows, so to take a glimpse through time in this photo is fascinating to say the least, with even two Native American figures visible in the center left.
First photo of a Native American camp
Just under a decade after that photographic first, in 1860 the first aerial photo ever was taken. Although we now take such pictures for granted, at the time it was unheard of, but James Wallace Black and Samuel Archer King managed to capture this image from 2,000 feet in the air, showing Boston, Massachusetts as it looked at the time. There were earlier photos taken from hot air balloons, but they were lost, and this is the earliest surviving one.
First aerial photograph
The very next year, in 1861, the first ever color photograph would be taken, by photographer Thomas Sutton. The technique used was first proposed by James Clerk Maxwell in 1855, and he was the first to suggest that three light sources could be mixed and matched to achieve any desired color. Sutton used Maxwell’s advice and took three different black and white photos of a ribbon and used blue, red, and green filters on each one, after which he merged them into one image to create the first known color photograph, a truly revolutionary concept at the time.
First color photograph
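As a modern illustration of Maxwell's three-color principle (an analogy in today's digital terms, not a description of Sutton's darkroom process; the file names are hypothetical), the following sketch stacks three grayscale exposures, taken through red, green, and blue filters, into the color channels of a single image:

```python
# A modern analogue of Maxwell's three-color method: stack three filtered
# black-and-white exposures into one color image. File names are hypothetical.
import numpy as np
from PIL import Image

def combine_three_exposures(red_path, green_path, blue_path, out_path="color.png"):
    # Each input is a grayscale exposure taken through the matching color filter.
    r = np.asarray(Image.open(red_path).convert("L"))
    g = np.asarray(Image.open(green_path).convert("L"))
    b = np.asarray(Image.open(blue_path).convert("L"))
    # Use the three exposures as the R, G, and B channels of a single image.
    rgb = np.dstack([r, g, b]).astype(np.uint8)
    Image.fromarray(rgb, mode="RGB").save(out_path)

combine_three_exposures("ribbon_red.png", "ribbon_green.png", "ribbon_blue.png")
```

The same trick, performed with chemistry and projected lantern slides rather than arrays, is what produced the tartan-ribbon image of 1861.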
At the start of the following decade, in 1870 another photographic milestone was reached when photographer Carol Popp de Szathmari took what is regarded as the very first photograph taken of a battle. The picture shows Prussian troops advancing against French defenders, and it is largely due to images like this that Szathmari is widely said to be the first war photographer.
First photo of a battle
The ensuing decades would bring some other photographic firsts, when in April of 1884 the first photo of a tornado was taken by an A.A. Adams in Garnett, Kansas. Adams was lucky enough to be present for the tornado and to find a comfortable vantage point around 14 miles away, standing by the United Presbyterian Church in Garnett, where he went about capturing this historic and unique image.
First photo of a tornado
Finally we come to a whole new avenue of early photography: underwater photography. This had long been seen as virtually impossible, but in 1926 National Geographic photographer Charles Martin, along with botanist William Longley, was in the Florida Keys trying out fancy new equipment utilizing waterproof housing and a magnesium flash when the pair managed to snap this picture of a hogfish. It may seem rather quaint in modern times, but this had never been done before, and it stands as a testament to human ingenuity and remains a rather dramatic photographic first.
First underwater photograph
While we now take and share photographs instantaneously at a moment’s notice, it seems that we should sometimes take a step back and look at where it all began, to the time when this was extremely cutting edge science fiction stuff. Looking at these photos we are brought to another era in history, frozen there for all time. Even as we move into the future and the world changes, these moments will not, frozen there on film forevermore, and earning their place in history.
A disturbing new report indicates a U.S. Government agency’s involvement in a bizarre array of tests, which were conducted on cats and dogs purchased from what the story calls “Asian meat markets.”
According to the NBC report, the remains of hundreds of dogs and cats were purchased by the U.S. Government, for use with experiments that occurred in Maryland at the U.S. Department of Agriculture lab. Part of the testing included “feeding dog remains to cats and injecting cat remains into mice,” the report states.
The investigation was carried out by the White Coat Waste Project, a group that describes itself as “a taxpayer watchdog group representing more than 2 million liberty-lovers and animal-lovers who all agree: taxpayers shouldn’t be forced to pay over $15 billion every year for wasteful and cruel experiments on dogs, cats, monkeys and other animals.”
Although the Department of Agriculture research only recently became public knowledge following an NBC story based on White Coat Waste’s report, the testing appears to have occurred between 2003 and 2015, and was aimed at studying the parasite behind toxoplasmosis, a food-borne illness.
However, the new revelations are particularly concerning since some of the animals acquired for tests carried out in the recent Department of Agriculture studies came from markets condemned by Congress in a 2018 House Resolution, according to the White Coat Waste report.
Several U.S. politicians spoke out in condemnation of the tests. “We can advance scientific discovery while treating animals humanely, and American taxpayers have every right to expect our government will meet that standard,” said Jeff Merkley, a Democratic Senator in Oregon.
Sadly, the explosive White Coat Waste report is not the only one of its kind which indicates such concerning behavior by government agencies. Over the course of the last several decades, there have been numerous examples of bizarre testing carried out on animals which raised ethical questions. Too many to name, in fact… although a few noteworthy examples (which might at least be on par with the “cat cannibalism” discussed in the recent NBC report) do exist.
The well-known and oft-cited tale of “acoustic kitty” is high on the list of wasteful government spending projects that also involved questionable treatment of animals. This incident, which I recounted previously here at MU, involved a Cold War-era CIA project that produced a bizarre surveillance system that was built into a house cat, which included antennae fittings within the creature’s tail, as well as hidden battery compartments and microphones. The cat was released outside the Soviet compound on Wisconsin Avenue in Washington, D.C., where it was deployed to monitor a conversation taking place in a park nearby. The cat was struck by a car and killed within minutes of being released; additional attempts at field testing “acoustic kitties” were not carried out.
There are still some varieties of “militarized” animal testing that occur today, although in far less ethically questionable forms. Just last year, a report revealed that DARPA hoped to find ways of genetically engineering various aquatic species for use in future surveillance programs. Much like the recent White Coat Waste project report, questions have been raised about the long term concerns pertaining to genetically modified organisms, as well as the fact that taxpayer money has been used to support such programs.
It is believed that an estimated 4000 cats may have been killed over the course of the 12 years the Department of Agriculture’s studies were undertaken. Based on previous statements made by the USDA, the agency has apparently defended the studies as “life-saving research,” although White Coat Waste argues that it was unnecessary and wasteful spending of U.S. taxpayer dollars.
In the future, spacecraft could travel to other stars faster than anything currently available by using laser light sources that are millions of miles away. For the moment, this prospect has been explored only theoretically by physicists at Caltech. In their new study, the researchers propose levitating and propelling objects using a beam of light by etching the surface of those objects with specific nanoscale patterns.
Conceptual illustration of a nano-patterned object reorienting itself to remain in a beam of light.
(Credit: Courtesy of the Atwater laboratory)
A pattern that keeps objects afloat
For decades, researchers have been using so-called optical tweezers to move and manipulate microscopic objects (i.e. nanoparticles) using a focused laser beam. Nanoparticles can be suspended mid-air due to the light scattering and gradient forces resulting from the interaction of the particle with the light. Such devices have been used to trap small metal particles, but also viruses, bacteria, living cells, and even strands of DNA. For his contributions to developing optical tweezers, Arthur Ashkin was awarded the 2018 Nobel Prize in Physics.
However, optical tweezers are limited by distance and the size of the objects. Essentially, only very small objects can be manipulated with light in this fashion and only from close range.
“One can levitate a ping pong ball using a steady stream of air from a hair dryer. But it wouldn’t work if the ping pong ball were too big, or if it were too far away from the hair dryer, and so on,” Ognjen Ilic, a postdoc at Caltech and the study’s first author, said in a statement.
In their new study, Ilic and colleagues have proposed a radically new way to use light in order to trap or even propel objects. Theoretically, their method is not limited by an object’s size or distance from the source, which means macroscopic objects such as a spacecraft could be accelerated, perhaps even close to relativistic speeds, using the force of light alone.
For this to work, certain nanoscale patterns need to be etched on an object’s surface. When the concentrated laser beam hits this patterned surface, the object should begin to “self-stabilize” by generating torque to keep it in the light beam. The authors say that the patterning is designed in such a way as to encode the object’s stability.
This would work for any kind of object, from a grain of rice to a spaceship in size. The light source could also be millions of miles away which would make this technology ideal to power a light sail for space exploration.
“We have come up with a method that could levitate macroscopic objects,” said Harry Atwater, Professor of Applied Physics and Materials Science in Caltech’s Division of Engineering and Applied Science. “There is an audaciously interesting application to use this technique as a means for propulsion of a new generation of spacecraft. We’re a long way from actually doing that, but we are in the process of testing out the principles.”
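To put some rough numbers on the idea of pushing objects with light (the laser power, sail mass, and reflectivity below are illustrative assumptions, not values from the Caltech study), here is a back-of-the-envelope sketch of the acceleration that radiation pressure alone can give a small reflective sail:

```python
# Back-of-the-envelope sketch of laser radiation-pressure propulsion.
# The laser power and sail mass below are assumptions for illustration only.
C = 299_792_458.0          # speed of light, m/s

def sail_acceleration(laser_power_w, sail_mass_kg, reflectivity=1.0):
    """Acceleration from radiation pressure: F = (1 + R) * P / c."""
    force_n = (1.0 + reflectivity) * laser_power_w / C
    return force_n / sail_mass_kg

# Example: a 100 kW beam fully intercepted by a 1-gram reflective sail.
a = sail_acceleration(laser_power_w=100e3, sail_mass_kg=1e-3)
print(f"Acceleration: {a:.3f} m/s^2")               # ~0.667 m/s^2
print(f"Speed after one day: {a * 86400:.0f} m/s")  # ~57,600 m/s if sustained
```

The point of the nanoscale patterning is precisely to keep such a sail centered and oriented in the beam while this gentle but relentless push accumulates.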
A shocking new study suggests that, at a quantum level at least, two different versions of reality exist. The new study builds on the idea brought to the forefront in Eugene Wigner's friend scenario, which states that two people could observe the same photon, or light particle, and come away with different observations of it. Even though the observations, and the conclusions drawn from those observations, are different, they would both be correct.
If you've ever questioned the nature of your reality, a new study suggests that there are actually two different versions of it — at least at the quantum level.
The study, posted as a preprint on arXiv, sheds new light on the complex idea that two people could see the same photon, come to different conclusions about it, yet still both be correct.
"In quantum mechanics, the objectivity of observations is not so clear, most dramatically exposed in Eugene Wigner’s eponymous thought experiment where two observers can experience fundamentally different realities," the researchers wrote in the study. "While observer-independence has long remained inaccessible to empirical investigation, recent no-go-theorems construct an extended Wigner’s friend scenario with four entangled observers that allows us to put it to the test."
They continued: "In a state-of-the-art photon experiment, we here realize this extended Wigner’s friend scenario, experimentally violating the associated Bell-type inequality by 5 standard deviations. This result lends considerable strength to interpretations of quantum theory already set in an observer-dependent framework and demands for revision of those which are not."
One of the study's co-authors, Martin Ringbauer, told Live Science that "you can verify both of them," adding that theoretical advances were needed before they were able to prove Wigner's hypothesis, which was first proposed in 1961.
"Theoretical advances were needed to formulate the problem in a way that is testable. Then, the experimental side needed developments on the control of quantum systems to implement something like that," he told the news outlet.
To test the idea, the researchers designated "two different laboratories, each involving an experimenter and their friend," introducing two pairs of entangled photons, which allowed for their fates to be intertwined. They also introduced "people" (who were not real, but rather represented observers) to measure one photon in the pair, record their results and repeat the process for the second photon using quantum memory.
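As a toy illustration of the correlations that entangled photon pairs provide (a generic textbook simulation, not the authors' actual multi-photon setup), the sketch below samples polarization outcomes for two analyzers and recovers the quantum-predicted correlation:

```python
# Toy Monte Carlo of polarization correlations for a maximally entangled
# photon pair (Bell state). Angles and trial count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def sample_pair(angle_a, angle_b, n_trials=100_000):
    """Sample +/-1 outcomes for two analyzers at angle_a and angle_b (radians)."""
    delta = angle_a - angle_b
    p_same = np.cos(delta) ** 2             # quantum prediction for matching outcomes
    same = rng.random(n_trials) < p_same
    a = rng.choice([-1, 1], size=n_trials)  # each local outcome is individually random
    b = np.where(same, a, -a)
    return a, b

a, b = sample_pair(0.0, np.pi / 8)
print("Estimated correlation:", np.mean(a * b))   # ~cos(2 * pi/8) ≈ 0.707
```

It is correlations of exactly this kind, compared across the "friend" and "experimenter" measurements, that let the team test whether both recorded realities can hold at once.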
In 1961, when Wigner introduced the idea that would eventually become known as "Wigner's friend," only one scenario was used. With the new experiment, it was doubled, and the results that Wigner had first discussed more than 50 years earlier still rang true.
Quantum mechanics describes how the world works at a scale so small that the rules of classical physics no longer apply, Live Science added. With the new findings of the study, the field of quantum mechanics may have to change if measurements are not the same for everyone.
"It seems that, in contrast to classical physics, measurement results cannot be considered absolute truth but must be understood relative to the observer who performed the measurement," Ringbauer told Live Science. "The stories we tell about quantum mechanics have to adapt to that."
The Missile Defense Agency has offered new details about plans to develop a science fiction-sounding space-based neutral particle beam weapon to disable or destroy incoming ballistic missiles. The goal is to have a prototype system ready for a test in orbit by 2023, an ambitious schedule to demonstrate that the technology has progressed to a more useful state from when the U.S. military last explored and then abandoned the concept nearly three decades ago.
The U.S. military’s budget request for Fiscal Year 2020 asks for $34 million in funding for the neutral particle beam program, or NPB, according to documents released on Mar. 18, 2019. The Missile Defense Agency (MDA) wants a total of $380 million through the 2023 fiscal cycle for development of the directed energy weapon. Defense One, citing unnamed U.S. officials, had been first to report the existence of the plan on Mar. 14, 2019. It’s also worth noting that Congress set out a goal of testing at least one space-based missile defense system prototype by 2022 and the deployment of “an operational capability at the earliest practicable date” in the annual defense policy bill for the 2018 Fiscal Year.
MDA included the new-start NPB program in a larger line item called “Technology Maturation Initiatives,” which also includes requested funding for the development of laser weapons and advanced airborne sensors. It does not expect to ask for any more funds for the particle beam system through this account in Fiscal Year 2024, which would indicate plans to move it into its own dedicated funding stream at that time.
“The NPB provides a game changing space-based directed energy weapon capability for strategic missile defense,” MDA’s latest budget request says. “The NPB is a space-based, directed energy capability for homeland defense, providing a defense for boost phase and mid-course phase” of a ballistic missile’s flight.
A staple in science fiction, particle beam weapons are grounded in real science. At its most basic, an NPB requires a source of charged particles, a means of accelerating those particles to near-light speed to create the beam itself, and a way of neutralizing their charge so the beam is not bent off course by magnetic fields.
An extremely rudimentary graphic showing the components of a neutral particle beam system.
When this beam of particles hits something it produces effects similar to those of a laser, namely extreme heat on the surface of the target capable of burning a hole through certain materials depending on the strength of the weapon. If the particles are not sufficiently energetic to destroy something such as a missile or reentry vehicle, they may still be able to pass through the outer shells of those targets and disrupt, damage, or destroy internal components, broadly similar to how a microwave weapon functions.
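For a sense of the energies involved (the particle speed and flux below are assumed values chosen for illustration, not MDA specifications), the relativistic kinetic energy of each beam particle is (γ − 1)mc², and a weapon is simply an enormous number of such particles delivered every second:

```python
# Rough sketch of the energy carried by a neutral hydrogen beam.
# Particle speed and particle flux are illustrative assumptions only.
import math

C = 299_792_458.0              # speed of light, m/s
M_H = 1.6735575e-27            # mass of a hydrogen atom, kg
EV = 1.602176634e-19           # joules per electronvolt

def kinetic_energy_joules(beta):
    """Relativistic kinetic energy (gamma - 1) * m * c^2 for one hydrogen atom."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * M_H * C ** 2

e_particle = kinetic_energy_joules(beta=0.55)     # ~55% of light speed (assumed)
print(f"Per-particle energy: {e_particle / (1e6 * EV):.0f} MeV")

flux = 1e18                                       # particles per second (assumed)
print(f"Beam power at that flux: {e_particle * flux / 1e6:.1f} MW")
```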
In addition, since particle beams interact differently with different materials, the system might also have the capability to discriminate between real incoming warheads a ballistic missile has released and decoys. Separate sensors would be necessary to observe the impacts and categorize the results. But if it worked, this would help other ballistic missile defense systems, which generally have short engagement windows to begin with, focus only on actual threats.
The characteristics of these particles would make it hard, if not impossible for an opponent to shield their weapons from the effects or otherwise employ countermeasures, short of destroying the NPB itself, as well. All of this has long made the potential of a particle beam weapon attractive, especially for missile defense.
As part of the Strategic Defense Initiative (SDI) under President Ronald Reagan in the 1980s, the U.S. military experimented with NPBs and hired Martin-Marietta, McDonnell Douglas, TRW, and a team from General Electric and Lockheed to craft potential designs for a space-based system. Between 1984 and 1993, the Strategic Defense Initiative Organization (SDIO) spent approximately $794 million on the concept, according to a 1993 report from the General Accounting Office, now known as the Government Accountability Office (GAO).
MCDONNELL-DOUGLAS VIA AEROSPACE PROJECTS REVIEW A mockup of McDonnell-Douglas' space-based NPB.
Most notably, in July 1989, Los Alamos National Laboratory (LANL), in cooperation with the SDIO, conducted the Beam Experiments Aboard a Rocket test, or BEAR. This involved placing an actual particle beam system on board a sounding rocket and shooting it out of the Earth’s atmosphere.
As of 2018, this remained the “most energetic particle beam ever flown,” according to a LANL presentation. “The experiment successfully demonstrated that a particle beam would operate and propagate as predicted outside the atmosphere and that there are no unexpected side-effects when firing the beam in space.”
LANL A picture of the particle beam-carrying sounding rocket ahead of the BEAR test.
However, the SDIO ultimately pursued a plan to build a massive constellation of space-based kinetic interceptors, known as Brilliant Pebbles, coupled with an equally extensive sensor network in orbit and on Earth. The entire project came to an end in 1993 ahead of the incoming administration of President Bill Clinton, who renamed SDIO the Ballistic Missile Defense Organization – the forerunner of MDA – and refocused its efforts on terrestrial missile defense.
SDIO’s particle beam program proved to be impractical given the technology available at the time. The prospective space-based systems were large and required massive power sources, with nuclear power being the most viable option. Despite hundreds of millions of dollars in funding over nearly a decade between the 1980s and 1990s, the previous NPB effort did not demonstrate a beam powerful enough to produce the desired effects on a target or produce a sufficiently lightweight power source design, according to the 1993 GAO report. Despite the BEAR experiment, there had been no test of an actual complete weapon system by that point, either.
VIA THE NATIONAL AIR AND SPACE MUSEUM Artwork depicting the NPB system from the BEAR test.
“We’ve come a long way in terms of the technology we use today to where a full, all-up system wouldn’t be the size of three of these conference rooms, right? We now believe we can get it down to a package that we can put on as part of a payload to be placed on orbit,” an unnamed U.S. military official told Defense One in regards to the new particle beam initiative. “Power generation, beam formation, the accelerometer that’s required to get there and what it takes to neutralize that beam, that capability has been matured and there are technologies that we can use today to miniaturize.”
But even if a practical and functional design is possible, there’s no guarantee it would necessarily provide the promised capability, especially against ballistic missiles in their boost phase. Striking missiles in this first stage of their flight is attractive because they are moving relatively slow and are producing a massive heat signature that makes them easier to spot and track. It also means that the missile's contents rain down over or near the launch country in a more localized manner than if destroyed during mid-course or terminal phases of flight.
Unfortunately, they’re also moving through the atmosphere for a significant part of the boost phase. The beam that an NPB shoots out is notably vulnerable to distortion and deflection there, since its particles can easily get sent off course by colliding with the particles that make up the air.
There’s a reason why, if you want to build an NPB at all, putting it in the vacuum of space makes the most sense. The amount of power necessary to ensure the beam remains both focused and powerful at appreciable ranges, and for enough time to actually damage or destroy a target in the atmosphere, could be immense.
An NPB concept from the SDI program using a nuclear reactor at the rear to power the system.
For some context on the potential scale of power generation one might be looking at: in the 1960s and 1970s, the U.S. military had also considered a ground-based particle beam, called Seesaw, that could defeat ballistic missiles in the latter stages of their flight. The Advanced Research Projects Agency determined it would take a system propagating a particle beam across hundreds of miles of tunnels to work properly.
To create the necessary power supply, Nicholas Christofilos, a Greek physicist working at the Lawrence Livermore National Laboratory at the time, went so far as to propose using nuclear bombs to effectively create a ludicrously large drain hole that would allow the entire volume of the Great Lakes to flow into a massive hydroelectric generator complex underneath, according to Sharon Weinberger's 2017 book The Imagineers of War: The Untold History of DARPA, the Pentagon Agency that Changed The World. Needless to say, this idea was absurd and the entire program never left the drawing board.
Technological improvements since then in various fields, such as power generation efficiency and adaptive focusing systems, would reduce these requirements, but they could still be prohibitive depending on other design constraints. This would also be much less of an issue for exoatmospheric engagements.
Beyond these potential technical issues with the beam itself, boost-phase ballistic missile defense systems need to be in an optimal position to engage their target during a very short window after a launch. On average, this phase of a missile’s flight lasts around five minutes at most. Sensors would first have to spot and categorize the threat, after which American officials would make a decision to engage or not.
VIA AMERICAN PHYSICAL SOCIETY
A general timeline of the boost phase of ballistic missiles and the time available for defense systems to respond.
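To make the timeline problem concrete, here is a small bookkeeping sketch (every delay value and the standoff distance are assumptions chosen for illustration) that tallies how much of a nominal five-minute boost phase remains after detection, classification, a launch decision, and the beam's own travel time:

```python
# Illustrative boost-phase timeline bookkeeping. Every delay below is an
# assumed value for illustration, not an official figure.
BOOST_PHASE_S = 300.0            # nominal boost phase, ~5 minutes
BEAM_SPEED_FRACTION_C = 0.9      # assumed beam particle speed as a fraction of c
C_KM_S = 299_792.458

def remaining_window(detect_s, classify_s, decide_s, standoff_km):
    beam_travel_s = standoff_km / (BEAM_SPEED_FRACTION_C * C_KM_S)
    used = detect_s + classify_s + decide_s + beam_travel_s
    return BOOST_PHASE_S - used, beam_travel_s

left, travel = remaining_window(detect_s=30, classify_s=60, decide_s=90,
                                standoff_km=2_000)
print(f"Beam travel time: {travel:.3f} s")        # a few milliseconds
print(f"Engagement time remaining: {left:.0f} s") # ~120 s in this example
```

The beam itself arrives almost instantly; it is the sensing and decision-making ahead of it that eat the window.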
Ensuring that there are enough space-based NPBs “parked” in orbit near even a portion of known and possible launch sites could be a costly proposition that would also require significant investments in the U.S. military’s ballistic missile defense sensor architecture, a separate issue you can read about in more detail here and here. The speed of the particles and the range of the weapons in the vacuum of space could help mitigate these issues. It would also be far less of a concern during the mid-course portion of the missile's flight where the entire engagement would occur in space or the very upper reaches of the atmosphere and there would be more time to line up the best shot.
“It’s a very short timeline, first to even know where it [the missile] is coming from…It’s less than a couple minutes before it leaves the atmosphere,” the unnamed defense official that spoke to Defense One admitted. “So, you have to have a weapon that’s on station, that’s not going to be taken out by air batteries and so we have been looking at directed energy applications for that. But you have to scale up power to that megawatt class. You’ve got to reduce the weight. You’ve got to have a power source. It’s a challenge, technically.”
“I can’t say that it is going to be at a space and weight requirement that’s going to actually be feasible, but we’re pushing forward with the prototyping and demo,” this individual continued. “We need to understand as a Department [of Defense], the costs and what it would take to go do that. There’s a lot of folklore…that says it’s either crazy expensive or that it’s free. It needs to be a definitive study.”
Feasibility concerns notwithstanding, there would be political and legal ramifications on top of everything else, too. The 1967 Outer Space Treaty bans the deployment of weapons of mass destruction in orbit. Though the NPB itself would not fit this definition, a nuclear power source would still have the potential to prompt outcry and formal protests.
HIPSASH A Russian MiG-31 Foxhound carries an air-launched anti-satellite weapon in a test.
Particle beams by their nature can also be difficult to detect and conclusively trace back to a particular source, making them non-attributable. This is something that Michael Griffin, presently Undersecretary of Defense for Research and Engineering and a major proponent of NPBs, has described in the past as an “advantage."
But it’s also something that America’s adversaries could look to exploit and weaponize, blaming the U.S. government for all manner of unexplained phenomena in space.
Russia has a long history of making unsubstantiated allegations against the U.S. government, claiming in recent years that Americans and their allies have staged chemical weapons attacks on civilians in Syria, are actively supporting ISIS terrorists in that country and in Afghanistan, and are running a covert biological weapons program in Georgia. A constellation of particle beam weapons in space capable, at least in principle, of conducting non-attributable attacks, would be an obvious goldmine for Russian propagandists seeking to spread conspiratorial claims, blaming any hole that appears in a spacecraft or malfunctioning satellite on an unprovoked particle beam attack.
It might be hard to challenge these claims. Beyond its missile defense capabilities, a space-based particle beam does seem like an ideal anti-satellite weapon. It would offer an easy way to quickly knock out satellites, or at least disable them, in a crisis. It would be hard for an opponent to detect that such an attack was occurring in the first place and even harder to counter.
But proponents in the U.S. government, especially Undersecretary of Defense Griffin, who worked on the Reagan-era SDI program, are adamant about at least exploring the possibility of a space-based particle beam weapon system. “We should not lose our way as we come out of the slough of despondence in directed energy into an environment that is more welcoming of our contributions. We should not lose our way with some of the other technologies that were pioneered in the ’80s and early-’90s and now stand available for renewed effort,” he declared in 2018.
It remains to be seen whether MDA will determine that the technical and other considerations have changed sufficiently in the last 30 years to make the idea of particle beam weapons in orbit any more practical than it was during the Cold War. But we should get a better idea in the next five years as the Pentagon pushes toward its goal of lofting a prototype particle beam weapon into orbit for the first time.
New research says that the Earth’s past ice ages may have been caused by tectonic pile-ups in the tropics.
A crevasse in a glacier. Image via Pixabay.
Our planet has braved three major ice ages in the past 540 million years, seeing global temperatures plummet and ice sheets stretching far beyond the poles. Needless to say, these were quite dramatic events for the planet, so researchers are keen to understand what set them off. A new study reports that plate tectonics might be the culprit.
Cold hard plates
“We think that arc-continent collisions at low latitudes are the trigger for global cooling,” says Oliver Jagoutz, an associate professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences and a co-author of the new study.
“This could occur over 1-5 million square kilometers, which sounds like a lot. But in reality, it’s a very thin strip of Earth, sitting in the right location, that can change the global climate.”
“Arc-continent collisions” is a term that describes the slow, grinding head-butting that takes place when a piece of oceanic crust hits a continent (i.e. continental crust). Generally speaking, oceanic crust (OC) will slip beneath the continental crust (CC) during such collisions, as the former is denser than the latter. Arc-continent collisions are a mainstay of orogen (mountain range) formation, as they cause the edges of CC plates to ‘wrinkle up’. But in geology, as is often the case in life, things don’t always go according to plan.
The study reports that the last three major ice ages were preceded by arc-continent collisions in the tropics which exposed tens of thousands of kilometers of oceanic, rather than continental, crust to the atmosphere. The heat and humidity of the tropics then likely triggered a chemical reaction between calcium and magnesium minerals in these rocks and carbon dioxide in the air. This would have scrubbed huge quantities of atmospheric CO2 to form carbonate rocks (such as limestone).
Over time, this led to a global cooling of the climate, setting off the ice ages, they add.
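As a rough illustration of the chemistry (a generic silicate-weathering stoichiometry, not a calculation from the Science paper), the sketch below estimates how much CO2 a tonne of an olivine-like magnesium silicate can lock up when it weathers to carbonate:

```python
# Rough stoichiometric sketch of CO2 drawdown by silicate weathering:
#   Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2   (forsterite, an olivine end-member)
# A generic textbook reaction used for illustration, not the paper's model.
M_FORSTERITE = 140.69    # g/mol, Mg2SiO4
M_CO2 = 44.01            # g/mol
CO2_PER_MOL_ROCK = 2     # moles of CO2 consumed per mole of forsterite

def co2_locked_per_tonne_rock():
    moles_rock = 1_000_000.0 / M_FORSTERITE          # grams in one tonne of rock
    grams_co2 = moles_rock * CO2_PER_MOL_ROCK * M_CO2
    return grams_co2 / 1_000_000.0                   # convert back to tonnes

print(f"~{co2_locked_per_tonne_rock():.2f} t CO2 per t of rock")  # ~0.63
```

Multiplied over thousands of kilometers of freshly exposed oceanic crust sitting in warm, wet tropical conditions, numbers of this order are how a thin strip of rock can draw down an appreciable share of the atmosphere's CO2.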
The team tracked the movements of two suture zones (the areas where plates collide) in today’s Himalayan mountains. Both sutures were formed during the same tectonic migrations, they report: one collision 80 million years ago, when the supercontinent Gondwana moved north creating part of Eurasia, and another 50 million years ago. Both collisions occurred near the equator and preceded global atmospheric cooling events by several million years.
In geological terms, ‘several million years’ is basically the blink of an eye. So, curious to see whether one event caused the other, the team analyzed the rate at which oceanic rocks known as ophiolites can react to CO2 in the tropics. They conclude that, given the location and magnitude of the events that created them, both of the sutures they investigated could have absorbed enough CO2 to cool the atmosphere enough to trigger the subsequent ice ages.
Another interesting find is that the same processes likely led to the end of these ice ages. The fresh oceanic crust progressively lost its ability to scrub CO2 from the air (as the calcium and magnesium minerals transformed into carbonate rocks), allowing the atmosphere to stabilize.
“We showed that this process can start and end glaciation,” Jagoutz says. “Then we wondered, how often does that work? If our hypothesis is correct, we should find that for every time there’s a cooling event, there are a lot of sutures in the tropics.”
The team then expanded their analysis to older ice ages to see whether these were also associated with tropical arc-continent collisions. After compiling the locations of major suture zones on Earth from pre-existing literature, they reconstructed the movement of those sutures, and of the plates that generated them, over time using computer simulations.
Animation showing suture zones developing as tectonic plates evolved over the last 540 million years. MIT researchers found sutures in the tropical rain belt, shown in green, were associated with Earth's major ice ages.
Credit: Swanson-Hysell research group
All in all, the team found three periods over the last 540 million years in which major suture zones (those about 10,000 kilometers in length) were formed in the tropics. Their formation coincided with three major ice ages, they add: one in the Late Ordovician (455 to 440 million years ago), one in the Permo-Carboniferous (335 to 280 million years ago), and one in the Cenozoic (35 million years ago to present day). This wasn’t a happy coincidence, either. The team explains that no ice ages or glaciation events occurred during periods when major suture zones formed outside of the tropics.
“We found that every time there was a peak in the suture zone in the tropics, there was a glaciation event,” Jagoutz says. “So every time you get, say, 10,000 kilometers of sutures in the tropics, you get an ice age.”
Jagoutz notes that there is a major suture zone active today in Indonesia. It includes some of the largest bodies of ophiolite rocks in the world today, and Jagoutz says it may prove to be an important resource for absorbing carbon dioxide. The team says that the findings lend some weight to current proposals to grind up these ophiolites in massive quantities and spread them along the equatorial belt in an effort to counteract our CO2 emissions. However, they also point to how such efforts may, in fact, produce additional carbon emissions — and also suggest that such measures may simply take too long to produce results within our lifetimes.
“It’s a challenge to make this process work on human timescales,” Jagoutz says. “The Earth does this in a slow, geological process that has nothing to do with what we do to the Earth today. And it will neither harm us, nor save us.”
The paper “Arc-continent collisions in the tropics set Earth’s climate state” has been published in the journal Science.
Each spring, flocks of migratory birds travel northward from their sunny vacations in the south, following a flight plan that’s ingrained in their DNA. Birds — like bats, bees, wolves, bears, and countless other animals — have the ability to use the Earth’s magnetic field as a map to navigate the planet. We humans seem directionless in comparison, but as new research in eNeuro suggests, it’s probably not because we lack the tools.
A team including Caltech geoscientist Joseph Kirschvink, Ph.D., and neuroscientist Shin Shimojo, Ph.D., show in the new paper that the human brain responds to changes in the electromagnetic field, even if humans don’t realize it.
“Our results indicate that human brains are indeed collecting and selectively processing directional input from magnetic field receptors,” they write in their preprint on bioRxiv.
The Earth's magnetic field lines are directed toward magnetic north and south, depending on the hemisphere.
The Earth’s electromagnetic field is generated by electric currents created from the swirling molten iron in its core. In the northern hemisphere, all field lines are oriented toward magnetic north, and ditto for the south. Those lines are what birds and their field-reading kin use to navigate — and, as the new evidence suggests, humans may be able to sense them too. When participants in the study went through simulated shifts in the Earth’s magnetic field, their brain activity reacted in predictable patterns, suggesting the human body is equipped for magnetoreception, even if we’re not aware of it.
It’s impossible to shift the planet’s actual magnetic field, so the team built a highly insulated chamber in which they could create “ecologically relevant rotations of Earth-strength magnetic fields” for the person sitting inside it. As the team rotated the magnetic field, they also measured the electrical activity of the participants’ brains using electroencephalography (EEG).
The experimental setup, designed to allow shifts in the magnetic field in one direction, any direction, or a "sham" shift.
In some people, with each rotation of the field, the team noticed a pattern neuroscientists have documented before: a sudden drop in amplitude in the alpha oscillation, the main brain wave on an EEG of a person at rest. That drop, called an “alpha event-related desynchronization” (or alpha-ERD for short), is usually observed when a person is suddenly confronted with an external stimulus, whether visual, auditory, or physical. The participants didn’t know their brains were reacting, but their EEGs gave it away.
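For readers curious what such a drop looks like numerically, here is a minimal sketch on synthetic data (not the study's recordings; the sampling rate, amplitudes, and noise level are assumed) that band-passes a signal to the 8-13 Hz alpha range and reports the post-stimulus power drop relative to baseline:

```python
# Minimal alpha-ERD illustration on synthetic data (not the study's EEG).
# A 10 Hz "alpha" rhythm is attenuated after t = 1 s to mimic desynchronization.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
alpha = np.where(t < 1.0, 1.0, 0.4) * np.sin(2 * np.pi * 10 * t)
eeg = alpha + 0.3 * np.random.default_rng(0).standard_normal(t.size)

# Band-pass to the 8-13 Hz alpha band.
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
alpha_band = filtfilt(b, a, eeg)

baseline_power = np.mean(alpha_band[t < 1.0] ** 2)
post_power = np.mean(alpha_band[t >= 1.0] ** 2)
print(f"Alpha power drop: {100 * (1 - post_power / baseline_power):.0f}%")
```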
In total, 34 people “from the Caltech population” participated in the various experiments, in which the magnetic field was shifted in a range of directions, rotated, or not manipulated at all. Four of those people, the team writes, had especially stable alpha-ERDs even over follow-up experiments, suggesting their brains were always attuned to changes in the “normal” magnetic field. The other responses were more variable, though the general pattern of alpha-ERDs occurring in response to magnetic field shifts was clear.
Interestingly, the tests confirmed these people were attuned to magnetic north, as they were conducted in the northern hemisphere. A successful southern hemisphere follow-up experiment would further support their hypothesis.
The changes in brain activity, as Kirschvink told The Guardian, represent the brain “freaking out” in response to sudden changes in the environment. However, it’s still not clear how the brain is picking up on the magnetic field. Some researchers have suggested that retinal proteins called “cryptochromes” might react to the magnetic field, but Kirschvink predicts the body might contain specialized magnetosensory cells housing iron clusters that move like the needle of a compass. Unfortunately, finding these magnetosensory receptors has been compared to finding a needle in a haystack. “The receptors could be in your left toe,” Kirschvink told Science in 2016.
There’s a lot left to learn about where this ability to sense the magnetic fields came from, how it might have been used, and why we can’t use it anymore. But this study is an important first step in exploring an innate part of ourselves that we didn’t know existed. The timing couldn’t be better, as some scientists warn that we are overdue for a major shift in the Earth’s magnetic field.
“For now,” the team writes, “alpha-ERD remains a blank signature for a wider, unexplored range of magnetoreceptive processing.”
Abstract
Magnetoreception, the perception of the geomagnetic field, is a sensory modality well established across all major groups of vertebrates and some invertebrates, but its presence in humans has been tested rarely, yielding inconclusive results. We report here a strong, specific human brain response to ecologically-relevant rotations of Earth-strength magnetic fields. Following geomagnetic stimulation, a drop in amplitude of EEG alpha oscillations (8-13 Hz) occurred in a repeatable manner. Termed alpha event-related desynchronization (alpha-ERD), such a response is associated with sensory and cognitive processing of external stimuli. Biophysical tests showed that the neural response was sensitive to the dynamic components and axial alignment of the field but also to the static components and polarity of the field. This pattern of results implicates ferromagnetism as the biophysical basis for the sensory transduction and provides a basis to start the behavioral exploration of human magnetoreception.
Over the years, quantum physics has fed us a constant drip of mind-bending implications for the nature of reality. Of course, a lot of those mind-bending implications have been grossly misinterpreted, blended up, and turned into nonsense and predatory self-help books. It’s a funny field of research because while it is grossly misinterpreted, often and loudly, it also does challenge our assumptions about reality itself. Many of these challenges haven’t made it past the thought experiment phase. Recently, however, a real-life test of a famous quantum physics thought experiment was performed, and, according to the MIT Technology Review, the results are as weird as you could hope for.
The thought experiment is called the “Wigner’s Friend” experiment. Developed by Nobel Prize-winning physicist Eugene Wigner in 1961, the Wigner’s Friend thought experiment deals with quantum weirdness of light and the effect of the observer on quantum superposition. The thought experiment asks if two people can observe one event, see different things, and both be correct, essentially creating two different realities that are forced to coexist.
It works like this: A single polarized photon can have either a vertical polarization or a horizontal polarization. Until the measurement of its polarization happens, according to the laws of quantum physics, it has both states at once and exists in something called a quantum superposition. It’s worth pointing out that scientists have observed that superpositions exist, and have devised experiments to show it. That becomes important in a minute.
So you have one polarized photon in a superposition of being both vertically and horizontally polarized at once, and you have two scientists: Wigner, and Wigner’s friend. Wigner is performing an experiment to show that the photon is in a superposition and has all possible states of polarization. In Wigner’s reality this is now “fact.”
Meanwhile, Wigner’s friend has sneaked, without Wigner’s knowledge, into another lab looking at the same photon. Wigner’s friend measures which polarized state it’s in, which snaps it out of superposition and into a definitive state, and records the result without ever telling Wigner. They then compare notes and find that something very strange has happened. At the exact same time, Wigner and his friend recorded two different versions of reality, and they are both correct.
The double slit experiment is one that shows how quantum superposition exists.
Until now, that was simply a thought experiment. Just last week, however, scientists at Heriot-Watt University in Edinburgh say they have performed a real-life test of the Wigner’s friend experiment, and it worked out exactly as the thought experiment said it would. I’ll use the description of the experiment published in the MIT Technology Review:
The breakthrough that Proietti and co have made is to carry this out. “In a state-of-the-art 6-photon experiment, we realize this extended Wigner’s friend scenario,” they say.
They use these six entangled photons to create two alternate realities—one representing Wigner and one representing Wigner’s friend. Wigner’s friend measures the polarization of a photon and stores the result. Wigner then performs an interference measurement to determine if the measurement and the photon are in a superposition.
The experiment produces an unambiguous result. It turns out that both realities can coexist even though they produce irreconcilable outcomes, just as Wigner predicted.
If this experiment turns out not to have missed something, some loophole they were unaware of, then the implications are staggering. It means that the fundamental idea that there is one shared reality, that things that exist, exist for everyone, must be wrong. What does this say about strange phenomena like, say, the Mandela effect? According to the MIT Technology Review, the next step for these scientists is to push the idea further and see how drastically different they can make the two coexisting realities. As if the world wasn't already confusing enough, leave it to quantum physics to make it even more nonsensical.
In 2010, the British artificial intelligence research firm DeepMind Technologies began developing AI networks capable of defeating humans at games such as chess, Pong, and Space Invaders. In 2014, DeepMind's research was successful enough to catch the attention of Google, which purchased the AI laboratory for $500 million. Shortly after the purchase, Google formed a mysterious artificial intelligence ethics board to oversee DeepMind's research – a board which has yet to disclose the scope of its mission or the names of its members. A few years later, DeepMind expanded its ethics board and gave it an official title: DeepMind Ethics and Society. While the company has stated that the board's aims are to explore the ethical and societal questions raised by the existence of its incredibly powerful AI, the board is still mostly shrouded in secrecy.
This month, though, The Economist published a report outlining the events surrounding the creation of the DeepMind Ethics and Society Board. It turns out that before Google agreed to purchase the AI laboratory, they first dictated that both sides draw up an agreement stating that Google will immediately take control of DeepMind’s AI if or when it ever achieves artificial general intelligence, or AGI – the holy grail of AI research. AGI is broadly defined as any artificial intelligence network which can successfully complete any intellectual task a human can, although given the massive amounts of processing power AI networks can harness, these systems will likely be more human than human.
Of course, as Sam Shead at Forbes points out, DeepMind might not take too kindly to any ethics board which attempts to control it and could even go rogue as so many science fiction stories have predicted. It’s not that far out of the realm of possibility; so far, DeepMind has already proven itself capable of defeating the best human players at some of the most sophisticated games in the world such as Go and even the strategy video game StarCraft 2. Last year, DeepMind even surprised its creators by successfully creating neural pathways that resemble human neural networks entirely on its own. How much longer until DeepMind achieves true general intelligence?
Some of the world's foremost scientists and entrepreneurs have urged caution in AI research, warning that we may soon find ourselves under the boot of an immortal AI dictator. Will Google's Ethics and Society Board be able to control DeepMind before it takes over the globe? Are all of these fears of AI baseless, grounded only in science fiction and neo-Luddism, or are we indeed actively creating our future overlords?
The scariest part to me is the fact that no matter how many of the world's greatest minds urge against the creation of AI, money-hungry corporations keep marching ceaselessly towards the machine takeover we all know is coming, all in the name of creating value for the shareholders. Who really wants the machines to make important decisions for us? Who's really pulling the strings at Google?
'Summer is becoming a deadly season for life on Earth'
Vivian Lammerse
Because of climate change, we will have to start reckoning with extreme heatwaves.
Climate change is usually discussed in terms of averages. Think, for example, of the Paris Agreement, in which countries pledged to do everything possible to limit global warming to 2 degrees Celsius. But climate change will not only raise the average global temperature; extreme heatwaves also lie in wait. And that while heatwaves are already harming humans and animals.
THE 2003 HEATWAVE
Do you still remember the heatwave of 2003? At the time, Europe was grappling with exceptionally hot and dry weather. The summer of 2003 was one of the hottest European summers on record and even triggered a health crisis in some countries. In total, around 70,000 people in Europe died from the effects of the heatwave. The highest official temperature during the heatwave in the Netherlands was measured on 7 August in Arcen (Noord-Limburg): 37.8 degrees Celsius.
Extreme: To build a comprehensive picture of the effects of future heatwaves, the researchers gathered information from more than 140 scientific studies. Their findings show that heatwaves will become more extreme and will also occur more often. This is because carbon dioxide and other greenhouse gases in the atmosphere trap heat, raising the Earth's average temperature. It could mean that, by the end of this century, a heatwave like that of 2003 could last four times as long. "This suggests that, in some years, the entire summer will be warmer than what we experienced in 2003," says researcher Jonathon Stillman. "Summer is becoming a deadly season for life on Earth."
Effects: Heatwaves are already causing mass die-offs among animals. Think, for example, of bleached coral in parts of the Great Barrier Reef, or the mass deaths of horses during Australian summers. According to the researchers, heatwaves also have subtler effects on animals' bodies. For instance, the amount of specialised proteins that protect other molecules against heat increases. "When animals start experiencing dangerously high temperatures more often, you can see shifts in their physiology," Stillman explains. "They may not drop dead right away, but you can see in their bodies that they are getting close to that point." People are also at risk when it gets extremely hot; the elderly in particular are a vulnerable group.
Infrastructure: There are ways to cope with heatwaves, but they are not available to everyone on Earth. A lack of infrastructure makes it difficult for vulnerable human communities to migrate to cooler climates, which can spark large-scale conflicts. Much human development also blocks the routes animals would take to reach cooler climates.
When the extreme heatwaves will occur and exactly how extreme they will be varies between the models. "We cannot say it will happen next year," says Stillman. "But if we continue with our current emissions, by the end of this century we will experience heatwaves more severe than the heatwaves we have already seen."
For the first time, scientists made a successful in situ collection of bacteria living in hot springs in Yellowstone National Park and using an unconventional source – electricity – for food and energy
Pools of hot water like this one – in Heart Lake Geyser Basin, Yellowstone National Park, Wyoming – are the home to bacteria that can eat and breathe electricity.
Bacteria are some of the most diverse and adaptable organisms on Earth. They can be found in harsh environments where few other living creatures can survive. They're known to use a wide range of sources for energy and sustenance. This month (March 5, 2019), scientists at Washington State University described the first-ever successful in situ collection of a little-known species of bacteria that eats and breathes electricity.
They successfully captured the enigmatic electricity-eating bacteria last August in the Heart Lake Geyser Basin area of Wyoming's Yellowstone National Park. Their work took them on long hikes to four pristine hot springs in that area. WSU graduate student Abdelrhman Mohamed, who is first author on the study, commented:
This was the first time such bacteria were collected in situ in an extreme environment like an alkaline hot spring.
He added that temperatures in the springs ranged from about 110 to nearly 200 degrees Fahrenheit (43 to 93 degrees Celsius).
Image via Yuri Gorby/Rensselaer Polytechnic Institute.
The research team was able to coax the bacteria out of hiding by inserting a few electrodes – electric conductors – into the edge of the water in the hot spring.
Thirty-two days later, the researchers returned to retrieve the electrodes, which had attracted the bacteria from the water. Mohamed and postdoctoral researcher Phuc Ha analyzed the results. Their statement exclaimed:
Voila! They had succeeded in capturing their prey — heat-loving bacteria that ‘breathe’ electricity through the solid carbon surface of the electrodes.
It sounds a bit like something out of science fiction, but it’s another example of how microorganisms can adapt to a wide range of extreme environments, using whatever resources are available for energy and as nutrients.
Similar bacteria have been cultivated before, but not taken in-situ from this kind of extreme environment – in this case an alkaline hot spring pool of water. Their statement explained more about them:
Most living organisms — including humans — use electrons, which are tiny negatively-charged particles, in a complex chain of chemical reactions to power their bodies. Every organism needs a source of electrons and a place to dump the electrons to live. While we humans get our electrons from sugars in the food we eat and pass them into the oxygen we breathe through our lungs, several types of bacteria dump their electrons to outside metals or minerals, using protruding hair-like wires.
Observing these bacteria in a laboratory isn't easy, these scientists said, which is one reason the team wanted to study them in their own habitat. According to Haluk Beyenal of WSU, who supervised the research:
The natural conditions found in geothermal features such as hot springs are difficult to replicate in laboratory settings. So, we developed a new strategy to enrich heat-loving bacteria in their natural environment.
In order to collect the bacteria in such a challenging location, Mohamed used a cheap portable potentiostat, an electronic device to control the electrodes submerged in the hot springs for long periods of time.
Geobacter, another kind of bacteria that uses electricity.
Image via Science Photo Library/Corbis.
These scientists said:
These tiny creatures are not merely of academic interest.
They might also provide clues to solutions to some of humanity’s biggest environmental problems, including pollution and sustainable energy. These bacteria could literally “eat” pollution, converting toxic pollutants into less harmful substances. And, in the process, they might even generate electricity. As noted by Beyenal:
As these bacteria pass their electrons into metals or other solid surfaces, they can produce a stream of electricity that can be used for low-power applications.
How cool is that?
Some bacteria can use a rather unconventional source for food and energy – electricity.
As also reported back in 2015, some bacteria can even live on electrons alone. According to Annette Rowe, a postdoc researcher at the University of Southern California:
It’s a crazy phenomenon. I’ve kept some of these bugs for over a month with no addition of carbon.
As Rowe noted, they must have been subsisting solely on electricity from the electrode, because there was nothing else available as an energy source.
Bacteria were among the first known life forms to appear on Earth, and can be found in soil, water, hot springs, radioactive waste and the deep portions of Earth's crust. There is even evidence for them existing in the deepest part of the ocean – the Marianas Trench – according to a study in 2013. They also live in symbiotic and parasitic relationships with plants and animals.
These new findings show just how resilient and adaptable some species of bacteria can be. They can survive and flourish in hot springs and also make use of an unconventional source for their food and energy: electricity. Perhaps they will aid scientists looking for new ways to combat environmental pollution and provide sustainable energy for humanity in the future.
Bottom line: For the first time, scientists made a successful in situ collection of bacteria living in hot springs in Yellowstone National Park and using an unconventional source – electricity – for food and energy.
Octopuses change colour while they SLEEP! Footage offers a rare glimpse of the creature's skin switching from light to dark and back as it slumbers - but is it dreaming of an enemy?
The octopus' body is seen turning from pale white to a dark brown-green colour
All the while it is asleep on a coral reef, it seemingly matches its skin colour to its surroundings
Octopuses have very precise control over their skin colour in response to threat
It might be camouflaging itself during sleep because of a threat in its dreams
An octopus has been seen changing the colour of its skin on its entire body from light to dark while it sleeps - and experts say it may be dreaming of a predator.
Footage shows the pale white creature pulsing as vein-like patterns emerge on its skin, becoming increasingly darker and spreading over its entire body.
Octopuses are well known for their advanced abilities to camouflage their bodies in response to threat.
One researcher believes the clip shows evidence of an octopus equivalent of REM sleep, with the marine animal responding to some imagined creature.
In an interview with LiveScience, Sara Stevens, a specialist with the Butterfly Pavilion zoo in Colorado, discussed her thoughts on the footage.
'The exact processes of how they match colors is still not fully understood, though it's being very thoroughly studied,' Ms Stevens said.
'Current research suggests that the actual cells themselves can match colors. But the jury's still out on whether they're achieving REM sleep.'
Octopuses can actively change their skin colour to either make themselves invisible or stand out with a striking pattern, depending on their surroundings.
They have thousands of colour-changing cells called chromatophores that lie just under the surface of the skin.
These specialised pigment cells expand and contract and push the pigment to the surface.
The creature's skin colour temporarily changes thanks to this mechanism.
This process is triggered in the wild by a changing environment or by an emerging threat.
On top of chromatophores, two other types of cells - iridophores and leucophores - are involved in the camouflaging process.
Iridophores have layers of reflecting plates that create iridescent greens, blues, silvers and golds, while leucophores are cells which can detect what colours best match the animal's surroundings.
This allows octopuses to appear inconspicuous whatever their environment and change their skin tone to match their surroundings.
Given that octopuses have very precise control over the system by which their skin colour changes, it is rare for them to trigger the response while sleeping.
In the footage, captured at a zoo in Colorado, the octopus looks pale white at the start of the video and changes its skin colour over the course of a minute to match the coral reef it is lying on
Octopuses can actively change their skin colour to either make themselves invisible, or stand out with a striking pattern depending on what their environment is. They have thousands of colour-changing cells called chromatophores that lie just under the surface of the skin
These specialised pigment cells are effectively ink sacs that can expand and contract, pushing the pigment to the surface. Through this mechanism, the creature's skin colour temporarily changes
Unlike humans, octopuses have multiple brains rather than just one central nervous system.
Instead of being in one place, their brain cells are spread all over the body, which gives the creature very precise control over each part of its body.
Changing the colouration of its skin is an active process that requires the octopus to activate specific bundles of neurons, and depends on a complex array of nerves and muscles controlling the expansion and contraction of the pigment sacs.
But these controls may be activated while the octopus sleeps, if its unconscious mind is sensing a threat.
One hypothesis is that it may sense threat in its sleep - it could have something to do with what the creature is dreaming about.
It may be inconclusive, but octopus species may also experience a dreamlike state similar to that achieved during REM cycles in humans.
Butterfly Pavilion announced a naming contest for the new Octopus vulgaris (pictured), also known as the common octopus. The zoo is asking visitors to submit names for the new octopuses via its website.
Octopuses live for one to two years on average and adopt a unique biological defence system in the wild - ejecting a thick cloud of ink that dulls its predator's sense of smell.
They are also known for their intelligence and even collect shells to decorate their dens, known as octopus gardens.
Talking about the octopuses at the zoo, which include a few new additions such as the octopus from Florida, Butterfly Pavilion aquarist Sara Stevens said: 'People are able to relate to octopuses in a way that is unrivalled by any other invertebrate'.
'Due to their intelligence and almost childlike way in which octopuses interact with the world, our guests seem to connect and fall in love with them very easily.
'It's an animal that instantly creates a sense of awe and wonder, making them fun and important ambassadors for ocean conservation.'
The Butterfly Pavilion announced a naming contest for the new Octopus vulgaris, also known as the common octopus.
The Colorado zoo is asking visitors to submit names for the new octopuses via its website.
HOW DO OCTOPUSES DEFEND THEMSELVES?
One of the most effective ways octopuses avoid predation is by camouflaging with their environment.
They have special pigment cells that allow them to control the colour of their skin, much like chameleons.
As well as colour change they can manipulate the texture of their skin in order to blend in with the terrain.
As well as camouflage, they can escape predators by using a 'jet propulsion' method of escape, rapidly shooting out water to propel themselves through the water.
The jet of water from the siphon is often accompanied by a release of ink to confuse and evade potential enemies.
The suckers on the tentacles of the eight-legged beasts are extremely powerful and are used to drag prey towards a sharp beak.
As well as protection from other animals, it has recently been found that octopuses can detect the ultrasonic waves that precede a volcanic eruption or earthquake, giving them enough time to escape.
Streetcap1, AKA George Graham: UFO researcher's first and last ever interview on Russia TV found, UFO Sighting News.
I was sent this video clip by a Russian UFO researcher this week. In it is the long-lost interview of UFO researcher George Graham, AKA Streetcap1. He died suddenly a few months after doing this interview with RT News. He was very proud of doing this TV interview and I'm sure he would like it to be shared with the world. Streetcap1 first came to be famous when I published his videos and discoveries on my UFO Sightings Daily site several years ago. Since then, he has become a legend in the field. I can't help but think that I did warn him not to invite the RT News agency into his home. He didn't listen, and a few months later...he died suddenly. Here is the link to our chat. https://www.youtube.com/watch?v=k85Oul-lL8k
So, here below is his interview video. This is for you, Streetcap1. RIP. Scott C. Waring
CIA REMOTE VIEWING PROGRAM DISCOVERS ANCIENT CIVILIZATION ON MARS
DECLASSIFIED TOP SECRET REMOTE VIEWING CIA DOCUMENTS REVEAL THE DISCOVERY OF AN ANCIENT CIVILIZATION THAT ONCE INHABITED THE RED PLANET.
It's hard to fathom the number of esoteric programs that the CIA has funded or is currently funding and researching. The clandestine organization has been known to explore myriad outlets for conducting its operations, ranging from sinister to strange. But often the strange ones, particularly those that become declassified because the general populace finds them too bizarre to actually be true, are the most intriguing. When certain programs come to light, it always raises the question: what else are they doing that they aren't telling us about, and what else haven't they disclosed?
During the Cold War, the CIA conducted several experimental programs involving the human psyche. MKUltra was one of the more malevolent programs aimed at mind control using drugs and other techniques for torture and interrogation purposes. One element of the program involved administering LSD surreptitiously to subjects with the goal of turning them into robot agents that they could then control. The horrific intent of the program eventually came to light and was exposed, despite an attempted cover-up and destruction of all evidence pertaining to it.
But one of the more intriguing (and humane) programs that produced some interesting results was one known as Stargate, which trained operatives in astral projection and remote viewing. These psychic abilities allow for remote perception and, for the well-practiced, the ability to send the astral body anywhere, including to distant planets; they have produced striking imagery and details that have often been confirmed.
Secrets of Remote Viewing
A PSYCHIC JOURNEY TO MARS
During the Cold War, one of the members of Stargate, Joseph McMoneagle, was able to perceive details, which were later confirmed by satellite imaging, of a new type of Russian nuclear submarine being constructed, based simply on coordinates provided to him. The submarine was one of the largest ever built and when he described its magnitude to military engineers, he was scoffed at. It turned out that McMoneagle’s impression was right.
McMoneagle was one of a key group of remote viewing participants in the CIA's program who focused on military targets, missing persons and, occasionally, attempts to see into different time periods. But those attempts were all mundane compared to an unexpected, otherworldly astral journey he would take in 1984. One day he was awoken from a nap and given a sealed envelope that couldn't be opened until the end of a subsequent viewing session, during which his colleague dictated coordinates for him to view. Soon McMoneagle found himself astral projecting to an unfamiliar locale.
Somewhat recently, the CIA released the transcript of McMoneagle and an agent conducting the viewing. When McMoneagle went into his viewing state, he described a world inhabited by a civilization in dire shape. He described seeing an infrastructure consisting of intersecting roads, aqueducts, channels and pyramids. The transcript is interesting and describes a baffled McMoneagle who often struggles to report the 'raw data' his colleague consistently reminds him to stay focused on. Throughout the viewing his astonishment overtakes him, leading to tangential periods, sometimes as long as 30 minutes, in which he struggles to maintain his focus.
When McMoneagle eventually reports contact with living entities his colleague tells him to initiate communication with them. He describes their situation as being in a critical state, seemingly on the brink of apocalypse. Having purportedly sent members of their civilization on a mission to find a new place to inhabit, these tall shadowed figures appear to be in a state of hibernation awaiting the return of their search party. When he asks if these entities can perceive him, they describe him as something of a hallucination. At the end of the viewing McMoneagle opened his envelope to see where he supposedly projected to – Mars, approximately 1 million years B.C.
Skeptics have written off Stargate and other programs of its ilk as diversionary tactics to steer the Soviets in the wrong direction during the Cold War. The logic being that if the U.S. could subversively convince the Soviets that they were having success in phony psychic programs, the Soviets might then waste time and resources funding similar programs. And of course, there’s no way to know if McMoneagle’s account has any validity without sending a manned mission to Mars to explore the coordinates he was viewing. This probably isn’t going to happen very soon, but McMoneagle said he’d be willing to go, though he is in his 70s.
Whatever the CIA's original intent may have been, many of the members of the Stargate program still practice remote viewing or are willing to talk about it in all seriousness. With the program now having been disclosed and that era of the Cold War over, it seems there would be no need to continue to maintain secrecy or keep playing along. We would also be remiss to think that the Russians weren't researching remote viewing long before the U.S. There's even evidence that they were researching it before Stalin's reign.
There are other declassified remote viewing CIA documents that were once deemed 'top secret', including some that resulted in accurate descriptions of secret Soviet bases on a remote island in the middle of the Indian Ocean and another in the middle of the Ural Mountain range. The viewer described the bases and their geographic locations in detail that was later confirmed. Though the evidence surrounding these particular sessions is somewhat conflicting, the reports affirming the results show astonishment from agents analyzing the program at the accuracy of some of these viewings. And while astral projection and remote viewing are similar in nature but much different in scope, the confirmation of results from the remote viewing CIA sessions increases the likelihood that astral projections could have significant accuracy.
Those who survived the panic caused by the less-trouble-than-expected Y2K or Millennium Bug that was supposed to shut down computers when they tried to move their clocks from 12/31/1999 to 1/1/2000 may scoff at the new April 6, 2019 bug that many predict will wreak havoc with the GPS systems drivers, pilots and many major industries depend on. Those who profited from the Y2K Bug (COBOL and assembler language programmers dragged out of retirement to fix old software) may be upset that no one told them about this one sooner. Those planning a flight or a road excursion on 4/6/2019 may suddenly be considering a postponement. The rest of us are wondering … what the heck is the April 6, 2019 bug?
“The GPS Internal Navigation Time Scale “GPS Time” is based on the weighted average of GPS satellites and ground station clocks. GPS Time is used for user navigation solutions. A nanosecond error in GPS Time can equate to one foot of position (ranging) error. The WN parameter is provided via a ten (10) bit parameter – or “counter.” The valid range of values for the WN parameter is 0 to 1023 (or 1024 total values). The WN parameter is incremented by one each week. At the end of the 1024th week, the counter experiences a rollover (resets) to 0. Each WN rollover event defines a new GPS Time Epoch. The WN value is referenced to the start of the current GPS Time Epoch. The last WN rollover was August 21, 1999. GPS Time is currently in the second Epoch. The next WN rollover is April 6, 2019.
GPS Time is adjusted by the U.S. Air Force GPS Directorate to maintain alignment with UTC as provided by the U.S. Naval Observatory. A GPS device that provides UTC time does so by converting GPS Time to UTC using multiple parameters – including WN – conveyed in page 18 of GPS sub-frame.
GPS devices with a poorly implemented GPS Time-to-UTC conversion algorithm may provide incorrect UTC following a WN rollover. Additionally, some GPS devices that calculate the WN value from a device-specific date rather than the start of the current GPS Time Epoch may provide incorrect UTC at some other device-specific date.”
That explanation came from a memorandum issued by the U.S. Department of Homeland Security in April, 2018. A one-year warning doesn’t seem like much time to fix old programs dependent on an antiquated 10-bit counter that bumps up one per week until it reaches binary 1111111111 (1023) and resets to zero, does it? DHS points out that this happened once before on August 22, 1999 and nothing bad happened then. Of course, that could be because most people were still folding paper maps in 1999 when GPS was still a new and much less used innovation.
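For the technically curious, the arithmetic behind the rollover is simple to sketch. The toy code below is an illustration, not any vendor's firmware: it shows how a 10-bit week counter wraps at 1024 and why a receiver that assumes the wrong epoch jumps back roughly 19.6 years.

```python
# Minimal sketch of the 10-bit GPS week-number (WN) rollover described above.
# The broadcast counter only holds 0-1023, so a receiver must add the correct
# number of 1024-week epochs to recover the true date; a device stuck on the
# wrong epoch reports a date roughly 19.6 years in the past.
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)      # start of GPS time (week 0)
WEEKS_PER_EPOCH = 1024                # 10-bit counter: 2**10 values

def broadcast_week(true_week):
    """What the satellite actually transmits: the week number modulo 1024."""
    return true_week % WEEKS_PER_EPOCH

def receiver_date(wn, rollovers_assumed):
    """Date a receiver computes from the 10-bit WN and its assumed epoch count."""
    return GPS_EPOCH + timedelta(weeks=rollovers_assumed * WEEKS_PER_EPOCH + wn)

true_week = 2048                      # the week beginning April 7, 2019
wn = broadcast_week(true_week)        # rolls over to 0
print(receiver_date(wn, rollovers_assumed=2))  # correct: 2019-04-07
print(receiver_date(wn, rollovers_assumed=1))  # buggy firmware: 1999-08-22
```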
On the other hand, in 2019 it seems that every cell phone, vehicle, watch and just about anything with a chip in it has GPS. Will we all go spinning aimlessly on 4/6/2019? Here's an answer from FalTech GPS, a company specializing in GPS repeater technology.
“Who will be affected? The list is long and varied; some industries come to mind immediately as they are known to use the accurate timing information provided by the GPS constellation. Financial markets, power generating companies, emergency services and industrial control systems may be affected, as well as fixed-line and cellular communications networks. GPS tracking devices installed in a fleet management system to schedule and monitor deliveries could cause system errors if they start to provide location data that is potentially up to 20 years out of date.”
All together now …. ahhhh! Why isn’t someone doing something about this? Well, the government and GPS makers have been for a while. Most modern GPS receivers shouldn’t be affected. Older devices that have been getting regular firmware updates should also be OK. A lot of GPS devices can run without UTC or can get it from other sources. And don’t forget – GPS systems survived August 22, 1999, just as we survived January 1, 2000.
And yet … Tom’s Guide reports that a security expert at the recent RSA Conference 2019 (RSA is a computer and network security company) said in a presentation:
“I’m not going to be flying on April 6.”
Does that sound like a warning? Is someone keeping things quiet? Who? This is no April Fool’s joke. If you survive April 1, make it to April 7 by being extra careful just in case on April 6.
While many in the U.S. are still dealing with the after-effects of 'springing forward' one hour for Daylight Saving Time, a group of Russian scientists is celebrating falling back – not just by spinning their clocks but by actually reversing the forward trek of time. Does this mean do-overs are actually possible? Is Doc Brown's DeLorean revving its engine? Is HG Wells begging for a trip back to Earth to change the ending of his novel?
“This is one in a series of papers on the possibility of violating the second law of thermodynamics. That law is closely related to the notion of the arrow of time that posits the one-way direction of time from the past to the future.”
Gordey Lesovik, a quantum physicist from the Moscow Institute of Physics and Technology and lead author of the new study published in the journal Scientific Reports, describes in simple terms how his team apparently broke the second law of thermodynamics by returning a quantum computer to a state in its past life. And not just once, but a reliably reproducible 85 percent of the time. Lesovik's Russian team worked with quantum researchers at the Argonne National Laboratory in Illinois, using the publicly accessible IBM Q System Hub, a universal quantum computing system with 20 superconducting qubits.
The second law of thermodynamics: an isolated system either remains static or evolves toward a state of chaos rather than order.
Phys.org gives a good, layman’s explanation of this discovery using billiard balls. When a triangle of 15 billiard balls is hit by a cue ball, they scatter into a chaotic state, never to return to that orderly triangle. What the Moscow team has done is returned two qubits (quantum bits) on the quantum computer back to their previous state after chaos and dispersion had begun. Sure, it’s just two qubits and not 15 colorful billiard balls, but it’s never been done before, and the fact that they could do it repeatedly puts the second law of thermodynamics on a walk of shame out of the physics books.
Or does it? We’re talking electrons here, not objects, cars or humans. The quantum computer couldn’t replicate the reverse time travel with three qubits, failing over half the time. And technically speaking, this was not really time travel but returning a machine to a previous state in time, akin to returning an adult to the body and mind they had as a teen or reverse aging. Except the machine could only return to the state or age it was in a fraction of a second ago.
Is this still a big deal?
“Time reversal can help—we do time-reversal of the final state of the computer and run the same quantum program again. If the computation was correct we will arrive to the initial state of the computer.”
That’s a physicist’s “yes” from Gordey Lesovik, who sees this as a tool for testing programs on quantum computers. It won’t return a billiard table to its previous pre-break state, it’s not a time machine or even an age-reversing picture of Dorian Gray. For now, we have to take Lesovik’s word that it’s indeed a big deal.
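Lesovik's verification idea can be sketched in a few lines: run a "program" (a unitary operation), apply its inverse, and check that the initial state comes back. The toy two-qubit circuit below is an assumption used only to illustrate the principle; it is not the IBM Q experiment.

```python
# Minimal sketch (illustrative only): apply a quantum program U to a 2-qubit
# state, "time-reverse" it by applying U^dagger, and check that the system
# returns to its initial state. All names and the toy circuit are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    """Draw a roughly Haar-random unitary via QR decomposition of a complex Gaussian."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

dim = 4                                  # two qubits -> 4-dimensional state space
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                            # initial state |00>

U = random_unitary(dim)                  # the "quantum program"
psi_final = U @ psi0                     # forward evolution
psi_back = U.conj().T @ psi_final        # time-reversed step: apply U^dagger

fidelity = abs(np.vdot(psi0, psi_back)) ** 2
print(f"returned to initial state with fidelity {fidelity:.6f}")  # ~1.0
```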
Scientists have debated whether the dinosaurs were already in decline before a massive asteroid impact finished them off 66 million years ago. New research shows they were thriving in their final days.
Dinosaurs once reigned on Earth, until a cataclysmic event – now thought to have been a massive asteroid impact, or possibly intense volcanic activity – wiped them out about 66 million years ago during the Maastrichtian age at the end of the Late Cretaceous epoch. This mass extinction event was sudden and brutal, powerful enough to wipe out the largest creatures ever to walk the Earth – and countless others as well.
There has, however, been some debate as to what was happening before the mass extinction. Some scientists thought the dinosaurs were flourishing right up until their demise, while others suggested that they had already been in decline before they were finished off.
So which scenario is correct? A new study by researchers from Imperial College London, University College London and University of Bristol shows that it was the former.
Illustration of a late Maastrichtian palaeoenvironment in North America, where dinosaurs like Tyrannosaurus rex, Edmontosaurus and Triceratops roamed the floodplains 66 million years ago. The Maastrichtian was the latest age of the Late Cretaceous epoch.
Dinosaurs were likely not doomed to extinction until the end of the Cretaceous, when the asteroid hit, declaring the end of their reign and leaving the planet to animals like mammals, lizards and a minor group of surviving dinosaurs: birds.
According to the researchers, previous studies had underestimated the number of living species at the end of the Cretaceous period – when the asteroid hit – due to changing fossilization conditions. This led to the erroneous conclusion that some species had already been in decline or gone extinct before the asteroid collision.
The study focused on North America, where some of the most well-known dinosaurs used to roam, such as Tyrannosaurus rex and Triceratops.
A massive asteroid impact – or possibly intense volcanic activity – caused the extinction of the dinosaurs 66 million years ago, according to current research.
Image via James Thew/iStockphoto.
Way back then, North America was split into two halves by an inland sea. The Rocky Mountains in the western half were forming at this time, and sediment from the mountains created ideal conditions for preserving dinosaur bones. Conditions in the eastern half were far less conducive to preservation, however. Fossils in the western half, along with some mathematical predictions, had been used to suggest that dinosaur populations were in decline before the asteroid hit. Paper co-author Philip Mannion, from University College London, explained:
Most of what we know about Late Cretaceous North American dinosaurs comes from an area smaller than one-third of the present-day continent, and yet we know that dinosaurs roamed all across North America, from Alaska to New Jersey and down to Mexico.
The researchers used a method called ecological niche modelling – or species distribution modelling – that takes into account different environmental conditions, such as temperature and rainfall, which each species needs to survive. When they mapped these conditions, both across the continent and over time, they were able to determine where different dinosaur species could most easily survive changing conditions – before the asteroid impact occurred.
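As a rough illustration of what a species distribution model does, here is a minimal sketch using made-up temperature and rainfall data: fit where a species is present as a function of climate, then predict habitat suitability across a grid, including areas where fossils rarely survive. This is a toy stand-in, not the authors' method or data.

```python
# Minimal sketch of ecological niche / species distribution modelling.
# All numbers are hypothetical; the "true" niche below is a toy rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical fossil localities: temperature (deg C), rainfall (mm/yr), presence (1/0)
temp = rng.uniform(5, 35, 200)
rain = rng.uniform(200, 2000, 200)
presence = ((temp > 18) & (rain > 800)).astype(int)   # toy "true" climatic niche

X = np.column_stack([temp, rain])
mean, std = X.mean(axis=0), X.std(axis=0)             # scale features for the solver
model = LogisticRegression(max_iter=1000).fit((X - mean) / std, presence)

# Predict habitat suitability across a coarse climate grid (a stand-in for a map)
grid_t, grid_r = np.meshgrid(np.linspace(5, 35, 4), np.linspace(200, 2000, 4))
grid = np.column_stack([grid_t.ravel(), grid_r.ravel()])
suitability = model.predict_proba((grid - mean) / std)[:, 1]
print(suitability.round(2).reshape(4, 4))
```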
Global map showing distribution of surface temperature on the Earth in the Late Cretaceous period. Warmer colors show higher temperatures while colder colors indicate lower temperatures.
Instead of being in decline, they found that many species were actually more widespread than previously thought. Those species, however, were in locations where fossils were less likely to be preserved, and those locations were smaller than initially estimated. The smaller numbers of fossils in these areas had previously led scientists to conclude that those species were already in decline, when they actually were not. According to the researchers:
The results of our study suggest that dinosaurs as a whole were adaptable animals, capable of coping with the environmental changes and climatic fluctuations that happened during the last few million years of the Late Cretaceous. Climate change over prolonged time scales did not cause a long-term decline of dinosaurs through the last stages of this period.
Bottom line: These findings make this tale all the more tragic – dinosaurs were thriving at their peak on this planet in the Late Cretaceous. They had taken over the world, and survived other potential calamities, only to have a random chunk of rock from space – or unprecedented volcanic eruptions – seal their ultimate fate.
New fuel cell could help fix the renewable energy storage problem
Novel fuel cells can help store electricity from renewables, such as wind farms, by converting it into a chemical fuel for long-term storage and then changing it back to electricity when needed.
ISTOCK.COM/RON_THOMAS
If we want a shot at transitioning to renewable energy, we’ll need one crucial thing: technologies that can convert electricity from wind and sun into a chemical fuel for storage and vice versa. Commercial devices that do this exist, but most are costly and perform only half of the equation. Now, researchers have created lab-scale gadgets that do both jobs. If larger versions work as well, they would help make it possible—or at least more affordable—to run the world on renewables.
The market for such technologies has grown along with renewables: In 2007, solar and wind provided just 0.8% of all power in the United States; in 2017, that number was 8%, according to the U.S. Energy Information Administration. But the demand for electricity often doesn’t match the supply from solar and wind. In sunny California, for example, solar panels regularly produce more power than needed in the middle of the day, but none at night, after most workers and students return home.
Some utilities are beginning to install massive banks of batteries in hopes of storing excess energy and evening out the balance sheet. But batteries are costly and store only enough energy to back up the grid for a few hours at most. Another option is to store the energy by converting it into hydrogen fuel. Devices called electrolyzers do this by using electricity—ideally from solar and wind power—to split water into oxygen and hydrogen gas, a carbon-free fuel. A second set of devices called fuel cells can then convert that hydrogen back to electricity to power cars, trucks, and buses, or to feed it to the grid.
But commercial electrolyzers and fuel cells use different catalysts to speed up the two reactions, meaning a single device can’t do both jobs. To get around this, researchers have been experimenting with a newer type of fuel cell, called a proton conducting fuel cell (PCFC), which can make fuel or convert it back into electricity using just one set of catalysts.
PCFCs consist of two electrodes separated by a membrane that allows protons across. At the first electrode, known as the air electrode, steam and electricity are fed into a ceramic catalyst, which splits the steam’s water molecules into positively charged hydrogen ions (protons), electrons, and oxygen molecules. The electrons travel through an external wire to the second electrode—the fuel electrode—where they meet up with the protons that crossed through the membrane. There, a nickel-based catalyst stitches them together to make hydrogen gas (H2). In previous PCFCs, the nickel catalysts performed well, but the ceramic catalysts were inefficient, using less than 70% of the electricity to split the water molecules. Much of the energy was lost as heat.
Now, two research teams have made key strides in improving this efficiency. They both focused on making improvements to the air electrode, because the nickel-based fuel electrode did a good enough job. In January, researchers led by chemist Sossina Haile at Northwestern University in Evanston, Illinois, reported in Energy & Environmental Science that they came up with an air electrode made from a ceramic alloy containing six elements that harnessed 76% of its electricity to split water molecules. And in today's issue of Nature Energy, Ryan O'Hayre, a chemist at the Colorado School of Mines in Golden, reports that his team has done one better. Their ceramic alloy electrode, made up of five elements, harnesses as much as 98% of the energy it's fed to split water.
When both teams run their setups in reverse, the fuel electrode splits H2 molecules into protons and electrons. The electrons travel through an external wire to the air electrode—providing electricity to power devices. When they reach the electrode, they combine with oxygen from the air and protons that crossed back over the membrane to produce water.
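To see why those electrode efficiencies matter, here is a back-of-the-envelope sketch of round-trip storage efficiency. The electrolysis-mode figures are the ones quoted above; the fuel-cell-mode efficiency is a placeholder assumption, since the article does not report one.

```python
# Back-of-the-envelope sketch (illustrative numbers only): how electrode
# efficiency feeds into the round-trip efficiency of storing electricity as
# hydrogen and converting it back. The 70%/76%/98% electrolysis figures come
# from the article; the fuel-cell-mode efficiency is an assumed placeholder.
def round_trip(electrolysis_eff, fuel_cell_eff):
    """Fraction of the original electricity recovered after a full store/release cycle."""
    return electrolysis_eff * fuel_cell_eff

assumed_fuel_cell_eff = 0.60   # placeholder for the hydrogen-to-electricity step

for label, eff in [("older PCFC", 0.70), ("Haile team", 0.76), ("O'Hayre team", 0.98)]:
    print(f"{label}: ~{round_trip(eff, assumed_fuel_cell_eff):.0%} round-trip")
```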
The O’Hayre group’s latest work is “impressive,” Haile says. “The electricity you are putting in is making H2 and not heating up your system. They did a really good job with that.” Still, she cautions, both her new device and the one from the O’Hayre lab are small laboratory demonstrations. For the technology to have a societal impact, researchers will need to scale up the button-size devices, a process that typically reduces performance. If engineers can make that happen, the cost of storing renewable energy could drop precipitously, helping utilities do away with their dependence on fossil fuels.
Your GPS Devices May Stop Working on April 6, 2019
Your GPS devices may stop working on April 6, 2019, and if that sounds familiar, it's because the situation is basically the same as the "millennium bug" behind the infamous Y2K scare.
April 6 is the day millions of GPS receivers will literally run out of time, rolling over their time counters back to zero, thanks to limitations in timekeeping for older GPS devices. Many navigation systems may be affected, such as on ships or older aircraft, although your smartphone will be fine.
But because GPS satellites are also crucial to digital timekeeping used by websites, electrical grids, financial markets, data centers and computer networks, the effect of April 6 may be even more wide-ranging.
An information-security expert at the RSA 2019 security conference in San Francisco did say during a presentation, "I'm not going to be flying on April 6." Thankfully, though, no one is hyping the GPS bug to an exaggerated degree - but it could still cause problems.