
“We are absolutely losing some science,” Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics, tells The Register. “How much science we lose depends on how many satellites there end up being. You occasionally lose data. At the moment it’s one in every ten images.”

Telescopes can try waiting for a fleet of satellites to pass before snapping their images, but when astronomers are tracking moving objects, such as near-Earth asteroids or comets, the blight can be impossible to avoid.

“As we raise the number of satellites, there starts to be multiple streaks in images you take. That’s no longer irritating, you really are losing science. Ten years from now, there may be so many that we can’t deal with it,” he added.


Our universe started with the Big Bang. But only for the right definition of “our universe”, and of “started”, for that matter. In fact, the Big Bang is probably nothing like what you were taught.
A hundred years ago we discovered the beginning of the universe. Observations of the retreating galaxies by Edwin Hubble and Vesto Slipher, combined with Einstein’s then-brand-new general theory of relativity, revealed that our universe is expanding. And if we reverse that expansion far enough – mathematically, purely according to Einstein’s equations – it seems inevitable that all space and mass and energy should once have been compacted into an infinitesimally small point – a singularity. It’s often said that the universe started with this singularity, and the Big Bang is thought of as the explosive expansion that followed. And before the Big Bang singularity? Well, they say there was no “before”, because time and space simply didn’t exist. If you think you’ve managed to get your head around that bizarre notion then I have bad news. That picture is wrong. At least, according to pretty much every serious physicist who studies the subject. The good news is that the truth is way cooler, at least as far as we understand it.
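For the curious, here is a back-of-the-envelope version of that extrapolation. This is a simplified, matter-dominated special case of the Friedmann equations, not the episode’s full argument:

```latex
% Hubble's law: galaxies recede with a speed proportional to distance,
v = H_0 \, d .
% In an expanding (FLRW) universe, all distances scale with a factor a(t);
% for a matter-dominated universe Einstein's equations give
a(t) \propto t^{2/3} ,
% so running the clock backwards, a(t) \to 0 as t \to 0: every distance
% shrinks to zero and the density \rho \propto a^{-3} diverges. That
% divergence is the Big Bang singularity of the naive picture.
```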


According to Klaus Schwab, the founder and executive chair of the World Economic Forum (WEF), the 4-IR follows the first, second, and third Industrial Revolutions—the mechanical, electrical, and digital, respectively. The 4-IR builds on the digital revolution, but Schwab sees the 4-IR as an exponential takeoff and convergence of existing and emerging fields, including Big Data; artificial intelligence; machine learning; quantum computing; and genetics, nanotechnology, and robotics. The consequence is the merging of the physical, digital, and biological worlds. The blurring of these categories ultimately challenges the very ontologies by which we understand ourselves and the world, including “what it means to be human.”

The specific applications that make up the 4-IR are too numerous and sundry to treat in full, but they include a ubiquitous internet, the internet of things, the internet of bodies, autonomous vehicles, smart cities, 3D printing, nanotechnology, biotechnology, materials science, energy storage, and more.

While Schwab and the WEF promote a particular vision for the 4-IR, the developments he announces are not his brainchildren, and there is nothing original about his formulations. Transhumanists and Singularitarians (prophets of the technological singularity), such as Ray Kurzweil and many others, forecast these and even more revolutionary developments long before Schwab heralded them. The significance of Schwab and the WEF’s take on the new technological revolution is the attempt to harness it to a particular end, presumably “a fairer, greener future.”

Trust in AI. If you’re a clinician or a physician, would you trust this AI?

Clearly, sepsis treatment deserves to be focused on, which is what Epic did. But in doing so, the company raised several thorny questions. Should the model be recalibrated for each discrete implementation? Are its workings transparent? Should such algorithms publish a confidence level along with each prediction? Are humans sufficiently in the loop to ensure that the algorithm outputs are being interpreted and implem…


Earlier this year, I wrote about fatal flaws in algorithms developed to mitigate the COVID-19 pandemic. Researchers found two general types of flaw. The first is that model makers used small data sets that didn’t represent the universe of patients the models were intended to serve, leading to sample-selection bias. The second is that modelers failed to disclose data sources, data-modeling techniques, and the potential for bias in either the input data or the algorithms used to train their models, leading to design-related bias. As a result of these fatal flaws, such algorithms were inarguably less effective than their developers had promised.
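To make the first flaw concrete, here is a minimal, entirely hypothetical sketch (scikit-learn on synthetic data, not drawn from any of the studies discussed): a model trained on a skewed subsample of “patients” typically degrades when evaluated against the representative population it was meant to serve.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real patient population.
# shuffle=False keeps the informative features in the first columns.
X, y = make_classification(n_samples=20000, n_features=20,
                           shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Sample-selection bias: train only on cases with high values of an
# informative feature (think: one hospital's skewed intake).
mask = X_tr[:, 0] > 1.0
biased = LogisticRegression(max_iter=1000).fit(X_tr[mask], y_tr[mask])
full = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Both models face the same representative test population.
print("biased-sample AUC:", roc_auc_score(y_te, biased.predict_proba(X_te)[:, 1]))
print("full-sample AUC:  ", roc_auc_score(y_te, full.predict_proba(X_te)[:, 1]))
```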

Now comes a flurry of articles on an algorithm developed by Epic to provide an early warning tool for sepsis. According to the CDC, “sepsis is the body’s extreme response to an infection. It is a life-threatening medical emergency and happens when an infection you already have triggers a chain reaction throughout your body. Without timely treatment, sepsis can rapidly lead to tissue damage, organ failure, and death. Nearly 270,000 Americans die as a result of sepsis.”
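One of the thorny questions above, whether a model should be recalibrated for each discrete implementation, can also be sketched. What follows is a hypothetical illustration of Platt-style recalibration on synthetic data, not Epic’s actual model or method: a classifier developed at one “site” has its probabilities refit on a local sample before being trusted at another site with a different case mix.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

# Site A, where the model was developed: 10% of cases are positive.
X_a, y_a = make_classification(n_samples=5000, n_features=15,
                               weights=[0.9], random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_a, y_a)

# Site B, where it is deployed: same clinical signal, 30% positives.
X_b, y_b = make_classification(n_samples=5000, n_features=15,
                               weights=[0.7], random_state=1)

# Platt scaling: fit a one-dimensional logistic model on the base
# model's scores, using a local calibration sample from site B.
scores_b = model.decision_function(X_b).reshape(-1, 1)
calibrator = LogisticRegression().fit(scores_b[:2500], y_b[:2500])

# Compare probability quality (lower Brier score is better); local
# recalibration typically corrects the prevalence mismatch.
raw = model.predict_proba(X_b[2500:])[:, 1]
recal = calibrator.predict_proba(scores_b[2500:])[:, 1]
print("Brier, raw probabilities:         ", brier_score_loss(y_b[2500:], raw))
print("Brier, recalibrated probabilities:", brier_score_loss(y_b[2500:], recal))
```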

A need exists to accurately estimate overdose risk and improve understanding of how to deliver treatments and interventions in people with opioid use…


The Microsoft 365 Defender security research team discovered a new vulnerability in macOS that allows an attacker to bypass System Integrity Protection (SIP). SIP is a critical security feature that uses kernel permissions to restrict the ability to modify critical system files. Microsoft explains that it also found a similar technique […].
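As an aside, SIP’s status on a given Mac can be read with Apple’s csrutil utility. Here is a minimal, illustrative Python wrapper (a status check only, unrelated to Microsoft’s bypass itself):

```python
import subprocess

def sip_enabled() -> bool:
    """Return True if System Integrity Protection reports enabled (macOS only)."""
    result = subprocess.run(
        ["csrutil", "status"],
        capture_output=True,
        text=True,
        check=True,
    )
    # Typical output: "System Integrity Protection status: enabled."
    return "status: enabled" in result.stdout

if __name__ == "__main__":
    print("SIP enabled:", sip_enabled())
```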

Deep learning represents the next era of machine learning, thanks to a new category of algorithms that has proved its power to mimic human skills simply by learning from examples. In conventional machine learning, programmers create the algorithms, and those algorithms hold the responsibility for learning from data; decisions are then made on the basis of that data.
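As a toy illustration of “learning from examples” rather than explicit programming, here is a hypothetical scikit-learn sketch: no one writes an XOR rule, yet a small neural network recovers it from four labeled examples.

```python
from sklearn.neural_network import MLPClassifier

# The four XOR examples; no programmer-written rule, just data.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# A small neural network learns the mapping from the examples alone.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=5000, random_state=0)
clf.fit(X, y)

print(clf.predict(X))  # expected: [0 1 1 0]
```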

Some AI experts say there will be a shift in AI trends. For instance, the late 1990s and early 2000s saw the rise of machine learning, neural networks gained popularity in the early 2010s, and reinforcement learning has come into the spotlight more recently.

Well, these are just a couple of caveats we’ve experienced throughout the past years.

Artificial intelligence is rapidly improving and has recently reached a point where it can outperform humans in several highly competitive job markets, including the media. OpenAI and Intel are working on some of the most advanced AI algorithms, which are actually starting to understand the world in a way similar to how we experience it. These models, among them OpenAI’s CLIP, Codex, and GPT-4, are each good at certain things, and the next step is to combine them to improve their generality and maybe create a real, working artificial general intelligence for our future. Whether AI supremacy will happen before the singularity is unclear, but one thing is for sure: AI and machine learning will take over many jobs in the very near future.
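For a concrete taste of what CLIP does, here is a hedged sketch using the publicly released checkpoint via Hugging Face’s transformers library (“cat.jpg” is a placeholder path, and this public model is not necessarily what OpenAI runs internally): the model scores an image against arbitrary text labels it was never explicitly trained to classify.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly released CLIP checkpoint.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # placeholder image path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher logits mean the image and caption embeddings are more similar.
probs = outputs.logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```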


TIMESTAMPS:
00:00 The Rise of AI Supremacy.
01:15 What Text-Generation AI is doing.
03:28 OpenAI is not open at all?
06:12 The Image AI: CLIP
08:52 Is AI taking over every job?
10:32 Last Words.


Experts in the AI and Big Data sphere consider October 2021 to be a dark month. Their pessimism isn’t fueled by rapidly shortening days or chilly weather in much of the country—but rather by the grim news from Facebook on the effectiveness of AI in content moderation.

This is unexpected. The social media behemoth has long touted tech tools such as machine learning and Big Data as answers to its moderation woes. As CEO Mark Zuckerberg explained to CBS News, “The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all—including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global.”
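For a sense of what machine-learning moderation means at its simplest, here is a toy, hypothetical sketch (scikit-learn with invented examples; real systems like Facebook’s are vastly larger and multimodal): a text classifier learns to flag posts from labeled examples, with borderline scores routed to human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = violates policy, 0 = benign.
posts = [
    "I will hurt you if you show up",            # threatening
    "you are worthless and everyone hates you",  # bullying
    "great game last night, well played",
    "anyone have a good pasta recipe?",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a classic baseline flagger.
flagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
flagger.fit(posts, labels)

# Score new posts; a real pipeline would send high scores to human review.
for post in ["meet me and you will regret it", "congrats on the new job!"]:
    score = flagger.predict_proba([post])[0, 1]
    print(f"{score:.2f}  {post}")
```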