
Let there be darkness.

That is the potential catchphrase for those who are concerned about nighttime light pollution.

More formally known as Artificial Light At Night (ALAN), nighttime light pollution has sparked an ongoing brouhaha over the claim that our modern way of living generates far too much light during the evening darkness. The issue is not going away, and the amount of such pollution is likely to keep increasing as further industrialization and the expansion of societies push into additional geographical areas.

In short, you can expect more light to be emitted in existing populated areas, along with nighttime light being unleashed in regions that had so far not been especially well lit due to insufficient means or lack of a light-producing populace. When you start adding more office buildings, more homes, more cars, more lampposts, and the like, this all translates into a tsunami of unbridled light at night.

You might be puzzled as to why the mere shepherding of artificial light is considered a pollution monstrosity.

One obvious facet is that you cannot see the stars at nighttime, or they are otherwise generally blotted out of view by the abundance of overwhelming artificial light.


Tang Jie, the Tsinghua University professor leading the Wu Dao project, said in a recent interview that the group built an even bigger, 100 trillion-parameter model in June, though it has not trained it to “convergence,” the point at which the model stops improving. “We just wanted to prove that we have the ability to do that,” Tang said.


Ironically, China is a competitor that the United States abetted. It’s well known that the U.S. consumer market fed China’s export engine, itself outfitted with U.S. machines, and led to the fastest-growing economy in the world since the 1980s. What’s less well-known is how a handful of technology companies transferred the know-how and trained the experts now giving the United States a run for its money in AI.

Blame Bill Gates, for one. In 1992, Gates led Microsoft into China’s fledgling software market. Six years later, he established Microsoft Research Asia, the company’s largest basic and applied computer-research institute outside the United States. People from that organization have gone on to found or lead many of China’s top technology institutions.


What does 2022 have in store for AI in the enterprise? Will it be a robust year of world-altering developments and implementation, or will organizations struggle to gain appreciable value from an exceedingly complex technology?

In all likelihood, it will be a little of both. So as you chart a strategy for the coming year, keep an eye on what is really happening with AI right now and what remains on the drawing board.

If we look at Gartner’s AI Hype Cycle for 2021, it’s clear that the company has placed the majority of AI developments on the up-slope of the Innovation Trigger curve and at the Peak of Inflated Expectations. This includes everything from AI-driven automation and orchestration platforms to neural networks, deep learning, and machine learning. This isn’t to say that these applications are destined to crash and burn, just that they’re still more hype than reality at the moment – and Gartner expects it will be two to five years before they become productive assets in the enterprise.

We’re in a golden age of merging AI and neuroscience. No longer tied to conventional publication venues with year-long turnaround times, our field is moving at record speed. As 2021 draws to a close, I wanted to take some time to zoom out and review a recent trend in neuro-AI, the move toward unsupervised learning to explain representations in different brain areas.

One of the most robust findings in neuro-AI is that artificial neural networks trained to perform ecologically relevant tasks match single neurons and ensemble signals in the brain. The canonical example is the ventral stream, where DNNs trained for object recognition on ImageNet match representations in IT (Khaligh-Razavi & Kriegeskorte, 2014, Yamins et al. 2014). Supervised, task-optimized networks link two important forms of explanation: ecological relevance and accounting for neural activity. They answer the teleological question: what is a brain region for?
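To make that kind of comparison concrete, here is a minimal sketch of representational similarity analysis (RSA), one common way model-brain matches of this sort are quantified; the arrays below are random placeholders standing in for DNN activations and recorded IT responses to the same image set, not data from the cited studies.

```python
# A minimal RSA sketch: compare a model's representational geometry to a
# (hypothetical) IT population's geometry over the same images.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 200
model_features = rng.normal(size=(n_images, 4096))   # stand-in for a late DNN layer
it_responses = rng.normal(size=(n_images, 168))      # stand-in for recorded IT neurons

# Representational dissimilarity matrices: pairwise distances between image responses.
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(it_responses, metric="correlation")

# The degree of match is the rank correlation between the two RDMs.
rho, _ = spearmanr(model_rdm, neural_rdm)
print(f"model-IT representational similarity: rho = {rho:.3f}")
```

With real data, a higher rho for task-optimized networks than for untrained controls is the kind of result the cited papers report.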

However, as Jess Thompson points out, these are far from the only forms of explanation. In particular, task-optimized networks are generally not considered biologically plausible. Conventional ImageNet training uses 1M images. For a human infant to get this level of supervision, they would have to receive a new supervised label every 5 seconds (e.g. the parent points at a duck and says “duck”) for 3 hours a day, for more than a year. And for a non-human primate or a mouse? Thus, the search for biologically plausible networks which match the human brain is still on.
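The supervision arithmetic behind that claim is easy to check; a quick back-of-the-envelope calculation using the numbers quoted above:

```python
# Back-of-the-envelope check of the supervision rate described in the text.
labels_needed = 1_000_000          # ImageNet-scale supervision
seconds_per_label = 5              # one labelled pointing event every 5 seconds
hours_per_day = 3

labels_per_day = hours_per_day * 3600 / seconds_per_label   # 2,160 labels per day
days_needed = labels_needed / labels_per_day                # ~463 days
print(f"{labels_per_day:.0f} labels/day -> {days_needed:.0f} days "
      f"(~{days_needed / 365:.1f} years)")
```

That works out to roughly 463 days, i.e. well over a year of relentless labelling, which is the implausibility the argument turns on.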

Artificial intelligence is unlike previous technology innovations in one crucial way: it’s not simply another platform to be deployed, but a fundamental shift in the way data is used. As such, it requires a substantial rethinking as to the way the enterprise collects, processes, and ultimately deploys data to achieve business and operational objectives.

So while it may be tempting to push AI into legacy environments as quickly as possible, a wiser course of action would be to adopt a more careful, thoughtful approach. One thing to keep in mind is that AI is only as good as the data it can access, so shoring up both infrastructure and data management and preparation processes will play a substantial role in the success or failure of future AI-driven initiatives.

According to Open Data Science, the need to foster vast amounts of high-quality data is paramount for AI to deliver successful outcomes. In order to deliver valuable insights and enable intelligent algorithms to continuously learn, AI must connect with the right data from the start. Not only should organizations develop sources of high-quality data before investing in AI, but they should also reorient their entire cultures so that everyone, from data scientists to line-of-business knowledge workers, understands the data needs of AI and how results are influenced by the type and quality of data being fed into the system.
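As one illustration, and purely as an assumption about what "shoring up data preparation" could include in practice, a team might run automated quality checks before any dataset reaches a model; the column names and toy values below are illustrative, not from the article.

```python
# A minimal sketch of a pre-training data-quality report using pandas.
import pandas as pd

def basic_quality_report(df: pd.DataFrame) -> dict:
    """Summarize issues that commonly degrade downstream model quality."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "feedback": ["great", None, None, "slow support"],
    "region": ["EU", "EU", "EU", "EU"],
})
print(basic_quality_report(df))
```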

Imagine a future, about five decades from now, in which living in close quarters is the norm and vehicles reflect that societal bond. The Arrival Chemie is a true example of a minimalist future that revolves around simplicity, function and, of course, the human bond!

Automotive design is going through a metamorphosis in which the gradual shift to an eco-friendly set of wheels is becoming the priority of manufacturers and consumers alike. This shift in perception has had a domino effect on the basic design of vehicles, since propulsion mechanisms and their placement in the vehicle have changed, giving designers more freedom to experiment with interior as well as exterior form. More emphasis is now placed on comfort and the lounging experience while traversing from point A to B, while on the exterior a multifunctional approach takes precedence.

A team of engineers from the University of California San Diego has unveiled a prototype four-legged soft robot that doesn’t need any electronics to work. The robot only needs a constant source of pressurized air for all its functions, including its controls and locomotion systems.

Most soft robots are powered by pressurized air and are controlled by electronic circuits. This approach works, but it requires complex components, like valves and pumps driven by actuators, which do not always fit inside the robot’s body.

In contrast, this new prototype is controlled by a lightweight, low-cost system of pneumatic circuits, consisting of flexible tubes and soft valves, onboard the robot itself. The robot can walk on command or in response to signals it detects from the environment.

Text data is ubiquitous: it appears in many forms, such as posts, books, articles, and blogs. What is more interesting is that a subset of Artificial Intelligence called Natural Language Processing (NLP) can convert text into a form that machine learning can use. That may sound like a lot, but learning the details and the proper implementation of machine learning algorithms ensures that one picks up the important tools along the way.
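As an illustration of what "converting text into a form that machine learning can use" might look like in practice, here is a minimal sketch using TF-IDF features from scikit-learn; the tiny two-document corpus is made up for demonstration.

```python
# A minimal sketch of one common NLP step: turning raw text into numeric features.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Posts, books, articles, and blogs are all just text.",
    "NLP converts that text into numbers a model can use.",
]

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(docs)        # sparse matrix: documents x vocabulary

print(X.shape)                            # (2, vocabulary size)
print(vectorizer.get_feature_names_out()) # the learned vocabulary
```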

Since newer and better libraries are constantly being created for machine learning, it makes sense to learn some of the state-of-the-art tools that can be used for predictions. I’ve recently come across a challenge on Kaggle about predicting the difficulty of a text.

The output variable, the difficulty of the text, is expressed as a continuous value, which makes the target variable continuous. Therefore, regression techniques, rather than classification, should be used to predict the difficulty of the text. Since text is ubiquitous, applying the right processing mechanisms and predictions can be really valuable, especially for companies that receive feedback and reviews in the form of text.
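For a sense of how such a regression setup might look, here is a minimal sketch assuming a TF-IDF plus ridge-regression pipeline evaluated with cross-validation; the excerpts and difficulty scores are made-up placeholders rather than the actual Kaggle data.

```python
# A minimal sketch of text difficulty as a regression problem.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

texts = [
    "The cat sat on the mat.",
    "Photosynthesis converts light energy into chemical energy.",
    "Quantum chromodynamics describes the strong interaction between quarks.",
    "Dogs like to play fetch in the park.",
] * 10                                               # repeated so cross-validation has enough rows
difficulty = np.array([-1.0, 0.5, 1.5, -0.8] * 10)   # continuous target: easier < harder

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
scores = cross_val_score(model, texts, difficulty, cv=5,
                         scoring="neg_root_mean_squared_error")
print(f"RMSE: {-scores.mean():.3f} +/- {scores.std():.3f}")
```

On a real dataset the same pipeline would simply be fit on the training excerpts and used to score new text, with the RMSE from cross-validation guiding model choice.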