
Artificial intelligence research company OpenAI has announced an AI system called Codex that translates natural language into programming code. The system is being released as a free API, at least for the time being.

Codex is more of a next-step product for OpenAI than something completely new. It builds on Copilot, a tool for use with Microsoft’s GitHub code repository. With the earlier product, users got suggestions similar to Google’s autocomplete, except that it helped finish lines of code. Codex takes that concept a huge step forward by accepting sentences written in English and translating them into runnable code. As an example, a user could ask the system to create a web page with a certain name at the top and four evenly sized panels below, numbered one through four. Codex would then attempt to create the page by generating the necessary code in whatever language (JavaScript, Python, etc.) it deemed appropriate. The user could then send additional English commands to build the website piece by piece.
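To make that workflow concrete, here is a hand-written sketch of the kind of exchange described above: an English instruction (shown as a comment) followed by the sort of runnable code such a system might produce. This is an illustrative example, not actual Codex output, and the function name and page layout are invented for the demonstration.

```python
# Prompt (English instruction the user might give):
# "Create a web page titled 'Dashboard' with four evenly sized
#  panels below the title, numbered one through four."

def build_page(title, num_panels=4):
    """Return a minimal HTML page with `num_panels` evenly sized panels."""
    panels = "\n".join(
        f'    <div class="panel">{i}</div>'
        for i in range(1, num_panels + 1)
    )
    # Double braces ({{ }}) escape literal CSS braces inside the f-string.
    return f"""<!DOCTYPE html>
<html>
<head>
  <title>{title}</title>
  <style>
    .grid {{ display: grid; grid-template-columns: repeat({num_panels}, 1fr); }}
    .panel {{ border: 1px solid #333; padding: 1em; text-align: center; }}
  </style>
</head>
<body>
  <h1>{title}</h1>
  <div class="grid">
{panels}
  </div>
</body>
</html>"""

html = build_page("Dashboard")
```

A follow-up English command ("make the panels blue," say) would then be answered with an edit to this same code, which is the piece-by-piece building the article describes.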

Codex (and Copilot) parse written text using OpenAI’s language generation model, which can both generate and parse code. That flexibility let users apply Copilot in custom ways, one of which was reproducing programming code that others had written for the GitHub repository. This led many of those contributors to accuse OpenAI of using their code for profit, a charge that could very well be leveled against Codex as well, since much of the code it generates is simply copied from GitHub. Notably, OpenAI started out as a nonprofit entity in 2015 and changed to what it described as a “capped profit” entity in 2019, a move the company claimed would help it attract more funding from investors.

Houston-based ThirdAI, a company building tools to speed up deep learning technology without the need for specialized hardware like graphics processing units, brought in $6 million in seed funding.

Neotribe Ventures, Cervin Ventures and Firebolt Ventures co-led the investment, which will be used to hire additional employees and invest in computing resources, Anshumali Shrivastava, ThirdAI co-founder and CEO, told TechCrunch.

Shrivastava, who has a mathematics background, was always interested in artificial intelligence and machine learning, especially in rethinking how AI could be developed more efficiently. It was at Rice University that he looked into how to make that work for deep learning. He started ThirdAI in April with some Rice graduate students.

A company called Tombot thinks it’s come up with a way to improve the quality of life for seniors facing social challenges: a robotic companion dog that behaves and responds like a real pup, but without all the responsibilities of maintaining a living, breathing animal. The company even enlisted the talented folks at Jim Henson’s Creature Shop to help make the robo-dog look as lifelike as possible. It’s a noble effort, but it also raises lots of questions.

For starters, can robots actually be a good substitute for an animal companion? Replacing people with robots is a massive technological challenge—and one we’re not even close to accomplishing. Every time a multi-million dollar humanoid robot like Boston Dynamics’ ATLAS takes a nasty spill, we’re reminded that they’re nowhere near ready to interact with the average consumer. But robotic animals are a different story. It’s hard not to draw comparisons to a well-trained dog when seeing Boston Dynamics’ SpotMini in action. And even though it still comes with a price tag that soars to hundreds of thousands of dollars, there are robotic pets available on the other end of the affordability spectrum.
Sony’s Aibo, originally released 20 years ago, was so popular and beloved that owners in Japan regularly held funerals for their robo-dogs when they stopped working and replacement parts were no longer available. In late 2017, Sony brought its Aibo line back from the dead, and despite a $2,900 price tag and questionable smarts, it’s hard not to get drawn into interacting with the plastic pet as if it were a real puppy.

But Tombot isn’t the first company to create a robotic pet specifically designed to serve as an attentive companion. For the past decade, a $5,000 robotic seal called Paro has been comforting seniors and those dealing with long-term illnesses like Alzheimer’s. And a few years ago Hasbro introduced a ~$100 robotic cat and dog under its Joy For All line (which has since spun off into its own company called Ageless Innovation) that respond lovingly to physical interaction, or at least appear to. As long as you don’t need a robotic pet to fetch the paper, scare off intruders, or retrieve dead ducks, robots can effectively deliver at least some of the interactions and companionship a real-life pet can.
Can Tombot actually deliver the next generation of robotic companion pets? Enlisting experts like Jim Henson’s Creature Shop Creative Supervisor Peter Brooke and Animatronic Supervisor John Criswell was a good start. In addition to designing over-the-top Muppets, the studio has created lifelike animatronic animals for movies and TV, and with a deep understanding of how creatures move, its team delivered a design for a robotic dog that not only looks more like a real dog than a plush toy but moves like one as well.


I think SENS did this last year, but now AlphaFold 2 will make it easier and faster.


Hey, it’s Han from WrySci HX, discussing how breakthroughs on the protein folding problem by DeepMind’s AlphaFold 2 could combine with the SENS Research Foundation’s approach of allotopic mitochondrial gene expression to fight aging damage.


It’s no secret that AI is everywhere, yet it’s not always clear when we’re interacting with it, let alone which specific techniques are at play. But one subset is easy to recognize: If the experience is intelligent and involves photos or videos, or is visual in any way, computer vision is likely working behind the scenes.

Computer vision is a subfield of AI, specifically of machine learning. If AI allows machines to “think,” then computer vision is what allows them to “see.” More technically, it enables machines to recognize, make sense of, and respond to visual information like photos, videos, and other visual inputs.
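A toy sketch can illustrate what "seeing" means at the lowest level. The example below, written for this article rather than taken from any real system, detects edges in a tiny grayscale "image" by checking where pixel brightness jumps sharply. Production computer vision uses learned convolutional filters over far larger inputs, but the core idea is the same: visual understanding begins as arithmetic on pixel values.

```python
# A 3x6 grayscale "image": a dark region (0) meeting a bright region (255).
image = [
    [0, 0, 0, 255, 255, 255],
    [0, 0, 0, 255, 255, 255],
    [0, 0, 0, 255, 255, 255],
]

def horizontal_edges(img, threshold=128):
    """Flag pixels where brightness jumps sharply from the pixel to their left."""
    edges = []
    for row in img:
        edges.append([
            1 if x > 0 and abs(row[x] - row[x - 1]) > threshold else 0
            for x in range(len(row))
        ])
    return edges

edge_map = horizontal_edges(image)
# Each row flags the single column where dark meets bright.
```

Recognizing a face or a stop sign stacks many layers of filters like this one, with the filter weights learned from data instead of hand-chosen.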

Over the last few years, computer vision has become a major driver of AI. The technique is used widely in industries like manufacturing, ecommerce, agriculture, automotive, and medicine, to name a few. It powers everything from interactive Snapchat lenses to sports broadcasts, AR-powered shopping, medical analysis, and autonomous driving capabilities. And by 2022, the global market for the subfield is projected to reach $48.6 billion annually, up from just $6.6 billion in 2015.

I would say this is probably aimed at a few things. It’s a workaround for the national fight to raise the minimum wage. These kitchens will be out of sight and out of mind, so no one besides the workers will notice as they are gradually automated, reaching 100% by around 2027. Delivery, too, is being gradually and fully automated with long-distance drones and self-driving vehicles. And you can be sure every other chain is working on the same stuff.


The ‘ghost kitchens’ are coming to the UK, US and Canada.

A radical collaboration between a biologist and an engineer is supercharging efforts to protect grape crops. The technology they’ve developed, using robotics and AI to identify grape plants infected with a devastating fungus, will soon be available to researchers nationwide working on a wide array of plant and animal research.

The biologist, Lance Cadle-Davidson, Ph.D. ‘03, an adjunct professor in the School of Integrative Plant Science (SIPS), is working to develop grapes that are more resistant to powdery mildew, but his lab’s research was bottlenecked by the need to manually assess thousands of grape leaf samples for evidence of infection.

Powdery mildew, a fungus that attacks many plants including wine and table grapes, leaves sickly white spores across leaves and fruit and costs grape growers worldwide billions of dollars annually in lost fruit and fungicide costs.

For many years now, China has been the world’s factory. Even in 2020, as other economies struggled with the effects of the pandemic, China’s manufacturing output was $3.854 trillion, up from the previous year, accounting for nearly a third of the global market.

But if you are still thinking of China’s factories as sweatshops, it’s probably time to change your perception. The Chinese economic recovery from its short-lived pandemic blip has been boosted by its world-beating adoption of artificial intelligence (AI). After overtaking the U.S. in 2014, China now has a significant lead over the rest of the world in AI patent applications. In academia, China recently surpassed the U.S. in the number of both AI research publications and journal citations. Commercial applications are flourishing: a new wave of automation and AI infusion is crashing across a swath of sectors, combining software, hardware and robotics.

As a society, we have experienced three distinct industrial revolutions: steam power, electricity and information technology. I believe AI is the engine fueling the fourth industrial revolution globally, digitizing and automating everywhere. China is at the forefront in manifesting this unprecedented change.

We’ve seen a lot of electric vehicle growth and success stories in the past several years, but one area that’s been a bit of a letdown is the semi truck market. Unfortunately, we still don’t have the Tesla Semi, which was recently delayed until 2022. A big adjacent area of that market that “futurists” have long been excited about is self-driving trucks. Platoons of self-driving semi trucks are especially exciting, since tight, train-like caravans of semi trucks would use far less energy than the current system, and those trucks could much more easily be cost-competitive electric trucks with zero tailpipe emissions. Anyway, though, we’re getting ahead of ourselves again.

Doubtful. But I hope so; it will convince them to spend more money here to move AI research along faster.


TOKYO — China is overtaking the U.S. in artificial intelligence research, setting off alarm bells on the other side of the Pacific as the world’s two largest economies jockey for AI supremacy.

In 2020, China topped the U.S. for the first time in the number of times an academic article on AI is cited by others, a measure of the quality of a study. Until recently, the U.S. had been far ahead of other countries in AI research.

One reason China is coming on strong in AI is the ample data it generates. By 2030, an estimated 8 billion devices in China will be connected via the Internet of Things — a vast network of physical objects linked via the internet. These devices, mounted on cars, infrastructure, robots and other instruments, generate a huge amount of data.