
Tesla’s Full Self-Driving suite continues to improve with a recent video showing a Model 3 safely shifting away from a makeshift lane of construction cones while using Navigate on Autopilot.

Tesla owner-enthusiast Jeremy Greenlee was traveling through a highway construction zone in his Model 3. The zone contained a makeshift lane to the vehicle’s left that was made up of construction cones.

To avoid any possibility of a collision with the cones, the vehicle used the driver-assist system to automatically shift one lane to the right. The maneuver eliminated any risk of contact with the dense row of construction cones to the left of the car, which could have caused hundreds of dollars in cosmetic damage to the vehicle.

Sondors has just revealed three new high-powered electric bicycles: the Sondors Rockstar, Cruiser, and LX. The new models are part of the company's not-yet-released Elite line. They feature stunning new frame designs and huge 1 kWh batteries, though the low introductory prices might be the biggest shocker of all.

Taking a cross-country road trip in your electric vehicle is a little more feasible thanks to Electrify America. Its first coast-to-coast EV fast-charging route is now complete, and the company plans to have another route finished by September. The routes provide high-powered chargers for all EV brands, and the stations are spaced about 70 miles apart on average, so EV owners can travel beyond a single charge without being stranded.

The first route stretches over 2,700 miles from Washington, D.C., to Los Angeles. It follows Interstates 15 and 70 and passes through 11 states. The second route will connect Jacksonville and San Diego.

Elon Musk has sent a somewhat cryptic email to Tesla employees about going “all out” for the end of the quarter to have a “good outcome.”

On Monday afternoon, Musk sent a short email to all Tesla employees.

In the email, obtained by Electrek, the CEO addressed the importance of the last week of the quarter.

‘Intelligent concrete’ could cut down on road repairs and traffic.

Roads always seem to need repairs. Researcher Luna Lu is giving concrete the ability to “talk” and even heal itself.

Her lab at Purdue University is developing technology that would allow concrete-paved bridges and highways to reveal more accurately when they need repairs and to come equipped with materials that respond to potential damage.

Like many things about Elon Musk, Tesla’s approach to achieving autonomous driving is polarizing. Bucking the map-based trend set by industry veterans such as Waymo, Tesla opted to dedicate its resources to a vision-based approach instead. This involves a lot of hard, tedious work on Tesla’s part, but today there are indications that the company’s controversial strategy is finally paying off.

In a recent talk, Tesla AI Director Andrej Karpathy discussed the key differences between the map-based approach of Waymo and Tesla’s camera-based strategy. According to Karpathy, Waymo’s use of pre-mapped data and LiDAR makes scaling difficult, since each vehicle’s autonomous capabilities are practically tied to a geofenced area. Tesla’s vision-based approach, which uses cameras and artificial intelligence, is not tied to any such area. This means that Autopilot and FSD improvements can be rolled out to the entire fleet and will function anywhere.

This rather ambitious plan for Tesla’s full self-driving system has drawn a lot of skepticism in the past, with critics arguing that a map-based approach is the way to go. Tesla, in response, dug in its heels and doubled down on its vision-based initiative. Partly as a result, Autopilot improvements and the rollout of FSD features have taken a long time, particularly since training the neural networks that recognize objects and driving behavior on the road requires massive amounts of real-world data.

When the next Ford F-150 arrives on American roads, you’ll recognize it immediately even if you can’t see the emblem on its grille. The company published a preview image that reveals the truck’s LED lighting signature.

Posted on Twitter, the blacked-out photo is our first official look at the next-generation F-150 due out for the 2021 model year. It confirms the front end receives two pairs of LEDs that create the outline of a rectangle when lit. The top bars frame the headlights and stretch into the grille, while the lower bars underline the fog lights.

Our spies have regularly sent us images of camouflaged F-150 test mules taken all over the United States, so we have a decent idea of what to expect from the truck, and the preview image reveals nothing that we don’t already know. It wears a tall hood with sculpted sides, vertical headlights, and rectangular mirrors. Its design is more of an evolution than a revolution, but Ford hinted it’s making significant changes under the body panels.

When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006.

Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades. So, I set out to see just how easy it was: Could I, for example, quickly and cheaply make a camera that could recognize and track chosen objects?

Embedded machine learning is evolving rapidly. In April 2019, Hands On looked at Google’s Coral Dev Board, which incorporates the company’s Edge tensor processing unit (TPU), and in July 2019, IEEE Spectrum featured Adafruit’s software library, which lets even a handheld game device do simple speech recognition. The Jetson Nano is closer to the Coral Dev Board: with its 128 parallel processing cores, it is, like the Coral, powerful enough to handle a real-time video feed, and both boards have Raspberry Pi–style 40-pin GPIO connectors for driving external hardware.
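To give a sense of what “recognize and track chosen objects” looks like in practice on the Nano, here is a minimal sketch in the style of NVIDIA’s open-source jetson-inference examples. It assumes that library is installed and that a CSI camera and display are attached; singling out “person” as the tracked class is just an illustrative choice.

```python
# Minimal object-detection loop on the Jetson Nano, following the pattern of
# NVIDIA's jetson-inference sample code. Assumes the jetson-inference library
# is installed; camera URI, display, and the tracked class are illustrative.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # MIPI CSI camera
display = jetson.utils.videoOutput("display://0")  # attached screen

while display.IsStreaming():
    img = camera.Capture()
    # Detect() also overlays bounding boxes and labels on the frame by default
    for det in net.Detect(img):
        if net.GetClassDesc(det.ClassID) == "person":
            print(f"person at ({det.Center[0]:.0f}, {det.Center[1]:.0f}) "
                  f"conf={det.Confidence:.2f}")
    display.Render(img)
    display.SetStatus(f"detectNet | {net.GetNetworkFPS():.0f} FPS")
```

The pretrained SSD-MobileNet-v2 model runs on the Nano’s GPU cores, which is what makes a real-time video feed feasible on a $100 board.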

How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?

These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.

Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.
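Since DriveSeg pairs driving video with pixel-level semantic labels, a natural first experiment is to measure how much of each frame a given class occupies over time. Below is a minimal, hypothetical Python sketch; the directory layout, file naming, and class list are illustrative assumptions rather than the actual DriveSeg format.

```python
# Hypothetical sketch: average per-class pixel coverage over a sequence of
# per-frame segmentation masks. Paths and class IDs are assumptions, not
# the real DriveSeg layout.
from pathlib import Path

import numpy as np
from PIL import Image

CLASSES = ["road", "vehicle", "pedestrian"]  # assumed to map to IDs 0, 1, 2

def class_coverage(mask_dir: str) -> dict:
    """Fraction of pixels each class occupies, averaged over all frames."""
    frames = sorted(Path(mask_dir).glob("*.png"))
    totals = np.zeros(len(CLASSES))
    for frame in frames:
        mask = np.asarray(Image.open(frame))  # one integer class ID per pixel
        for idx in range(len(CLASSES)):
            totals[idx] += (mask == idx).mean()
    return {name: totals[i] / len(frames) for i, name in enumerate(CLASSES)}

print(class_coverage("driveseg/masks"))  # hypothetical path
```

Treating the labels as a time series like this, rather than as isolated images, reflects the “continuous flow” framing the dataset is built around.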

“Beam me up” is one of the most famous catchphrases from the Star Trek series. It is the command issued when a character wishes to teleport from a remote location back to the Starship Enterprise.

While human teleportation exists only in science fiction, teleportation is possible in the subatomic world of quantum mechanics—albeit not in the way typically depicted on TV. In the quantum world, teleportation involves the transportation of information, rather than the transportation of matter.
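The standard textbook protocol makes this concrete. If a sender holds a qubit in an unknown state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ and shares one half of a Bell pair $|\Phi^{+}\rangle$ with a receiver, the joint state can be rewritten in the Bell basis of the sender’s two qubits:

$$
|\psi\rangle \otimes |\Phi^{+}\rangle = \frac{1}{2}\Big[\,|\Phi^{+}\rangle(\alpha|0\rangle + \beta|1\rangle) + |\Phi^{-}\rangle(\alpha|0\rangle - \beta|1\rangle) + |\Psi^{+}\rangle(\alpha|1\rangle + \beta|0\rangle) + |\Psi^{-}\rangle(\alpha|1\rangle - \beta|0\rangle)\,\Big]
$$

Measuring the sender’s two qubits in the Bell basis collapses the receiver’s qubit into one of the four bracketed states, and two classical bits reporting which outcome occurred let the receiver apply the matching Pauli correction to recover $|\psi\rangle$ exactly. Only those two bits travel; the original particle never moves.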

Last year, scientists confirmed that information could be passed between photons even when the photons were not physically linked.