AI saving whales, steadying gaits and banishing traffic


Research into machine learning and artificial intelligence, now a staple technology in nearly every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the latest discoveries and related research papers, particularly in but not limited to artificial intelligence, and explain why they matter.

Over the past few weeks, researchers at the Massachusetts Institute of Technology detailed their work on a system that tracks the progression of Parkinson’s patients by continuously monitoring their walking speed. Elsewhere, Whale Safe, a project led by the Benioff Ocean Science Laboratory and partners, launched buoys equipped with AI-powered sensors in an experiment to prevent ships from colliding with whales. Other corners of environmental research and academia also saw advances powered by machine learning.

MIT’s effort to track Parkinson’s disease aims to help clinicians overcome challenges in treating the estimated 10 million people living with the disease globally. The motor skills and cognitive functions of Parkinson’s patients are typically evaluated during clinical visits, but those assessments can be skewed by outside factors such as fatigue. Add to that the fact that traveling to a doctor’s office is too taxing for many patients, and the picture grows bleaker.

As an alternative, the MIT team proposes an in-home device that collects data using radio signals reflected off the patient’s body as they move around their home. About the size of a Wi-Fi router, the device runs all day and uses an algorithm to pick out the patient’s signals even when other people are moving around the room.
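The study itself is the definitive account of how the gait analysis works; purely as an illustration of the underlying idea, estimating walking speed from a stream of distance-to-person measurements, here is a minimal Python sketch (the sampling rate, smoothing window and simulated data are assumptions, not details of the MIT system):

```python
import numpy as np

def gait_speed(ranges_m: np.ndarray, sample_rate_hz: float = 10.0,
               smooth_window: int = 5) -> float:
    """Estimate average walking speed (m/s) from a series of
    distance-to-person measurements, e.g. derived from reflected
    radio signals. Hypothetical illustration, not MIT's algorithm."""
    # Smooth the range track to suppress measurement noise.
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.convolve(ranges_m, kernel, mode="valid")
    # Speed is the magnitude of change in distance per unit time.
    dt = 1.0 / sample_rate_hz
    speeds = np.abs(np.diff(smoothed)) / dt
    return float(np.mean(speeds))

# Example: a person walking away from the sensor at about 1 m/s.
t = np.arange(0, 10, 0.1)
ranges = 2.0 + 1.0 * t + np.random.normal(0, 0.05, t.size)
print(round(gait_speed(ranges), 2))  # roughly 1.0
```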

In a study published in the journal Science Translational Medicine, the MIT researchers showed that their device was able to effectively track Parkinson’s progression and severity across dozens of participants in a pilot study. For example, they showed that walking speed declined almost twice as fast for people with Parkinson’s disease as for those without, and that daily fluctuations in a patient’s walking speed corresponded to how well they were responding to their medication.

Moving from healthcare to the plight of whales, the Whale Safe project, whose stated mission is to “leverage best-in-class technology with best-practice conservation strategies to find a solution to reduce risks to whales,” in late September deployed buoys equipped with compact computers that record whale sounds through an underwater microphone. An AI system detects the calls of particular species and relays the results to a researcher, so that the location of the animal, or animals, can be calculated by corroborating the data with water conditions and local records of whale sightings. The whale locations are then communicated to nearby ships so they can reroute as necessary.
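Whale Safe hasn’t published implementation details here, but the detect, classify and report loop the project describes can be sketched at a high level. Everything below, the function names, the Detection type and the reporting callback, is a hypothetical stand-in rather than the project’s API:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Detection:
    species: str        # e.g. "blue whale"
    confidence: float   # classifier score, 0..1
    timestamp: float    # seconds since recording started

def monitor_hydrophone(audio_clips: Iterable[bytes],
                       classify: Callable[[bytes], Detection],
                       report: Callable[[Detection], None],
                       threshold: float = 0.8) -> None:
    """Hypothetical buoy loop: classify each clip of underwater audio
    and forward confident whale-call detections to researchers, who
    combine them with ocean conditions and sighting records."""
    for clip in audio_clips:
        detection = classify(clip)
        if detection.confidence >= threshold:
            report(detection)
```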

Ship collisions are a major cause of whale deaths, and many of the affected species are endangered. According to research carried out by the nonprofit Friend of the Sea, ship strikes kill more than 20,000 whales every year. That is devastating for local ecosystems, as whales play an important role in capturing carbon from the atmosphere: a single large whale can sequester around 33 tons of carbon dioxide on average.

Image Credits: Benioff Ocean Science Laboratory

Whale Safe currently has buoys deployed in the Santa Barbara Channel near the ports of Los Angeles and Long Beach. In the future, the project aims to install buoys in other US coastal regions including Seattle, Vancouver and San Diego.

Forest conservation is another area in which technology is being put to use. Surveys of forest land from above using lidar help estimate growth and other metrics, but the data they produce aren’t always easy to read. Lidar point clouds are just undifferentiated maps of elevation and distance; a forest comes out as one large surface, not a collection of individual trees, which tend to have to be tracked by people on the ground.

Bordeaux researchers have created an algorithm (not strictly AI, but we’ll allow it this time) that turns a large volume of 3D lidar data into individually segmented trees, allowing not only canopy and growth data but also a good estimate of the actual number of trees to be collected. It does this by calculating the most efficient path from a given point down to the ground, essentially the reverse of the path nutrients take up a tree. The results are quite accurate (checked against in-person inventories) and could contribute to far better tracking of forests and resources in the future.
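The Bordeaux paper is the authority on the method; as a rough sketch of the underlying idea, assigning each lidar point to the trunk base it can reach by the cheapest path through the point cloud, here is an illustrative Python version built on off-the-shelf graph tools (the neighborhood size and labeling scheme are assumptions, not the authors’ code):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import dijkstra

def segment_trees(points: np.ndarray, base_idx: np.ndarray, k: int = 10) -> np.ndarray:
    """Assign each lidar point a tree label by finding which trunk-base
    point it reaches by the cheapest path through the cloud. An
    illustrative stand-in for the paper's approach, not its code.

    points:   (N, 3) array of x, y, z coordinates
    base_idx: indices of points identified as trunk bases near the ground
    """
    # Link each point to its k nearest neighbours, weighted by distance.
    graph = kneighbors_graph(points, n_neighbors=k, mode="distance")
    graph = graph.maximum(graph.T)  # symmetrize the neighbourhood graph
    # Multi-source shortest paths: every point is claimed by whichever
    # trunk base reaches it most cheaply.
    _, _, sources = dijkstra(graph, directed=False, indices=base_idx,
                             return_predecessors=True, min_only=True)
    bases = np.sort(base_idx)
    labels = np.searchsorted(bases, sources)  # map base node id -> tree label
    labels[sources < 0] = -1                  # points not connected to any base
    return labels
```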

Self-driving cars are popping up on our streets more frequently these days, even if they are still primarily beta tests. With their numbers increasing, how should policymakers and civil engineers accommodate them? Carnegie Mellon researchers have drawn up a policy brief that offers some interesting arguments.

Diagram showing how collaborative decision-making, in which a few cars choose a longer route, actually makes traffic faster for most cars.

They argue that a key difference is that autonomous vehicles can drive “altruistically,” that is, they deliberately accommodate other drivers, for example by always letting other drivers merge in front of them. This kind of behavior can be taken advantage of, but at a policy level it should be rewarded, they argue, and autonomous vehicles should be given access to things like toll roads and HOV and bus lanes, since they wouldn’t use them “selfishly.”
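The brief’s models are more sophisticated, but a toy congestion calculation shows why a few cooperative detours can lower everyone’s travel time, the effect the diagram above illustrates. The route times and coefficients below are invented for illustration, not taken from the report:

```python
def average_travel_time(n_short: int, n_long: int) -> float:
    """Toy congestion model: the short route slows sharply as it fills,
    the longer route barely does. Numbers are illustrative only."""
    t_short = 10 + 0.10 * n_short   # minutes per car on the short route
    t_long = 15 + 0.02 * n_long     # minutes per car on the longer route
    total = n_short + n_long
    return (n_short * t_short + n_long * t_long) / total

print(average_travel_time(100, 0))   # everyone takes the short route: 20.0 min on average
print(average_travel_time(70, 30))   # 30 cars divert: about 16.6 min on average
```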

They also recommend that planning agencies take a realistic view when making decisions, factoring in other modes of transportation such as bikes and scooters, and consider how to require or encourage inter-vehicle and inter-fleet communication. You can read the full 23-page report here (PDF).

Moving from traffic to translation, Meta last week announced a new system, Universal Speech Translator, designed to translate unwritten languages such as Hokkien. As an Engadget piece on the system notes, thousands of spoken languages have no written component, which poses a problem for most machine learning translation systems, which typically need to convert speech into written words before translating into the new language and turning the text back into speech.

To get around the lack of labeled examples of the language, Universal Speech Translator converts speech into “sound units” and generates waveforms directly from them. For now the system is fairly limited in what it can do: it lets speakers of Hokkien, a language widely spoken in southeastern China, translate to English one full sentence at a time. But the Meta research team behind Universal Speech Translator believes it will keep improving.
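Meta’s paper and code release describe the actual models; purely as a conceptual sketch of a textless speech-to-speech pipeline, the flow looks something like the following, where every function is a hypothetical placeholder rather than Meta’s API:

```python
from typing import Callable, List
import numpy as np

def translate_speech(source_audio: np.ndarray,
                     encode_units: Callable[[np.ndarray], List[int]],
                     translate_units: Callable[[List[int]], List[int]],
                     vocode: Callable[[List[int]], np.ndarray]) -> np.ndarray:
    """Hypothetical speech-to-speech flow for an unwritten language:
    no written text is produced at any step.
      1. encode_units:    map source speech to discrete "sound units"
      2. translate_units: map source units to target-language units
      3. vocode:          synthesize a waveform from the target units"""
    source_units = encode_units(source_audio)
    target_units = translate_units(source_units)
    return vocode(target_units)
```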

Illustration of AlphaTensor

Elsewhere in the field of artificial intelligence, researchers at DeepMind detailed AlphaTensor, which the Alphabet-backed lab claims is the first artificial intelligence system to discover new, efficient and “provably correct” algorithms. AlphaTensor was designed specifically to find new techniques for matrix multiplication, a computational operation that is fundamental to the way modern machine learning systems work.

To build AlphaTensor, DeepMind turned the problem of finding matrix multiplication algorithms into a single-player game in which the “board” is a three-dimensional array of numbers called a tensor. According to DeepMind, AlphaTensor learned to excel at it, improving on an algorithm first discovered 50 years ago and discovering new algorithms with “state-of-the-art” complexity. One algorithm the system discovered, optimized for hardware such as Nvidia’s V100 GPU, was 10% to 20% faster than the algorithms commonly used on the same hardware.
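For context on what a faster matrix multiplication algorithm looks like, the 50-year-old benchmark in question is generally identified as Strassen’s 1969 method, which multiplies two 2x2 (block) matrices with seven scalar multiplications instead of the naive eight. The sketch below shows that trade-off; AlphaTensor searches for decompositions of this kind at larger block sizes:

```python
import numpy as np

def strassen_2x2(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the naive 8. Applied recursively to matrix blocks, this
    reduces the asymptotic cost of matrix multiplication."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
assert np.array_equal(strassen_2x2(A, B), A @ B)  # matches the naive product
```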
