Written by Jennifer Chu, MIT News Office
NASA hit a bullseye in late September with DART, the Double Asteroid Redirection Test, which flew a spacecraft straight at the heart of a nearby asteroid. The one-way kamikaze mission smashed into the stadium-sized space rock and successfully altered the asteroid’s orbit. DART was the first test of a planetary defense strategy, demonstrating that scientists could potentially deflect an asteroid headed for Earth.
Now MIT researchers have a tool that may improve the aim of future asteroid-targeting missions. The team has developed a method to map an asteroid’s interior structure, or density distribution, based on how the asteroid’s spin changes as it makes a close encounter with more massive objects like the Earth.
Knowing how the density is distributed inside an asteroid could help scientists plan the most effective defense. For instance, if an asteroid were made of relatively light and uniform matter, a DART-like spacecraft could be aimed differently than if it were deflecting an asteroid with a denser, less balanced interior.
“If you know the density distribution of the asteroid, you could hit it at just the right spot so it actually moves away,” says Jack Dinsmore ’22, who developed the new asteroid-mapping technique as an MIT undergraduate majoring in physics.
The team is eager to apply the method to Apophis, a near-Earth asteroid that would pose a significant hazard if it were ever to strike Earth. Scientists have ruled out a collision during Apophis’ flybys for at least the next century. Beyond that, their forecasts grow fuzzy.
“Apophis will miss Earth in 2029, and scientists have cleared it for its next few encounters, but we can’t clear it forever,” says Dinsmore, who is now a graduate student at Stanford University. “So, it’s good to understand the nature of this particular asteroid, because if we ever need to redirect it, it’s important to understand what it’s made of.”
Dinsmore and Julien de Wit, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), detail their new method in a study appearing today in the Monthly Notices of the Royal Astronomical Society.
Spinning boiled versus raw
The seeds of the team’s asteroid-mapping method grew out of an MIT class Dinsmore took last year, taught by de Wit. The class, 12.401 (Essentials of Planetary Sciences), introduces the basic principles and formation mechanisms of planets, asteroids, and other objects in the solar system. As a final project, Dinsmore explored how an asteroid behaves during a close encounter.
In class, he wrote a code to simulate various shapes and sizes of asteroids as well as how their orbital and spin dynamics change when influenced by the gravitational pull of a more massive object like the Earth.
“I initially just tried to ask, what happens when an asteroid passes by Earth? Does it respond at all? Because I wasn’t sure,” Dinsmore recalls. “And the answer is, it does, in a way that depends very strongly on the shape and physical properties of the asteroid.”
That initial realization prompted another question: Could the dynamics of an asteroid’s close encounter be used to predict not just its shape and size, but also its internal makeup? To get at an answer, Dinsmore continued the project with de Wit, through the MIT Undergraduate Research Opportunities Program (UROP), which enables students to perform original research with a faculty member.
He and de Wit took a deeper dive into the dynamics of a close encounter, writing out a more complex code, which they used to simulate a zoo of different asteroids, each with a different size, shape, and internal composition, or distribution of density. They then ran the simulation forward to see how each asteroid’s spin should wobble or shift as it passes close to an object of a certain mass and gravitational pull.
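The physics the simulations rest on can be sketched in a few lines. Below is a minimal, illustrative simulation (not the authors’ code; the asteroid’s inertia tensor, the flyby geometry, and the step sizes are all invented for the example) of how the gravity-gradient, or tidal, torque from a massive body perturbs a rigid asteroid’s spin during a straight-line flyby:

```python
import numpy as np

# Illustrative sketch only (not the authors' AIME code): gravity-gradient
# (tidal) torque on a rigid asteroid during a straight-line Earth flyby,
# integrated together with Euler's rigid-body equations.
G = 6.674e-11        # gravitational constant (SI units)
M_EARTH = 5.972e24   # mass of Earth, kg

# Principal moments of inertia (kg m^2) of a hypothetical elongated asteroid;
# an asymmetric inertia tensor is what couples the tidal field to the spin.
I = np.diag([4.0e15, 5.0e15, 6.0e15])
I_inv = np.linalg.inv(I)

def simulate_flyby(omega0, v=7000.0, b=4.0e7, t_span=2.0e4, dt=2.0):
    """Return the body-frame spin vector (rad/s) after the encounter.

    v: flyby speed (m/s); b: impact parameter (m); the Earth is treated as a
    point mass moving on a straight line past the asteroid.
    """
    omega = np.array(omega0, dtype=float)
    R = np.eye(3)  # body-to-inertial rotation matrix
    for t in np.arange(-t_span, t_span, dt):
        r_inertial = np.array([v * t, b, 0.0])  # Earth relative to asteroid
        r_body = R.T @ r_inertial               # rotate into the body frame
        r = np.linalg.norm(r_body)
        rhat = r_body / r
        # Gravity-gradient torque: (3 G M / r^3) * rhat x (I rhat)
        torque = 3.0 * G * M_EARTH / r**3 * np.cross(rhat, I @ rhat)
        # Euler's equations: I dw/dt = torque - w x (I w)
        omega += dt * I_inv @ (torque - np.cross(omega, I @ omega))
        # First-order orientation update, Rdot = R [omega]_x (fine for a sketch)
        wx, wy, wz = omega
        R = R + dt * R @ np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
    return omega

omega_final = simulate_flyby([1e-4, 0.0, 2e-4])
```

Rerunning the same flyby with a different inertia tensor `I` (that is, a different internal density distribution) yields a different final spin; inverting that relationship, from observed spin change back to interior structure, is the essence of the team’s approach.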
“It’s similar to how you can tell the difference between a raw and boiled egg,” de Wit offers. “If you spin the egg, the egg responds and spins differently depending on its interior properties. The same goes for an asteroid during a close encounter: You can get a grasp of what’s happening on the inside just by looking at how it responds to the strong gravitational forces it experiences during a flyby.”
A close match
The team is presenting their results in a new software “toolkit,” which they have named AIME, for Asteroid Interior Mapping from Encounters (the acronym also spells a French word for “love”). The software can be used to reconstruct the internal density distribution of an asteroid from observations of its spin change during a close encounter.
The researchers say that, if scientists can take more detailed measurements of asteroids and their spin dynamics during close encounters, these measurements could be used to improve AIME’s reconstructions of asteroid interiors.
Their best chance, they say, may come with Apophis. During its forthcoming close encounters, de Wit and Dinsmore hope astronomers will point their telescopes at the space rock to measure its size, shape, and spin evolution as it streaks past. They could then feed these measurements into AIME to find a match: a simulated asteroid with the same size, shape, and spin dynamics as Apophis, which would point to a particular interior density distribution.
“Then, with AIME, you could publish a density map that most likely represents Apophis’ interior,” Dinsmore says.
“Understanding the interior properties of asteroids helps us understand the extent to which close encounters could be of concern, and how to deal with them, as well as where they formed and how they got here,” de Wit adds. “Now with this framework, there’s a new way of getting a look inside an asteroid.”
This research was supported, in part, by the MIT UROP office.
A far-sighted approach to machine learning
New system can teach a group of cooperative or competitive AI agents to find an optimal long-term solution
Written by Adam Zewe, MIT News Office
Picture two teams squaring off on a football field. The players can cooperate to achieve an objective, and compete against other players with conflicting interests. That’s how the game works.
Creating artificial intelligence agents that can learn to compete and cooperate as effectively as humans remains a thorny problem. A key challenge is enabling AI agents to anticipate future behaviors of other agents when they are all learning simultaneously.
Because of the complexity of this problem, current approaches tend to be myopic; the agents can only guess the next few moves of their teammates or competitors, which leads to poor performance in the long run.
Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a new approach that gives AI agents a farsighted perspective. Their machine-learning framework enables cooperative or competitive AI agents to consider what other agents will do as time approaches infinity, not just over the next few steps. The agents then adapt their behaviors accordingly to influence other agents’ future behaviors and arrive at an optimal, long-term solution.
This framework could be used by a group of autonomous drones working together to find a lost hiker in a thick forest, or by self-driving cars that strive to keep passengers safe by anticipating future moves of other vehicles driving on a busy highway.
“When AI agents are cooperating or competing, what matters most is when their behaviors converge at some point in the future. There are a lot of transient behaviors along the way that don’t matter very much in the long run. Reaching this converged behavior is what we really care about, and we now have a mathematical way to enable that,” says Dong-Ki Kim, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper describing this framework.
The senior author is Jonathan P. How, the Richard C. Maclaurin Professor of Aeronautics and Astronautics and a member of the MIT-IBM Watson AI Lab. Co-authors include others at the MIT-IBM Watson AI Lab, IBM Research, Mila-Quebec Artificial Intelligence Institute, and Oxford University. The research will be presented at the Conference on Neural Information Processing Systems.
More agents, more problems
The researchers focused on a problem known as multiagent reinforcement learning. Reinforcement learning is a form of machine learning in which an AI agent learns by trial and error. Researchers give the agent a reward for “good” behaviors that help it achieve a goal. The agent adapts its behavior to maximize that reward until it eventually becomes an expert at a task.
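The trial-and-error loop described above is easiest to see in a single-agent toy problem. The sketch below is textbook Q-learning on a five-state corridor (illustrative only; it is not the researchers’ algorithm): the agent is rewarded for reaching a goal state, and over many episodes its value estimates converge on the policy of always moving toward the goal.

```python
import random

# Textbook Q-learning on a 1-D corridor: states 0..4, goal at state 4,
# actions move left (-1) or right (+1). Purely illustrative.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0  # reward only for reaching the goal
    return nxt, reward

random.seed(0)
for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit the current estimate
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # Nudge Q(s, a) toward the reward plus the discounted future value
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy: move right (+1) from every non-goal state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

With a single learner this converges cleanly; the difficulty the article describes arises when several such agents learn at once, so the "environment" each one faces keeps shifting under it.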
But when many cooperative or competing agents are simultaneously learning, things become increasingly complex. As agents consider more future steps of their fellow agents, and how their own behavior influences others, the problem soon requires far too much computational power to solve efficiently. This is why other approaches only focus on the short term.
“The AIs really want to think about the end of the game, but they don’t know when the game will end. They need to think about how to keep adapting their behavior into infinity so they can win at some far time in the future. Our paper essentially proposes a new objective that enables an AI to think about infinity,” says Kim.
But since it is impossible to plug infinity into an algorithm, the researchers designed their system so agents focus on a future point where their behavior will converge with that of other agents, known as equilibrium. An equilibrium point determines the long-term performance of agents, and multiple equilibria can exist in a multiagent scenario. Therefore, an effective agent actively influences the future behaviors of other agents in such a way that they reach an equilibrium that is desirable from the agent’s perspective. If all agents influence each other’s future behaviors in this way, they reach what the researchers call an “active equilibrium.”
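In standard reinforcement-learning notation, “thinking about infinity” amounts to replacing the usual discounted objective with an average-reward criterion that weights behavior as time goes to infinity (a standard formulation given here for context; the paper’s exact objective may differ):

```latex
% Discounted objective: gamma < 1 downweights the far future, so agents
% optimizing it are effectively myopic.
J_{\gamma}(\pi) = \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_t\right]

% Average-reward objective: only behavior as t -> infinity matters, i.e.,
% the converged, long-run behavior of the interacting agents.
J_{\mathrm{avg}}(\pi) = \lim_{T \to \infty} \frac{1}{T}\,
    \mathbb{E}_{\pi}\left[\sum_{t=0}^{T-1} r_t\right]
```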
The machine-learning framework they developed, known as FURTHER (which stands for FUlly Reinforcing acTive influence witH averagE Reward), enables agents to learn how to adapt their behaviors as they interact with other agents to achieve this active equilibrium.
FURTHER does this using two machine-learning modules. The first, an inference module, enables an agent to guess the future behaviors of other agents and the learning algorithms they use, based solely on their prior actions.
This information is fed into the reinforcement learning module, which the agent uses to adapt its behavior and influence other agents in a way that maximizes its reward.
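This infer-then-respond loop has a classic, much simpler ancestor in game theory: fictitious play. The sketch below (illustrative only, not the FURTHER algorithm) has each agent keep counts of the other’s past actions as a crude “inference module,” then best-respond to that empirical model; in a simple coordination game the two agents converge on matching actions.

```python
import numpy as np

# Fictitious play in a 2x2 coordination game (illustrative; not FURTHER).
# Each agent (1) infers the other's behavior from its observed past actions,
# then (2) picks the action that maximizes expected payoff under that model.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])  # both agents are rewarded for matching

# Each agent's counts of the other's past actions (ones = uniform prior)
counts = [np.ones(2), np.ones(2)]

for t in range(200):
    actions = []
    for i in (0, 1):
        belief = counts[i] / counts[i].sum()      # inference step
        expected = payoff @ belief                # expected payoff per action
        actions.append(int(np.argmax(expected)))  # best-response step
    for i in (0, 1):
        counts[i][actions[1 - i]] += 1            # update model of the other
```

Here the agents quickly lock into a stable joint behavior. FURTHER targets the same kind of converged endpoint, but in the far harder setting where the other agents are themselves learning and changing their strategies over time.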
“The challenge was thinking about infinity. We had to use a lot of different mathematical tools to enable that, and make some assumptions to get it to work in practice,” Kim says.
Winning in the long run
They tested their approach against other multiagent reinforcement learning frameworks in several different scenarios, including a pair of robots fighting sumo-style and a battle pitting two 25-agent teams against one another. In both instances, the AI agents using FURTHER won the games more often.
Since their approach is decentralized, which means the agents learn to win the games independently, it is also more scalable than other methods that require a central computer to control the agents, Kim explains.
The researchers used games to test their approach, but FURTHER could be used to tackle any kind of multiagent problem. For instance, it could be applied by economists seeking to develop sound policy in situations where many interacting entities have behaviors and interests that change over time.
Economics is one application Kim is particularly excited about studying. He also wants to dig deeper into the concept of an active equilibrium and continue enhancing the FURTHER framework.
This research is funded, in part, by the MIT-IBM Watson AI Lab.
Flocks of assembler robots show potential for making larger structures
Researchers make progress toward groups of robots that could build almost anything, including buildings, vehicles, and even bigger robots
Written by David L. Chandler, MIT News Office
Researchers at MIT have made significant steps toward creating robots that could practically and economically assemble nearly anything, including things much larger than themselves, from vehicles to buildings to larger robots.
The new work, from MIT’s Center for Bits and Atoms (CBA), builds on years of research, including recent studies demonstrating that objects such as a deformable airplane wing and a functional racing car could be assembled from tiny identical lightweight pieces — and that robotic devices could be built to carry out some of this assembly work. Now, the team has shown that both the assembler bots and the components of the structure being built can all be made of the same subunits, and the robots can move independently in large numbers to accomplish large-scale assemblies quickly.
The new work is reported in the journal Nature Communications Engineering, in a paper by CBA doctoral student Amira Abdel-Rahman, Professor and CBA Director Neil Gershenfeld, and three others.
A fully autonomous self-replicating robot assembly system capable of both assembling larger structures, including larger robots, and planning the best construction sequence is still years away, Gershenfeld says. But the new work makes important strides toward that goal, including working out the complex tasks of when to build more robots and how big to make them, as well as how to organize swarms of bots of different sizes to build a structure efficiently without crashing into each other.
As in previous experiments, the new system involves large, usable structures built from an array of tiny identical subunits called voxels (the volumetric equivalent of a 2-D pixel). But while earlier voxels were purely mechanical structural pieces, the team has now developed complex voxels that each can carry both power and data from one unit to the next. This could enable the building of structures that can not only bear loads but also carry out work, such as lifting, moving and manipulating materials — including the voxels themselves.
“When we’re building these structures, you have to build in intelligence,” Gershenfeld says. While earlier versions of assembler bots were connected by bundles of wires to their power source and control systems, “what emerged was the idea of structural electronics — of making voxels that transmit power and data as well as force.” Looking at the new system in operation, he points out, “There’s no wires. There’s just the structure.”
The robots themselves consist of a string of several voxels joined end-to-end. These can grab another voxel using attachment points on one end, then move inchworm-like to the desired position, where the voxel can be attached to the growing structure and released there.
Gershenfeld explains that while the earlier system demonstrated by members of his group could in principle build arbitrarily large structures, as the size of those structures reached a certain point in relation to the size of the assembler robot, the process would become increasingly inefficient because of the ever-longer paths each bot would have to travel to bring each piece to its destination. At that point, with the new system, the bots could decide it was time to build a larger version of themselves that could reach longer distances and reduce the travel time. An even bigger structure might require yet another such step, with the new larger robots creating yet larger ones, while parts of a structure that include lots of fine detail may require more of the smallest robots.
As these robotic devices work on assembling something, Abdel-Rahman says, they face choices at every step along the way: “It could build a structure, or it could build another robot of the same size, or it could build a bigger robot.” Part of the work the researchers have been focusing on is creating the algorithms for such decision-making.
“For example, if you want to build a cone or a half-sphere,” she says, “how do you start the path planning, and how do you divide this shape” into different areas that different bots can work on? The software they developed allows someone to input a shape and get an output that shows where to place the first block, and each one after that, based on the distances that need to be traversed.
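A heavily simplified sketch of this kind of placement planning (illustrative only, not the CBA software; the depot location and the greedy rule are invented for the example): given a target shape as a set of voxel coordinates, produce a build order in which every voxel placed is adjacent to the structure built so far, preferring voxels closest to a supply depot.

```python
import heapq

# Greedy build-order planner (illustrative sketch, not the CBA software).
def build_order(shape, depot=(0, 0, 0)):
    shape = set(shape)

    def dist(v):  # Manhattan travel distance from the supply depot
        return sum(abs(a - b) for a, b in zip(v, depot))

    def neighbors(v):
        x, y, z = v
        return [(x+1, y, z), (x-1, y, z), (x, y+1, z),
                (x, y-1, z), (x, y, z+1), (x, y, z-1)]

    # Seed with the shape voxel closest to the depot
    start = min(shape, key=dist)
    order, placed = [start], {start}
    frontier = [(dist(n), n) for n in neighbors(start) if n in shape]
    heapq.heapify(frontier)
    while frontier:
        _, v = heapq.heappop(frontier)
        if v in placed:
            continue
        order.append(v)          # every placement touches the structure so far
        placed.add(v)
        for n in neighbors(v):
            if n in shape and n not in placed:
                heapq.heappush(frontier, (dist(n), n))
    return order

# Example: plan a 2x2x1 slab of voxels
slab = [(x, y, 0) for x in range(2) for y in range(2)]
plan = build_order(slab)
```

The real planner must go well beyond this, dividing the shape among multiple robots of different sizes, avoiding collisions, and deciding when to build new robots rather than more structure.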
There are thousands of papers published on route-planning for robots, Gershenfeld says. “But the step after that, of the robot having to make the decision to build another robot or a different kind of robot — that’s new. There’s really nothing prior on that.”
While the experimental system can carry out the assembly and includes the power and data links, in the current versions the connectors between the tiny subunits are not strong enough to bear the necessary loads. The team, including graduate student Miana Smith, is now focusing on developing stronger connectors. “These robots can walk and can place parts,” Gershenfeld says, “but we are almost — but not quite — at the point where one of these robots makes another one and it walks away. And that’s down to fine-tuning of things, like the force of actuators and the strength of joints. … But it’s far enough along that these are the parts that will lead to it.”
Ultimately, such systems might be used to construct a wide variety of large, high-value structures. For example, currently the way airplanes are built involves huge factories with gantries much larger than the components they build, and then “when you make a jumbo jet, you need jumbo jets to carry the parts of the jumbo jet to make it,” Gershenfeld says. With a system like this built up from tiny components assembled by tiny robots, “The final assembly of the airplane is the only assembly.”
Similarly, in producing a new car, “you can spend a year on tooling” before the first car actually gets built, he says. The new system would bypass that whole process. Such potential efficiencies are why Gershenfeld and his students have been working closely with car companies, aviation companies, and NASA. But even the relatively low-tech building construction industry could also benefit.
While there has been increasing interest in 3-D-printed houses, today those require printing machinery as large as or larger than the house being built. Here too, assembly by swarms of tiny robots could provide benefits. And the Defense Advanced Research Projects Agency is also interested in the work for the possibility of building structures for coastal protection against erosion and sea-level rise.
The research team also included MIT-CBA student Benjamin Jenett and Christopher Cameron, who is now at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory, and CBA consortia funding.
Study: Automation drives income inequality
New data suggest most of the growth in the wage gap since 1980 comes from automation displacing less-educated workers
Written by Peter Dizikes, MIT News
When you use self-checkout machines in supermarkets and drugstores, you are probably not — with all due respect — doing a better job of bagging your purchases than checkout clerks once did. Automation just makes bagging less expensive for large retail chains.
“If you introduce self-checkout kiosks, it’s not going to change productivity all that much,” says MIT economist Daron Acemoglu. However, in terms of lost wages for employees, he adds, “It’s going to have fairly large distributional effects, especially for low-skill service workers. It’s a labor-shifting device, rather than a productivity-increasing device.”
A newly published study co-authored by Acemoglu quantifies the extent to which automation has contributed to income inequality in the U.S., simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices. Over the last four decades, the income gap between more- and less-educated workers has grown significantly; the study finds that automation accounts for more than half of that increase.
“This single one variable … explains 50 to 70 percent of the changes or variation in between-group inequality from 1980 to about 2016,” Acemoglu says.
The paper, “Tasks, Automation, and the Rise in U.S. Wage Inequality,” is being published in Econometrica. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.
So much “so-so automation”
Since 1980 in the U.S., the inflation-adjusted incomes of those with college and postgraduate degrees have risen substantially, while the inflation-adjusted earnings of men without high school degrees have dropped by 15 percent.
How much of this change is due to automation? Growing income inequality could also stem from, among other things, the declining prevalence of labor unions, market concentration begetting a lack of competition for labor, or other types of technological change.
To conduct the study, Acemoglu and Restrepo used U.S. Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016, as well as data on machinery and software adopted in that time. The scholars also used data they had previously compiled about the adoption of robots in the U.S. from 1993 to 2014. In previous studies, Acemoglu and Restrepo have found that robots have by themselves replaced a substantial number of workers in the U.S., helped some firms dominate their industries, and contributed to inequality.
At the same time, the scholars used U.S. Census Bureau metrics, including its American Community Survey data, to track worker outcomes during this time for roughly 500 demographic subgroups, broken out by gender, education, age, race and ethnicity, and immigration status, while looking at employment, inflation-adjusted hourly wages, and more, from 1980 to 2016. By examining the links between changes in business practices alongside changes in labor market outcomes, the study can estimate what impact automation has had on workers.
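In spirit, this links a group-level exposure measure to group-level wage changes. The toy regression below uses entirely synthetic data (the numbers and the single-regressor setup are invented; the actual study’s estimation is far richer) just to show the shape of such an exercise:

```python
import numpy as np

# Toy illustration on synthetic data (not the paper's dataset or model):
# regress each demographic group's wage change on its "task displacement"
# exposure, the kind of group-level relationship the study estimates.
rng = np.random.default_rng(0)
n_groups = 500
displacement = rng.uniform(0, 0.3, n_groups)   # share of group tasks automated
# Synthetic "truth": more exposed groups saw larger wage declines, plus noise
wage_change = -0.5 * displacement + rng.normal(0, 0.02, n_groups)

# Ordinary least squares with an intercept
X = np.column_stack([np.ones(n_groups), displacement])
beta, *_ = np.linalg.lstsq(X, wage_change, rcond=None)
# beta[1] recovers the (synthetic) effect of displacement on wages
```

The study’s actual identification is of course more involved, but the takeaway is the same: if one exposure variable explains most of the between-group variation in wage changes, it is a strong candidate driver of the wage gap.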
Ultimately, Acemoglu and Restrepo conclude that the effects have been profound. Since 1980, for instance, they estimate that automation has reduced the wages of men without a high school degree by 8.8 percent and women without a high school degree by 2.3 percent, adjusted for inflation.
A central conceptual point, Acemoglu says, is that automation should be regarded differently from other forms of innovation, with its own distinct effects in workplaces, and not just lumped in as part of a broader trend toward the implementation of technology in everyday life generally.
Consider again those self-checkout kiosks. Acemoglu calls these types of tools “so-so technology,” or “so-so automation,” because of the tradeoffs they contain: Such innovations are good for the corporate bottom line, bad for service-industry employees, and not hugely important in terms of overall productivity gains, the real marker of an innovation that may improve our overall quality of life.
“Technological change that creates or increases industry productivity, or productivity of one type of labor, creates [those] large productivity gains but does not have huge distributional effects,” Acemoglu says. “In contrast, automation creates very large distributional effects and may not have big productivity effects.”
A new perspective on the big picture
The results occupy a distinctive place in the literature on automation and jobs. Some popular accounts of technology have forecast a near-total wipeout of jobs in the future. Alternatively, many scholars have developed a more nuanced picture, in which technology disproportionately benefits highly educated workers but also produces significant complementarities between high-tech tools and labor.
The current study differs at least in degree from this latter picture, presenting a starker outlook in which automation reduces workers’ earning power and potentially limits the extent to which policy solutions, such as more bargaining power for workers or less market concentration, could mitigate automation’s detrimental effects on wages.
“These are controversial findings in the sense that they imply a much bigger effect for automation than anyone else has thought, and they also imply less explanatory power for other [factors],” Acemoglu says.
Still, he adds, in the effort to identify drivers of income inequality, the study “does not obviate other nontechnological theories completely. Moreover, the pace of automation is often influenced by various institutional factors, including labor’s bargaining power.”
Labor economists say the study is an important addition to the literature on automation, work, and inequality, and should be reckoned with in future discussions of these issues.
For their part, in the paper Acemoglu and Restrepo identify multiple directions for future research. That includes investigating the reaction over time by both business and labor to the increase in automation; the quantitative effects of technologies that do create jobs; and the industry competition between firms that quickly adopted automation and those that did not.
The research was supported in part by Google, the Hewlett Foundation, Microsoft, the National Science Foundation, Schmidt Sciences, the Sloan Foundation, and the Smith Richardson Foundation.