Physician, heal thyself?

Research shows doctors and their families are less likely to follow guidelines about medicine. Why do the medically well-informed comply less often?

Written by Peter Dizikes, MIT News Office

Following established guidelines about prescription drugs would seem to be an obvious course of action, especially for the professionals who do the prescribing. Yet doctors and their family members are less likely than other people to comply with those guidelines, according to a large-scale study co-authored by an MIT economist.

Depending on your perspective, that result might seem surprising or it might produce a knowing nod. Either way, the result is contrary to past scholarly hypotheses. Many experts have surmised that knowing more, and having easier communication with medical providers, leads patients to follow instructions more closely.

The new study is based on over a decade of population-wide data from Sweden and includes suggestive evidence about why doctors and their families may ignore medical advice. Overall, the research shows that the rest of the population adheres to general medication guidelines 54.4 percent of the time, while doctors and their families lag 3.8 percentage points behind that.

“There’s a lot of concern that people don’t understand guidelines, that they’re too complex to follow, that people don’t trust their doctors,” says Amy Finkelstein, a professor in MIT’s Department of Economics. “If that’s the case, you should see the most adherence when you look at patients who are physicians or their close relatives. We were struck to find that the opposite holds, that physicians and their close relatives are less likely to adhere to their own medication guidelines.”

The paper, “A Taste of Their Own Medicine: Guideline Adherence and Access to Expertise,” is published this month in the American Economic Review: Insights. The authors are Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT; Petra Persson, an assistant professor of economics at Stanford University; Maria Polyakova PhD ’14, an assistant professor of health policy at the Stanford University School of Medicine; and Jesse M. Shapiro, the George Gund Professor of Economics and Business Administration at Harvard University.

Millions of data points

To conduct the study, the scholars examined Swedish administrative data from 2005 through 2016, as applied to 63 prescription drug guidelines. The data enabled the researchers to determine who is a doctor; the study largely defined close relatives as partners, parents, and children. All told, the research involved 5,887,471 people to whom at least one of the medication guidelines applied. Of these people, 149,399 were doctors or their close family members.

Using information on prescription drug purchases, hospital visits, and diagnoses, the researchers could see if people were adhering to medication guidelines by examining whether prescription drug decisions matched these patients’ medical circumstances. In the study, six guidelines pertained to antibiotics; 20 involved medication use by the elderly; 20 focused on medication attached to particular diagnoses; and 17 were about prescription drug use during pregnancy.

Some guidelines recommended use of a particular prescription drug, such as a preference for narrow-spectrum antibiotics for an infection; other guidelines were about not taking certain medications, such as the recommendation that pregnant women avoid antidepressants.

Out of the 63 guidelines used in the study, doctors and their families followed the standards less often in 41 cases, with the difference being statistically significant 20 times. Doctors and their families followed the guidelines more often in 22 cases, with the difference being statistically significant only three times.
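
To make those comparisons concrete, here is a minimal sketch of how the adherence gap for a single guideline might be tested for statistical significance. The counts are hypothetical, chosen only to echo the article's headline figures; the paper's actual analysis is considerably more involved:

```python
import math

# Hypothetical counts for one guideline (not from the paper):
# group 0 = physicians and close relatives, group 1 = everyone else.
adherent = [506, 5440]
eligible = [1000, 10000]

p0, p1 = adherent[0] / eligible[0], adherent[1] / eligible[1]
pooled = sum(adherent) / sum(eligible)
se = math.sqrt(pooled * (1 - pooled) * (1 / eligible[0] + 1 / eligible[1]))
z = (p0 - p1) / se
# two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"gap: {100 * (p0 - p1):+.1f} pp, z = {z:.2f}, p = {p_value:.3f}")
```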

“What we found, which is quite surprising, is that they [physicians] are on average less adherent to guidelines,” says Polyakova, who received her PhD from MIT’s Department of Economics. “So, in this paper we are also trying to figure out what experts do differently.”

Ruling out other answers

Since doctors and their close relatives adhere to medical guidelines less often than the rest of the population, what exactly explains this phenomenon? While homing in on an answer, the research team examined and rejected several hypotheses.

First, the lower compliance by those with greater access to expertise is unrelated to socioeconomic status. In society overall, there is a link between income and adherence levels, but physicians and their families are an exception to this pattern. As the scholars write in the paper, special “access to doctors is associated with lower adherence despite, rather than because of, the high socioeconomic status” of those families.

Additionally, the researchers did not find any link between existing health status and adherence. They also studied whether a greater comfort with prescription medication — due to being a doctor or related to one — makes people more likely to take prescription drugs than guidelines recommend. This does not appear to be the case. The lower adherence rates for doctors and their relatives were similar in magnitude whether the guidelines pertained to taking medication or, alternatively, not taking it.

“There are a number of first-order alternative explanations that we could rule out,” Polyakova says.

Resolving a medical mystery

Instead, the researchers believe the answer is that doctors possess “superior information about guidelines” for prescription drugs — and then deploy that information for themselves. In the study, the difference in adherence to guidelines between experts and nonexperts is largest in the case of antibiotics: Doctors and their families comply 5.2 percentage points less often than everyone else.

Most guidelines in this area recommend starting patients off with “narrow-spectrum” antibiotics, which are more targeted, rather than “broader-spectrum” antibiotics. The latter might be more likely to eradicate an infection, but greater use of them also increases the chances that bacteria will develop resistance to these valuable medications, which can reduce efficacy for other patients. Thus, for something like a respiratory tract infection, guidelines call for a more targeted antibiotic first.

The issue, however, is that what is good for the public in the long run — trying more targeted drugs first — may not work well for an individual patient. For this reason, doctors could be more likely to prescribe broader-spectrum antibiotics for themselves and their families.

“From a public-health perspective, what you want to do is kill it [the infection] off with the narrow-spectrum antibiotic,” Finkelstein observes. “But obviously any given patient would want to knock that infection out as quickly as possible.” Therefore, she adds, “You can imagine the reason doctors are less likely to follow the guidelines than other patients is because they … know there’s this wedge between what’s good for them as patients and what’s good for society.”

Another suggestive piece of data comes from different types of prescription drugs that are typically avoided during pregnancy. For so-called C-Class drugs, where empirical evidence about the drugs’ dangers is slightly weaker, doctors and their families have an adherence rate 2.3 percentage points below other people (meaning, in this case, that they are more likely to take these medications during pregnancy). For so-called D-Class drugs, with slightly stronger evidence of side effects, that drop-off is only 1.2 percentage points. Here too, doctors’ expert knowledge may be influencing their actions.

“The results imply that probably what’s going on is that experts have a more nuanced understanding of what is the right course of action for themselves, and how that might be different than what the guidelines suggest,” Polyakova says.

Still, the findings suggest some unresolved tensions in action. It could be, as Polyakova suggests, that guidelines about antibiotics should be more explicit about the public and private tradeoffs involved, providing more transparency for patients. “Maybe it’s better for the guidelines to be transparent and say they recommend this not because it is [always] the best course of action for you, but because it is the best for society,” she says.

Additional research could also aim to identify areas where lower expert adherence to guidelines may be associated with better health outcomes — to see how often doctors have a point, as it were. Or, as the researchers write in the paper, “An important avenue for further research is to identify whether and when nonadherence is in the patient’s best interest.”

The research was supported, in part, by the Population Studies and Training Center and the Eastman Professorship at Brown University, and the National Institute on Aging.

Study: Superconductivity switches on and off in “magic-angle” graphene

A quick electric pulse completely flips the material’s electronic properties, opening a route to ultrafast, brain-inspired, superconducting electronics

Written by Jennifer Chu, MIT News Office

With some careful twisting and stacking, MIT physicists have revealed a new and exotic property in “magic-angle” graphene: superconductivity that can be turned on and off with an electric pulse, much like a light switch.

The discovery could lead to ultrafast, energy-efficient superconducting transistors for neuromorphic devices — electronics designed to operate in a way similar to the rapid on/off firing of neurons in the human brain.

Magic-angle graphene refers to a very particular stacking of graphene — an atom-thin material made from carbon atoms that are linked in a hexagonal pattern resembling chicken wire. When one sheet of graphene is stacked atop a second sheet at a precise “magic” angle, the twisted structure creates a slightly offset “moiré” pattern, or superlattice, that is able to support a host of surprising electronic behaviors.
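
The scale of that moiré pattern follows from the twist angle alone. As a back-of-the-envelope check (not a calculation from the paper), the standard small-angle moiré relation gives a superlattice period of roughly 13 nanometers at the magic angle:

```python
import math

a = 0.246                   # graphene lattice constant, in nanometers
theta = math.radians(1.1)   # the "magic" twist angle

# Standard small-angle moire relation: the superlattice period grows
# as the twist angle shrinks.
period = a / (2 * math.sin(theta / 2))
print(f"moire period ~ {period:.1f} nm")  # ~12.8 nm, about 50x the lattice
```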

In 2018, Pablo Jarillo-Herrero and his group at MIT were the first to demonstrate magic-angle twisted bilayer graphene. They showed that the new bilayer structure could behave as an insulator, much like wood, when they applied a certain continuous electric field. When they upped the field, the insulator suddenly morphed into a superconductor, allowing electrons to flow, friction-free.

That discovery gave rise to “twistronics,” a field that explores how certain electronic properties emerge from the twisting and layering of two-dimensional materials. Researchers including Jarillo-Herrero have continued to reveal surprising properties in magic-angle graphene, including various ways to switch the material between different electronic states. So far, such “switches” have acted more like dimmers, in that researchers must continuously apply an electric or magnetic field to turn on superconductivity, and keep it on.

Now Jarillo-Herrero and his team have shown that superconductivity in magic-angle graphene can be switched on, and kept on, with just a short pulse rather than a continuous electric field. The key, they found, was a combination of twisting and stacking.

In a paper appearing today in Nature Nanotechnology, the team reports that, by stacking magic-angle graphene between two offset layers of boron nitride — a two-dimensional insulating material — the unique alignment of the sandwich structure enabled the researchers to turn graphene’s superconductivity on and off with a short electric pulse.

“For the vast majority of materials, if you remove the electric field, zzzzip, the electric state is gone,” says Jarillo-Herrero, who is the Cecil and Ida Green Professor of Physics at MIT. “This is the first time that a superconducting material has been made that can be electrically switched on and off, abruptly. This could pave the way for a new generation of twisted, graphene-based superconducting electronics.”

His MIT co-authors are lead author Dahlia Klein, Li-Qiao Xia, and David MacNeill, along with Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.

Flipping the switch

In 2019, a team at Stanford University discovered that magic-angle graphene could be coerced into a ferromagnetic state. Ferromagnets are materials that retain their magnetic properties, even in the absence of an externally applied magnetic field.

The researchers found that magic-angle graphene could exhibit ferromagnetic properties in a way that could be tuned on and off. This happened when the graphene sheets were layered between two sheets of boron nitride such that the crystal structure of the graphene was aligned to one of the boron nitride layers. The arrangement resembled a cheese sandwich in which the top slice of bread and the cheese orientations are aligned, but the bottom slice of bread is rotated at a random angle with respect to the top slice. The result intrigued the MIT group.

“We were trying to get a stronger magnet by aligning both slices,” Jarillo-Herrero says. “Instead, we found something completely different.”

In their current study, the team fabricated a sandwich of carefully angled and stacked materials. The “cheese” of the sandwich consisted of magic-angle graphene — two graphene sheets, the top rotated slightly at the “magic” angle of 1.1 degrees with respect to the bottom sheet. Above this structure, they placed a layer of boron nitride, exactly aligned with the top graphene sheet. Finally, they placed a second layer of boron nitride below the entire structure and offset it by 30 degrees with respect to the top layer of boron nitride.

The team then measured the electrical resistance of the graphene layers as they applied a gate voltage. They found, as others have, that the twisted bilayer graphene switched electronic states, changing between insulating, conducting, and superconducting states at certain known voltages.

What the group did not expect was that each electronic state persisted rather than immediately disappearing once the voltage was removed — a property known as bistability. They found that, at a particular voltage, the graphene layers turned into a superconductor, and remained superconducting, even as the researchers removed this voltage.  

This bistable effect suggests that superconductivity can be turned on and off with short electric pulses rather than a continuous electric field, similar to flicking a light switch. It isn’t clear what enables this switchable superconductivity, though the researchers suspect it has something to do with the special alignment of the twisted graphene to both boron nitride layers, which enables a ferroelectric-like response of the system. (Ferroelectric materials display bistability in their electric properties.)

“By paying attention to the stacking, you could add another tuning knob to the growing complexity of magic-angle, superconducting devices,” Klein says. 

For now, the team sees the new superconducting switch as another tool researchers can consider as they develop materials for faster, smaller, more energy-efficient electronics.

“People are trying to build electronic devices that do calculations in a way that’s inspired by the brain,” Jarillo-Herrero says. “In the brain, we have neurons that, beyond a certain threshold, they fire. Similarly, we now have found a way for magic-angle graphene to switch superconductivity abruptly, beyond a certain threshold. This is a key property in realizing neuromorphic computing.”  

This research was supported in part by the Air Force Office of Scientific Research, the Army Research Office, and the Gordon and Betty Moore Foundation.

When should data scientists try a new technique?

A new measure can help scientists decide which estimation method to use when modeling a particular data problem

Written by Adam Zewe, MIT News Office

If a scientist wanted to forecast ocean currents to understand how pollution travels after an oil spill, she could use a common approach that looks at currents traveling between 10 and 200 kilometers. Or, she could choose a newer model that also includes shorter currents. This might be more accurate, but it could also require learning new software or running new computational experiments. How can she know whether it will be worth the time, cost, and effort to use the new method?

A new approach developed by MIT researchers could help data scientists answer this question, whether they are looking at statistics on ocean currents, violent crime, children’s reading ability, or any number of other types of datasets.

The team created a new measure, known as the “c-value,” that helps users choose between techniques based on the chance that a new method is more accurate for a specific dataset. This measure answers the question “is it likely that the new method is more accurate for this data than the common approach?”

Traditionally, statisticians compare methods by averaging a method’s accuracy across all possible datasets. But just because a new method is better for all datasets on average doesn’t mean it will actually provide a better estimate using one particular dataset. Averages are not application-specific.

So, researchers from MIT and elsewhere created the c-value, which is a dataset-specific tool. A high c-value means it is unlikely a new method will be less accurate than the original method on a specific data problem.
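
The distinction is easy to see in simulation. The sketch below — an illustration of the general phenomenon, not the paper's construction — compares the raw observation with the classic James–Stein shrinkage estimator: shrinkage wins on average across many datasets, yet loses on a nontrivial fraction of individual ones:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_datasets = 10, 10_000
theta = rng.normal(0, 2, size=dim)           # fixed unknown "ground truth"

losses_raw, losses_js = [], []
for _ in range(n_datasets):
    y = theta + rng.normal(size=dim)         # one dataset: y ~ N(theta, I)
    js = (1 - (dim - 2) / np.sum(y**2)) * y  # James-Stein shrinkage estimate
    losses_raw.append(np.sum((y - theta) ** 2))
    losses_js.append(np.sum((js - theta) ** 2))

print("average loss, raw:   ", np.mean(losses_raw))   # higher on average...
print("average loss, shrunk:", np.mean(losses_js))
print("fraction of datasets where shrinkage does worse:",
      np.mean(np.array(losses_js) > np.array(losses_raw)))  # ...but not always
```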

In their proof-of-concept paper, the researchers describe and evaluate the c-value using real-world data analysis problems: modeling ocean currents, estimating violent crime in neighborhoods, and approximating student reading ability at schools. They show how the c-value could help statisticians and data analysts achieve more accurate results by indicating when to use alternative estimation methods they otherwise might have ignored.

“What we are trying to do with this particular work is come up with something that is data specific. The classical notion of risk is really natural for someone developing a new method. That person wants their method to work well for all of their users on average. But a user of a method wants something that will work on their individual problem. We’ve shown that the c-value is a very practical proof-of-concept in that direction,” says senior author Tamara Broderick, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society.

She’s joined on the paper by Brian Trippe PhD ’22, a former graduate student in Broderick’s group who is now a postdoc at Columbia University; and Sameer Deshpande ’13, a former postdoc in Broderick’s group who is now an assistant professor at the University of Wisconsin at Madison. An accepted version of the paper is posted online in the Journal of the American Statistical Association.

Evaluating estimators

The c-value is designed to help with data problems in which researchers seek to estimate an unknown parameter using a dataset, such as estimating average student reading ability from a dataset of assessment results and student survey responses. A researcher has two estimation methods and must decide which to use for this particular problem.

The better estimation method is the one that results in less “loss,” which means the estimate will be closer to the ground truth. Consider again the forecasting of ocean currents: Perhaps being off by a few meters per hour isn’t so bad, but being off by many kilometers per hour makes the estimate useless. The ground truth is unknown, though; the scientist is trying to estimate it. Therefore, one can never actually compute the loss of an estimate for one’s specific data. That’s what makes comparing estimates challenging. The c-value helps a scientist navigate this challenge.

The c-value equation uses a specific dataset to compute the estimate from each method, and then once more to compute the c-value comparing the methods. If the c-value is large, it is unlikely that the alternative method will be worse and yield less accurate estimates than the original method.

“In our case, we are assuming that you conservatively want to stay with the default estimator, and you only want to go to the new estimator if you feel very confident about it. With a high c-value, it’s likely that the new estimate is more accurate. If you get a low c-value, you can’t say anything conclusive. You might have actually done better, but you just don’t know,” Broderick explains.
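
In code, the conservative decision rule Broderick describes might look like the sketch below. The 0.95 threshold and the c_value() helper are hypothetical stand-ins; the actual construction of the c-value is given in the paper:

```python
def choose_estimate(data, default_est, alternative_est, c_value, threshold=0.95):
    """Conservatively keep the default estimate unless the c-value gives
    high confidence that the alternative is no worse on this dataset."""
    if c_value(data, default_est, alternative_est) >= threshold:
        return alternative_est  # high c-value: likely at least as accurate here
    return default_est          # low c-value: nothing conclusive, stay put
```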

Probing the theory

The researchers put that theory to the test by evaluating three real-world data analysis problems.

For one, they used the c-value to help determine which approach is best for modeling ocean currents, a problem Trippe has been tackling. Accurate models are important for predicting the dispersion of contaminants, like pollution from an oil spill. The team found that estimating ocean currents using multiple scales, one larger and one smaller, likely yields higher accuracy than using only larger scale measurements.

“Oceans researchers are studying this, and the c-value can provide some statistical ‘oomph’ to support modeling the smaller scale,” Broderick says.

In another example, the researchers sought to predict violent crime in census tracts in Philadelphia, an application Deshpande has been studying. Using the c-value, they found that one could get better estimates about violent crime rates by incorporating information about census-tract-level nonviolent crime into the analysis. They also used the c-value to show that additionally leveraging violent crime data from neighboring census tracts in the analysis isn’t likely to provide further accuracy improvements.

“That doesn’t mean there isn’t an improvement, that just means that we don’t feel confident saying that you will get it,” she says.

Now that they have proven the c-value in theory and shown how it could be used to tackle real-world data problems, the researchers want to expand the measure to more types of data and a wider set of model classes.

The ultimate goal is to create a measure that is general enough for many more data analysis problems, and while there is still a lot of work to do to realize that objective, Broderick says this is an important and exciting first step in the right direction.

This research was supported, in part, by an Advanced Research Projects Agency-Energy grant, a National Science Foundation CAREER Award, the Office of Naval Research, and the Wisconsin Alumni Research Foundation.

How to push, wiggle, or drill an object through sand

A method for quickly predicting the forces needed to push objects through soft, granular materials could help engineers drive robots or anchor ships

Written by Jennifer Chu, MIT News Office

Pushing a shovel through snow, planting an umbrella on the beach, wading through a ball pit, and driving over gravel all have one thing in common: They all are exercises in intrusion, with an intruding object exerting some force to move through a soft and granular material.

Predicting what it takes to push through sand, gravel, or other soft media can help engineers drive a rover over Martian soil, anchor a ship in rough seas, and walk a robot through sand and mud. But modeling the forces involved in such processes is a huge computational challenge that often takes days to weeks to solve.

Now, engineers at MIT and Georgia Tech have found a faster and simpler way to model intrusion through any soft, flowable material. Their new method quickly maps the forces it would take to push, wiggle, and drill an object through granular material in real time. The method can apply to objects and grains of any size and shape, and, unlike other approaches, does not require complex computational tools.

“We now have a formula that can be very useful in settings where you have to check through lots of options as fast as possible,” says Ken Kamrin, professor of mechanical engineering at MIT.

“This is especially useful for applications such as real-time path-planning for vehicles traveling through vast deserts and other off-road terrains, that cannot wait for existing slower simulation methods to decide their path,” adds Shashank Agarwal SM ’19, PhD ’22.

Kamrin and Agarwal detail their new method in a study appearing this week in the journal Proceedings of the National Academy of Sciences. The study also includes Daniel I. Goldman, professor of physics at Georgia Tech.

A fluid connection

In order to know how much to push on an object to move it through sand, one could go grain by grain, using discrete element modeling, or DEM — an approach that systematically calculates each individual grain’s motion in response to a given force. DEM is precise but slow, and it can take weeks to fully solve a practical problem involving just a handful of sand. As a faster alternative, scientists can develop continuum models, which simulate granular behavior in generalized chunks, or grain groupings. This more simplified approach can still generate a detailed picture of how grains flow, in a way that can shave a weeks-long problem down to days or even hours.

“We wanted to see if we could do even better than that and cut that process down to seconds,” Agarwal says.

The team looked to previous work by Goldman. In 2014, he was studying how animals and robots move through dry, granular material such as sand and soil. In looking for ways to quantitatively describe their movements, he found he could do so with a quick relationship that was originally meant to describe fluid swimmers.

The formulation, Resistive Force Theory (RFT), works by considering an object’s surface as a collection of small plates. (Imagine representing a sphere as a soccer ball.) As an object moves through a fluid, each plate experiences a force, and RFT claims that the force on each plate depends only on its local orientation and movement. The equation takes all this into account, along with the fluid’s individual characteristics, to ultimately describe how the object as a whole moves through a fluid.

Surprisingly, Goldman found this simple approach was also accurate when applied to granular intrusion. Specifically, it predicted the forces lizards and snakes exert to slither through sand, as well as how small, legged robots walk over soil. The question, Kamrin says, was why?

“It was this weird mystery why this theory, which was originally derived for moving through viscous fluid, would even work at all in granular media, which has completely different flow behavior,” he says.

Kamrin took a closer look at the math and found a connection between RFT and a continuum model he had derived to describe granular flow. In other words, the physics checked out, and RFT could indeed be an accurate way to predict granular flow, in a simpler and faster way than conventional models. But there was one big limitation: The approach was mainly workable for two-dimensional problems.

To model intrusion using RFT, one needs to know what will happen if one moves a plate every which way possible — a task that is manageable in two dimensions, but not in three. The team then needed some shortcut to simplify 3D’s complexity.

Wacky twist

In their new study, the researchers adapted RFT to 3D by adding an extra ingredient to the equation. That ingredient is a plate’s twist angle, which measures how the plate’s orientation changes as the entire object is rotated. When they incorporated this extra angle, in addition to a plate’s tilt and direction of motion, the team had enough information to define the force acting on the plate as it moves through a material in 3D. Importantly, by exploiting the connection to continuum modeling, the resulting 3D-RFT is generalizable, and can be easily recalibrated to apply to many dry granular media on Earth, and even on other planetary bodies.

The researchers demonstrated the new method using a variety of three-dimensional objects, from simple cylinders and cubes to more complex bunny- and monkey-shaped geometries. They first tiled the objects, representing them each as a collection of hundreds to thousands of tiny plates. Then they applied the tweaked RFT formula to each individual plate and calculated the forces that would be needed over time to drill each plate, and ultimately the entire object, down through a bed of sand.
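
In outline, the bookkeeping is simple enough to sketch. The snippet below illustrates the core RFT assumption — each plate's contribution depends only on its own orientation and motion, and the contributions simply add — with a placeholder force law standing in for the calibrated functions of tilt, motion direction, and twist used in the paper:

```python
import numpy as np

def stress_per_area(normal, motion_dir):
    # Placeholder local response (an assumption, not the calibrated 3D-RFT
    # functions): resistance grows with how directly the plate pushes
    # along its direction of motion.
    return max(0.0, float(np.dot(normal, motion_dir)))

def net_resistance(normals, areas, motion_dir):
    """Sum independent plate contributions -- the core RFT assumption."""
    return sum(stress_per_area(n, dA_dir := motion_dir) * dA
               for n, dA in zip(normals, areas))

# Example: the bottom face of a unit cube, tiled into four plates,
# pushed straight down into the material.
down = np.array([0.0, 0.0, -1.0])
normals = [down] * 4   # outward normals of the four plates
areas = [0.25] * 4     # each plate is 0.5 x 0.5
print(net_resistance(normals, areas, down))  # total resistance ~ 1.0
```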

“For more wacky objects, like the bunny, you can imagine having to consistently shift your loads to keep drilling it straight down,” Kamrin says. “And our method can even predict those little wiggles, and the distribution of force all around the bunny, in less than a minute.”

The new approach provides a fast and accurate way to model granular intrusion, which can be applied to a host of practical problems, from driving a rover through Martian soil, to characterizing the movement of animals through sand, to predicting what it would take to uproot a tree.

“Can I predict how hard it is to uproot natural plants? You might want to know, is this storm going to knock over this tree?” Kamrin says. “Here is a way to get an answer fast.”

This research was supported, in part, by the Army Research Office, the U.S. Army DEVCOM Ground Vehicle Systems Center, and NASA.
