Research on human embryos – do we need to draw a new line in the sand?

In 1984 a line was drawn in the sand: research on human embryos was limited to 14 days of development, effectively setting the point at which an embryo becomes too human to be experimented upon. Recent scientific advances, however, have led some researchers to call for the rule to be relaxed.

The constraint limiting experimentation on embryos to 14 days post-fertilization, the so-called “14-day rule”, was first proposed in 1979 by the US Ethics Advisory Board. Currently 12 countries around the world have enshrined the limit in law, while five others have 14 days written into their scientific guidelines. So there exists a solid international consensus: many nations and scientific advisory bodies have come together to agree on this cut-off. But why? What is so special about 14 days?

The first reason is a philosophical distinction: fourteen days is the first point at which an embryo can be said to have an individual identity, since before this embryos can still fuse, or split to form monozygotic twins. Secondly, at this point a band of cells known as the primitive streak forms, marking the beginnings of a head-to-toe axis. This is the start of the transition from a ball of cells towards something that begins to resemble a foetus. Researchers also value the primitive streak because it is easy to identify, providing a reliable marker of this stage of development given that embryos do not all develop at the same rate.

Until recently the rule was not a concern for scientists, as researchers were rarely able to maintain embryos past 7 days outside the womb. This changed last year, when two ground-breaking studies reported new techniques for creating a chemical mimic of the womb [1, 2]. Using these methods, the researchers could grow embryos for as long as 13 days, and for the first time found themselves butting up against the 14-day rule.

Now some scientists are calling on the UK government to look again at the rules and extend the limit past 14 days. One such advocate for change is Professor Simon Fishel, who was part of the team involved in the creation of the first IVF baby. He has proposed moving the limit to 28 days. Speaking to the BBC, he said:

“I believe the benefits we will gain by eventually moving forwards when the case is proven will be of enormous importance to human health.” In particular, many cite the impact such work would have on our understanding of, and ability to treat, fertility problems.

But some are opposed to any extension of the limit, as Professor Fishel concedes: “There are some religious groups that are fundamentally against IVF, let alone IVF research in any circumstances, and we have to respect their views.”

Indeed, some fear the extension of the 14-day rule could be the beginning of a slippery slope. Bioethicist David Jones, founder of the Centre for Bioethics and Emerging Technologies, is one critic of the proposed extension. He told the BBC:

“It would be a stepping stone to the culturing of embryos and even foetuses outside the womb. You are really beyond the stage when the embryo would otherwise implant, and that is a step towards creating a womb-like environment outside. People will then ask why can’t we shift it beyond 28 days.”

How, then, should we reconcile people’s ethical concerns with the need to bring the rules on embryo research into the 21st century? One challenge will be reaching an international consensus on where a new line should be drawn.

The question still remains: whose responsibility is it to define the limits on human embryo research? Is it scientists and researchers, or bioethicists and theologians?

This is ultimately a question for us all, for society as a whole. It is our responsibility to consider the views of these different groups, to weigh the benefits to society against the moral considerations, and inevitably to ask ourselves where we feel comfortable setting the limit.


  1. Deglincerti, A. et al. Nature http://dx.doi.org/10.1038/nature17948 (2016).
  2. Shahbazi, M. N. et al. Nature Cell Biol. http://dx.doi.org/10.1038/ncb3347 (2016).

DRUGS, BRAINS, AND DOPAMINE

There are a number of things animals need to do to survive. To function they have to eat and drink, and for their genes to persist they must reproduce. As such it is crucial that animals are not only motivated to perform these actions, but that they prioritize them above all other things. Animals have therefore evolved neural mechanisms to reinforce rewarding behaviours, and a key aspect of this reinforcement is the organic chemical dopamine.

Dopamine is a signalling molecule in the brain, or neurotransmitter, that is released following a rewarding stimulus. A group of neurons in a part of the midbrain called the ventral tegmental area (VTA) sends axons to an area called the nucleus accumbens (NAc), where they release dopamine. Drugs hijack this natural reward pathway: though they have wide-ranging and varied pharmacological effects on the body, every drug from cocaine to nicotine ultimately causes an increase in dopamine in the NAc.

But the strongest evidence that dopamine is the key to addiction comes from a revolutionary technique in neuroscience called optogenetics.

Using optogenetics researchers can selectively switch on or off specific groups of neurons in the brain using light. By exciting pathways that release dopamine in the NAc it’s possible to reproduce the same addictive behaviours shown by mice when they are given drugs.

But how exactly does the release of dopamine result in addictive behaviours? There are several theories that try to explain this.

One idea is that dopamine release causes pleasure. This is known as the hedonia hypothesis. It is very popular, perhaps because it makes intuitive sense: rewarding stimuli make us feel good, so if dopamine release is the key to reward, dopamine must make us feel good.

However, a number of experiments in mice show that animals unable to produce dopamine still prefer rewarding stimuli, such as artificially sweetened water, to non-rewarding ones. There is also the puzzling case of nicotine, whose addictiveness has been compared to that of heroin despite it producing little or no euphoria in the user.

One interesting observation from these experiments is that while animals lacking dopamine still prefer rewarding stimuli, they seem to lack the motivation to pursue them. The distinction here is between “liking” and “wanting”, and it is an important one. As a casual drug user transitions from occasional use to full-blown addiction, they often report “enjoying” drugs less and less, and “needing” them more and more. The theory describing the wanting, or craving, caused by dopamine release in the NAc is known as incentive salience.

Drug addiction is a disease characterized by two things: firstly, a need to use the drug in spite of its negative consequences, and secondly, a pattern of chronic relapse that persists even after use of the drug has stopped. It’s known that if recovering addicts come in contact with cues associated with their former drug use it increases the likelihood of relapse. Incentive salience helps explain this phenomenon. Through the release of dopamine, rewarding and non-rewarding stimuli become associated, and in this way a drug-associated object such as a spoon becomes a “secondary reinforcer” of drug use.

But how are these associations made? Well, it seems dopamine might actually be important for reward learning. A set of experiments performed by Schultz and colleagues is key to our understanding of dopamine’s role in reward and addiction. They measured the firing of dopamine neurons in the brains of monkeys given sweet juice rewards. Unsurprisingly, when the monkeys received the reward their dopamine neurons showed a burst of activity. More interesting was the finding that this burst comes only when the reward is unexpected. Furthermore, if a reward is expected but doesn’t arrive, the activity of these neurons drops below baseline levels. These neurons therefore respond to the difference between the expected and the obtained reward, the so-called “prediction error”.
The research shows us that when an outcome is better than expected there is a burst of activity and a release of dopamine. In this way actions performed by an animal that result in a good outcome are reinforced. Over time reinforced actions become habitual. Drugs are able to cause massive dopamine release and are therefore excellent habit formers.
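
To make the idea of a prediction error concrete, here is a minimal sketch in Python of the kind of Rescorla-Wagner-style update that this dopamine signal is thought to resemble. It is purely illustrative, not taken from Schultz’s experiments or any study discussed here; the learning rate and reward values are arbitrary assumptions.

    # Minimal illustrative sketch of a reward prediction-error update.
    # The learning rate and reward values are arbitrary assumptions.
    def update_value(expected, reward, learning_rate=0.3):
        """Return the prediction error (the 'dopamine signal') and the updated expectation."""
        prediction_error = reward - expected            # burst if better than expected, dip if worse
        expected = expected + learning_rate * prediction_error
        return prediction_error, expected

    expected = 0.0                                      # the animal initially expects nothing
    for trial in range(1, 9):                           # juice reward delivered on every trial
        error, expected = update_value(expected, reward=1.0)
        print(f"trial {trial}: prediction error = {error:.2f}")
    # The 'burst' is largest on the first, unexpected trial and shrinks towards
    # zero as the reward becomes fully expected.

    # If the now-expected reward is omitted, the signal dips below baseline:
    error, expected = update_value(expected, reward=0.0)
    print(f"omitted reward: prediction error = {error:.2f}")  # negative value

The pattern this produces, a large response to an unexpected reward, a shrinking response as it becomes predictable, and a dip when an expected reward is withheld, mirrors the firing patterns Schultz’s group recorded.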

However, in order to make use of our knowledge of the dopamine reward system to treat addiction we must first gain an insight into how this brain circuitry changes as people make the transition from occasional drug user to an addict.

A recent technique called fast-scan cyclic voltammetry (FSCV) allows detection of dopamine release in vivo on a timescale of less than a second, and therefore allows us to accurately track the dynamics of dopamine release. One recent study measured the release of dopamine in rats over time as they self-administered cocaine. Surprisingly, dopamine release decreased as the rats’ rate of cocaine intake increased. Furthermore, by administering L-DOPA, a precursor that the brain converts into dopamine, the group found they could actually reverse the increase in drug use that occurred over time.

This finding somewhat contradicts some modern theories of addiction, which hypothesised that over time the reward pathways of the addict become hyper-sensitized, increasing the response to the drug and to drug-associated stimuli and driving the establishment of drug craving. It is, however, consistent with what we know about dopamine and its role in reward learning: if the amount of dopamine released decreases as a reward becomes more expected, more of the drug would need to be taken to produce the same effect. Either way, thanks to more advanced techniques we are gaining an increasingly detailed picture of the dynamics of the dopamine reward system.

With this understanding comes the exciting potential to target the system in order to reduce drugs’ hold over addicts. Despite this knowledge, however, current treatments for addictions such as opioid and nicotine dependence focus mainly on reduction or replacement of the drug. In the case of opioid addiction, analogues such as methadone are prescribed to reduce the withdrawal and cravings that are the key drivers of relapse. Similarly, naltrexone, a drug used in the treatment of alcohol and opioid addiction, blocks opioid receptors; although its exact mode of action is not known, its effects are likely exerted ultimately through dopaminergic pathways.

Therefore, although there is now strong evidence that dopamine pathways in the midbrain play a causal role both in reward learning and in driving the addictive properties of drugs, this fact has yet to be exploited to target these systems directly for the treatment of addiction. New techniques in neuroscience are shedding more and more light on the reward pathways of the brain, allowing us to target particular groups of neurons and track how the release of chemical signals such as dopamine controls behaviour. The hope is that by using this knowledge to develop treatments that more directly target reward systems in the brain, we will be able to reduce the hold drugs can exert over our behaviour.

Scientists engineer detoxifying super grass

Varieties of common grass have been genetically engineered to break down toxic chemicals released into the environment by years of military testing. The results demonstrate the potential to decontaminate areas polluted by hazardous by-products.

Published in Plant Biotechnology Journal, the study focused on two chemicals, RDX and TNT, which are produced by military testing, manufacturing and the decommissioning of explosives. More than 100 military bases and explosive manufacturing locations in the USA are contaminated with these compounds. The cost of decontamination of these sites has been estimated to be between 16 and 165 billion US dollars.

Human exposure to TNT can result in hepatitis, while RDX is a neurotoxin that can cause seizures in humans and animals. Both are considered potential carcinogens by the Environmental Protection Agency (EPA).

The plants were produced by incorporating genes from bacteria that produce enzymes capable of degrading and transforming these toxic chemicals. Genes were isolated from a strain of the bacterium Rhodococcus rhodochrous found to be able to grow with RDX as its sole nitrogen source, while another gene, from the bacterium Enterobacter cloacae, enables the transformation and detoxification of TNT. Expressing these genes together enabled the plants to remove RDX and TNT from soil samples.

The researchers, from the University of Washington, created transgenic versions of the two most common grasses found on military ranges. Grasses are preferred candidates for this approach as they are fast growing and require little to no maintenance. Wild grasses have already been shown to take up RDX from groundwater, but these plants lack the ability to degrade the toxic compounds and ultimately die, releasing the contaminants back into the environment.

The grasses were then tested, and the researchers found the best-performing strains could remove all traces of RDX from the soil within two weeks, with no toxic products remaining in the plants themselves. A further, surprising benefit is that since the plants use these toxic compounds as a nitrogen source, they actually grow faster than wild-type strains.

The next step for the researchers is to see how the plants perform in a real-world setting. The plan is to test the grasses on military training ranges, but more widespread use in the future will require demonstrating that these genetic modifications pose no threat to wild grass populations. If approved, however, this advance could have great implications for environmental decontamination efforts.

Link to paper:

https://www.ncbi.nlm.nih.gov/pubmed/27862819

New scanning techniques for the detection of Alzheimer’s disease

Late diagnosis. This is one of the reasons neurodegenerative diseases such as Alzheimer’s are so difficult to treat. By the time patients show clinical symptoms and are diagnosed, a large amount of neuronal death has already occurred. This makes the development of new techniques to detect the protein deposits that cause Alzheimer’s crucial. If these proteins can be detected sooner, and treatments begun to prevent or stop their spread, the prognosis of the disease could be improved dramatically.

Several recently published papers have described technical advances in our ability to detect these protein clumps, or aggregates. A paper published in February in Nature Communications describes the first use of antibodies to detect aggregated amyloid β (Aβ) protein, one of the hallmarks of the disease. Antibodies are used in the diagnosis of other diseases but haven’t been used in Alzheimer’s because the blood-brain barrier (BBB) prevents them from crossing into the brain. By modifying the antibody so that it could cross the BBB, and using a live imaging technique called PET, the researchers were able to image protein aggregates in the brains of mice.

Another paper, this one published in Neuron in March, describes the imaging of the second protein aggregate that defines Alzheimer’s disease, Tau. By using a compound called 18F-AV-1451, again in combination with PET scanning, the group imaged the brains of young and old healthy people as well as people with Alzheimer’s disease.

Figure: Brain scans of young and old healthy individuals and of those diagnosed with Alzheimer’s disease. Red shows deposits of the aggregated protein Tau, one of the hallmarks of the disease. (Schöll et al., 2016)

The group were looking for particular patterns of Tau deposition in the brain, known as Braak staging. This way of classifying the severity of Alzheimer’s disease, described by Heiko Braak in 1991, has traditionally been performed by studying the protein deposits in the brain at autopsy. Using this scanning technique, the group could identify the different Braak stages in the brains of living individuals.

Developing techniques to track the progression of Alzheimer’s disease in the brains of living patients could lead to better tailoring of medications to the stage of a patient’s disease, and even to faster diagnosis and earlier intervention.

Article References:

Schöll M, Lockhart SN, Schonhaut DR, O’Neil JP, Janabi M, Ossenkoppele R, Baker SL, Vogel JW, Faria J, Schwimmer HD, Rabinovici GD, Jagust WJ. PET Imaging of Tau Deposition in the Aging Human Brain. Neuron. 2016 Mar 2;89(5):971-82. doi: 10.1016/j.neuron.2016.01.028.

Sehlin D, Fang XT, Cato L, Antoni G, Lannfelt L, Syvänen S. Antibody-based PET imaging of amyloid beta in mouse models of Alzheimer’s disease. Nat Commun. 2016 Feb 19;7:10759. doi: 10.1038/ncomms10759.