Drugs have been part of human life since the beginning of recorded history, starting with the discovery of the medicinal properties of plants. Plants synthesize countless chemical compounds, from powerful stimulants such as cocaine and caffeine to deadly neurotoxins like curare. Plants still yield many valuable compounds today, and a large portion of the drugs on the market are variants of naturally occurring chemicals. Over the past 100 or so years, however, drugs have more commonly been invented rather than discovered.
Drug discovery in the past mostly consisted of observing a plant's (or a chemical's) effects when administered to a person or animal. The effects were noted and recorded with no knowledge of how exactly the drug was working. Today, the opposite approach is taken in most cases. Instead of simply observing effects, modern drug development relies on knowledge of how altering a protein or pathway in the body could be effective against a disease. For example, if we wanted to develop a medication to lower blood glucose levels, we wouldn't have someone ingest a multitude of chemicals to find out which one works.
Using our knowledge of the human body, we might instead look for a chemical that alters a protein regulating insulin release, thereby helping to lower blood sugar. This description of the drug development process is of course a generalization, but it gives a good idea of our current thinking behind the process. Even in disease states where we don't know everything about the pathophysiology (cancer, for example), researchers hypothesize how altering a particular target can bring about a positive effect. As new drugs are developed, a variety of critical questions must be answered through the course of clinical research to produce an effective drug.
Step 1: Finding A Suitable Chemical Compound
Most commonly, the first step of drug research is to screen millions of chemical compounds for their ability to interact with a specific molecular target or produce a specific biological response. This screening is mostly done by robotic systems that can process hundreds of thousands of chemicals. Chemicals that fit, or at least closely fit, what is being searched for are called "hits".
It's quite rare for a "hit" to actually become a future pharmaceutical drug. Most initial hits lack all of the properties being looked for. In the majority of cases, chemists and pharmaceutical scientists have to synthesize derivatives of their "hit" chemical and optimize it so it interacts with its target and alters that target's function in the desired fashion. This, of course, comes with its own set of challenges. The newly synthesized drug must have a high affinity for its target, it must be easily administered and easily absorbed in the body, it must be metabolized safely, and it must be free from major side effects.
The chemical must also be producible on a large scale and have appropriate stability. Many times a chemical has a deficiency in one area that prevents it from being developed into a marketable drug.
Step 2: Pre-Clinical & Toxicity Testing
After our new chemical comes from the lab, it goes through general toxicity testing in two or more species of animals over a considerable amount of time. Where possible, toxicity testing is performed without animals, but animal testing is very often necessary. The chemical is tested in particular for carcinogenicity and reproductive toxicity. If toxicity is observed, the researchers need to determine whether it stems from the drug's intended mechanism of action in the body or from the drug acting on an unintended, unpredicted target.
Step 3: Investigational New Drug Approval & Clinical Trial Testing
At this stage in the process, the chemical being researched has typically been in development for 3-5 years. Only at this point can the drug company submit what is known as an Investigational New Drug application (IND) to the FDA. The IND is a request for permission to test the drug in human subjects. The FDA has 30 days to review the application; it can either approve it, or disapprove it and request more data about the initial safety and toxicity results. After the IND is approved, we finally move to the clinical trials. Clinical trials take place in four phases, the last of which occurs after the drug has already been approved by the FDA and can be marketed to patients. Below is a summary of the different phases of drug trials:
Phase I: Phase I clinical trials are typically conducted in healthy volunteers. The purpose of this part of the clinical trial is solely to get safety and tolerability data in the human population for the first time. This phase of the clinical trial is not intended to determine efficacy, or how well a drug works.
Phase II: This is the phase of the clinical trial where things really begin to pick up and researchers try to establish the drug's efficacy. Acceptable dosing ranges for safety and efficacy begin to be determined here as well. Phase II is the first time in a clinical trial that the drug is tested in the patients for whom it is intended. If the medication is going to be used to lower blood pressure, it is tested on people with high blood pressure. Phase II clinical trials are very small compared to Phase III clinical trials.
Phase III: Phase III is the bread and butter of the drug testing process. It is where the most money is invested, and it typically takes the longest amount of time to complete. Large-scale statistical analysis is completed in this phase, typically with thousands of patients in different locations around the world. To gain FDA approval, a drug company must perform, in the words of the FDA, "adequate and well-controlled investigations". Currently, the gold standard for clinical trials is the randomized, placebo-controlled, double-blind study.
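To make "randomized" concrete, here is a minimal sketch of permuted-block randomization, one common way trial statisticians keep the drug and placebo arms balanced as patients enroll. This is purely illustrative, not a regulatory-grade system; the function name, block size, and seed are our own assumptions:

```python
import random

def block_randomize(n_patients, block_size=4, seed=42):
    """Assign patients to 'drug' or 'placebo' arms using permuted blocks,
    so the two arms stay balanced throughout enrollment."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        # Each block holds an equal number of drug and placebo slots,
        # shuffled so the order within the block is unpredictable.
        block = ["drug"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

arms = block_randomize(100)
print(arms.count("drug"), arms.count("placebo"))  # → 50 50
```

In a real double-blind trial, neither patients nor investigators see this assignment list; it is held by an independent party until the trial is unblinded.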
Even though Phase III clinical trials are quite comprehensive, the requirements set forth for FDA approval have some serious disadvantages.
What Is A Proper Control?
Are placebos the best control? In many cases they are, especially if the drug being tested is a novel treatment for a disease. However, a placebo comparison can be somewhat misleading if an alternative treatment is available. New drugs are very rarely tested against an existing treatment, for multiple reasons that, unfortunately, often serve the drug company's interests. The main reason for testing against a placebo instead of an existing treatment is that the results are much more striking.
For example, let's say a new drug for high blood pressure was tested, and a study found it lowered systolic blood pressure by an average of 14 points. This would be great news for the drug company, as it shows efficacy versus a placebo, which we assume would not lower blood pressure (or would lower it only slightly, due to a placebo effect). However, if the company tested its new drug against an existing treatment, the results would be much less remarkable. Let's again assume our new drug lowers blood pressure by an average of 14 points, and that an existing treatment lowers it by an average of 12 points. The trial in this case would show only a very slight improvement over the old treatment.
Older medications tend to be drastically less expensive than newer ones. The fact that our new drug is only slightly more effective would spell bad news, as it may well be seen as not cost-effective. Older drugs also have more safety data behind them, and physicians are sometimes unwilling or hesitant to prescribe newer drugs. Drug companies don't want to pour millions upon millions of dollars into a trial only to find their drug is merely slightly better (or perhaps worse!) than an existing treatment. Again, it is very rare to see a comparative trial when a new drug is being tested for FDA approval.
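There is also a statistical reason comparative trials cost so much more: the smaller the difference you need to detect, the more patients you need. Here is a rough back-of-the-envelope sketch using the standard two-sample normal approximation for sample size; the 15 mmHg standard deviation and the function name are illustrative assumptions, not figures from any real trial:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a mean difference
    of `delta` between two groups whose outcome has standard deviation `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # desired statistical power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detecting a 14-point drop vs. placebo needs only a handful of patients per arm;
# detecting a 2-point edge over an existing drug needs hundreds.
print(n_per_arm(delta=14, sd=15))
print(n_per_arm(delta=2, sd=15))
```

The exact numbers depend on the assumed variability, but the quadratic blow-up (halving the detectable difference quadruples the sample size) is a big part of why head-to-head trials are far costlier than placebo-controlled ones.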
Disadvantage: Surrogate Endpoints
What is actually being measured in clinical trials? For FDA approval, there is no set-in-stone requirement as to what the clinical trials must test for. Most commonly, the trial measures a change in a numerical marker that is thought to be predictive of a relevant clinical outcome. For example, the change in LDL ("bad") cholesterol may be measured as a predictor of heart attack, or the decrease in blood pressure as a predictor of stroke. In both of these examples, LDL cholesterol and blood pressure would be the surrogate endpoints the trial tests for. Testing for changes in surrogate markers is commonplace, even though they are perhaps not the most indicative measure of a drug's benefit and worth.
Using surrogate endpoints significantly reduces the cost and time required to complete clinical trials, but the true significance of those endpoints to the disease or condition the candidate drug is supposed to treat is often questionable. Controlled trials would be much better off studying whether the drug actually affects the disease state being treated. There are some well-known and highly publicized examples of why measuring only surrogate endpoints, and not changes in disease state, is flawed.
Disadvantage: Value Of Drugs
Contrary to a common misconception, the FDA does not consider the value of new drugs sent for approval. By law, the FDA only considers safety and efficacy; it does not consider the drug's cost or whether it's needed. Because of this, drug manufacturers often invent new drugs that are only slightly different from existing products, known as "me-too" drugs. These me-too drugs usually do not have a significant advantage over existing products and can cost much more, so their value is questioned.
In Phase IV clinical trials, postmarketing studies are conducted to gather additional information, including risks, unknown side effects, benefits, and optimal use. Phase IV largely depends on the patient population reporting adverse events to the FDA via a program known as MedWatch. MedWatch is an FDA program for reporting serious side effects/reactions, product quality problems, and therapeutic failures. If you are taking a medication and have had a serious reaction to it, you should fill out the FDA form and give it to your health professional, or submit the form online.
Drug safety is a complex issue. No drug is 100% safe and devoid of side effects. Medications can save and prolong lives, as well as improve the quality of life of countless people. However, anything can produce unwanted or unexpected effects in some people at varying dosages. Many times these side effects occur so rarely that they go unnoticed until the drug is available to the general population; the true likelihood of adverse events only becomes clear once the drug reaches a broader market. The dilemma with medications used to treat diseases is that certain severe adverse reactions can be deemed acceptable if the effects and benefits of the drug are unique and valuable. This is obviously quite subjective. At what point is the therapeutic effect sufficient to make sometimes-severe adverse events acceptable? It's an intriguing debate.
The Pharmaceutical Industry
"Once again Merck is the most admired large corporation in America, according to Fortune magazine, the fourth consecutive year the giant pharmaceuticals company has taken the top spot in Fortune's annual popularity poll."
The quote above is from a 1990 news article discussing Fortune's ranking of the "Most Admired Companies in America". Times have certainly changed. There is currently huge mistrust of the industry. Critics contend that the public needs to be protected from greedy companies beset with fraud, misconduct, and unethical behavior. While a small portion of actual drug development is government-run, our system in the United States relies on public pharmaceutical companies that are in business to make a profit and oblige shareholders. It's a delicate balance between mistrust of the industry and the realization that new and better drugs can be hugely beneficial for the health of both people and animals. While we won't discuss everything in this article, we do want to touch on three points: price, promotion, and liability.
The price of prescription drugs is always a hot topic. As this article has touched on, the drug development process is extremely lengthy, expensive, and risky. Only a small number of compounds that enter development ever make it to market, and as such, drugs must be priced to recover the costs of invention, development, and marketing (which is controversial in its own right). That said, many new drugs are discovered with the help of government grants (taxpayer money). In addition, the United States typically has the highest drug costs in the world by a substantial margin, meaning that Americans essentially subsidize drug costs for the rest of the world. These facts are sources of extreme irritation. As we have seen recently, there is, and has been, a battle brewing between our free-market system and the role of government in health care.
Patients want their medical care providers to learn all they need to know about drugs from the medical literature, not from pharmaceutical sales representatives. Instead, we have an abundance of print advertising and sales visits directed at physicians, as well as "direct-to-consumer" advertising aimed at the public. There are more than 100,000 pharmaceutical sales representatives in the U.S. who visit pharmacies and physicians on a regular basis. These marketing practices have garnered intense criticism, as the cost of such promotion can sometimes outpace the dollars that went into developing the drug. The United States is currently one of only two countries that allow direct-to-consumer advertising (New Zealand being the other). While many see the promotion of drug products as unethical, it has become much more regulated than in the past. Accepting gifts or any compensation from a drug company for drug promotion is strictly forbidden in many medical settings and in many states, and the industry has recently adopted an enhanced code on relationships with U.S. healthcare professionals that prohibits the distribution of non-educational items and bars company sales representatives from providing restaurant meals to healthcare professionals.
Like pricing and promotion, liability is a huge issue in the pharmaceutical industry. You would be hard-pressed to find someone who has not seen a lawyer on TV discussing the legal prospects for patients harmed by medications. In essence, product liability laws are intended to protect consumers from defective products. Pharmaceutical companies can be sued for a variety of reasons, including deceptive promotional practices, faulty design, and failure to warn consumers of known risks. Of course, injured persons are entitled to pursue legal action when they are harmed by a drug, but it's important to note the negative effects of product liability lawsuits. First, the legal expenses can be astronomical, and those costs are often shifted to the consumer. Second, pharmaceutical manufacturers may become overly cautious with new drugs and increase the length and number of trials conducted, thereby delaying access to new medications or potentially discarding beneficial drugs. Third, companies may be discouraged from producing drugs used in small populations of patients: it can be tough to find side effects during clinical trials with small populations, and the risk of adverse events may be too high for a company to further develop and release the drug.
We hope this article gave a good overview of drug development, the clinical trial process, and the pharmaceutical industry! Feel free to email us any questions or comments you may have!