Electric and Magnetic Fields
For over 40 years, scientists have been investigating the possible effects of EMFs on human health. Hundreds of epidemiological studies have been conducted on various groups, including electric utility workers and the general public. In addition, numerous laboratory studies have been conducted on the effects of fields on the living cells of various animal species as well as humans.
To date, no studies have been able to show that fields at the levels found in the home or workplace have any clear effect. However, some doubt persists as to whether a relatively weak magnetic field (0.4 µT) could increase the risk of childhood leukemia. Data on the subject remain contradictory.
To assess the health risks of a given chemical or physical agent, the biological effects in a population exposed to a given level of the agent are observed and compared to those in a population that has not been exposed, normally called the "control group."
In experimental studies, care is taken to ensure that environmental factors that could influence the subjects’ behavior or metabolism, such as food, drinking water, room temperature and the laboratory’s cycle of light and dark periods, are the same in exposed populations and the control group. In addition, the level of the agent under study is controlled right in the laboratory, which makes it possible to expose selected groups to specific levels and to maintain those levels throughout the exposure period. This method lets researchers form control groups that are not exposed to the agent being tested.
There are three types of experimental studies: in vitro studies, animal studies and studies on human volunteers.
In vitro studies involve exposing cells from the same cell culture and with the same genetic characteristics to a chemical or physical agent. The advantage of these studies is that they allow researchers to determine whether a cell element or cell type is sensitive to the agent tested. However, they also have a disadvantage: it is impossible to extrapolate the consequences of the effect observed and draw conclusions for the health of the living organism as a whole—and even less so for that of a human being.
In animal studies, animals of the same strain (for instance, Sprague-Dawley rats), generally provided by the same breeder, are used for both the exposed groups and the control group or groups. As a result, the animals in all groups have the same genetic characteristics. Animal studies also allow for very high exposure levels.
The effect observed can be extrapolated to humans, but only under certain conditions.
There are differences between exposure to EMFs and exposure to chemicals. A chemical can enter an organism by the respiratory, oral or dermal route. Depending on the route, different organs can be affected (for example, the lungs, the stomach or the skin). Moreover, a chemical that enters an organism is generally broken down into by-products, called metabolites, by enzymes in the liver. These metabolites vary depending on the species, and each has its own toxicity. As a result, significant differences in a chemical’s toxicity can sometimes be observed depending on whether it is found in an animal or a human.
Because EMFs are not chemicals, they cannot be converted into metabolites by liver enzymes. In addition, due to the physical nature of EMFs, all cells and organs are exposed to the same field intensity, whatever the organism in question. As a result, there is no reason to believe that the potential toxicity of EMFs differs in animals and humans.
Accordingly, it is very likely that the effects observed in animals exposed to EMFs can also be observed in humans, given the same exposure conditions. Conversely, if no effects are observed in animals, it is unlikely that any will be seen in humans.
Unlike laboratory animals, human beings rarely share the same genetic makeup, unless the study subjects are exclusively identical twins. Since recruiting twins is seldom practical, one approach that can be used to limit the impact of genetic differences is to have the same volunteers serve as both test subjects and their own controls. In such cases, the characteristics that interest researchers are measured in each subject before and after exposure to the physical or chemical agent to detect whether exposure has had any effect. Studies on human volunteers offer the advantage of allowing the effect of a given agent on human physiology to be measured directly in a controlled laboratory setting. However, they do not allow researchers to measure the effects of very high exposure levels.
Health studies on human populations, called epidemiological studies, involve comparing the risk of disease for individuals exposed to a certain chemical or physical agent in the general environment or a specific place (home, school, office, factory, etc.) to the risk for individuals who have not been exposed. The number of people in each group can be large, even in the thousands. Information about their state of health is gathered by means of questionnaires completed by the subjects, medical records from businesses or hospitals, and government statistics. Information about exposure levels is collected from questionnaires completed by the subjects, information gathered in the workplace and direct measurements. Since epidemiological studies do not involve carrying out experiments, the associations observed are subject to observational errors and consequently are not always causal in nature.
Indeed, contrary to the situation in experimental studies, researchers conducting epidemiological studies cannot control environmental factors that could influence the incidence of the disease in question. In this type of study, therefore, it is difficult to isolate the effect of a chemical or a physical agent from all the other factors that could determine whether a disease develops, unless information is obtained about the impact of these confounding factors on the health of the population in question.
There are two types of epidemiological studies: retrospective (case-control) studies and prospective (cohort) studies.
Most epidemiological studies are retrospective, since their purpose is often to look at rare illnesses that appear several years after exposure, as with cancer. The principle behind these studies is to determine the past or present levels of exposure to a particular agent in groups of individuals suffering from a disease, as well as in a group of individuals with no such health problems. For this reason, retrospective studies are often called "case-control studies." The association is quantified by means of an odds ratio (OR). If the individuals most affected by the health problem studied are also those who are or have been the most exposed to the agent in question (OR>1), this may indicate the existence of a link between the illness studied and exposure to the agent. On the other hand, if the individuals most affected are those who are or have been the least exposed (OR≤1), this may indicate that there is no such link.
Prospective studies are more difficult to carry out and take longer than retrospective studies. They involve tracking the incidence of a particular disease in a group of individuals exposed to the agent studied over a long period, sometimes for decades, and comparing it to the incidence in a group of people who have not been exposed to the agent but whose numbers, age and sex distribution and social and geographic origins are the same. If the study is conducted in a business, most of the employees are generally involved, either as exposed subjects or unexposed controls (for example, lineworkers and office staff in an electric utility). Prospective studies are often called "prospective cohort studies." The association is quantified by means of a relative risk (RR). If, during the course of the study, the exposed individuals are also those with the greatest incidence of the disease (RR>1), it could indicate the existence of a link between the disease and exposure to the agent in question. On the other hand, if the unexposed individuals are those most affected by the disease studied (RR≤1), it could indicate that no such link exists.
Epidemiological Study Analysis Techniques
When researchers set out to assess a risk, the available epidemiological studies are generally analyzed one by one. In recent years, however, certain analysis techniques have been developed to pool the results of several studies of the same disease. Those most often used are the pooled analysis and meta-analysis.
In a pooled analysis, the raw data from a number of epidemiological studies are combined and reanalyzed as a single data set. This approach allows researchers to identify and quantify a risk that might otherwise have gone unnoticed if the studies had been analyzed individually. However, it does not take account of qualitative differences between the selected studies in terms of the experimental protocols or subject selection techniques they use. Consequently, results obtained in this manner do not reflect the nuances associated with the different contexts in which the studies were conducted.
A meta-analysis is similar to a pooled analysis, but it looks exclusively at the published results of several epidemiological studies rather than their raw data.
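Both approaches combine risk estimates across studies. One common pooling technique, shown here as an illustrative sketch with made-up numbers (not necessarily the method used in any particular EMF study), is fixed-effect inverse-variance weighting: each study's log risk estimate is weighted by the inverse of its variance, so larger, more precise studies count for more.

```python
import math

def inverse_variance_pool(studies):
    """Fixed-effect pooling of risk estimates (RR or OR).

    Each study is a (risk_estimate, standard_error_of_log_estimate) pair.
    The log of each estimate is weighted by 1/SE^2, then the weighted
    mean is exponentiated back to the risk scale.
    """
    weights = [1.0 / se**2 for _, se in studies]
    pooled_log = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return math.exp(pooled_log), pooled_se

# Three hypothetical studies (invented values for illustration only):
studies = [(1.8, 0.40), (2.2, 0.30), (1.0, 0.50)]
pooled_rr, pooled_se = inverse_variance_pool(studies)
```

Note that the pooled standard error is always smaller than that of any single study, which is precisely why pooling can reveal a risk too small to reach significance in the individual studies.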
Relative risk (RR) is the relationship between a disease's incidence in a group exposed to a given risk factor and the same disease's incidence in an unexposed group. It is the indicator most often used in prospective cohort studies. An RR of 1 means that the incidence of the disease is identical in the exposed group and in the unexposed group. If the RR is 2, the incidence of the disease in the exposed group is twice as high as in the unexposed group. On the other hand, if the RR is 0.5, the incidence of the disease in the exposed group is half as great as in the unexposed group.
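As a worked illustration with invented numbers (a sketch, not data from any actual study), the RR is simply the ratio of two incidences:

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """RR = incidence in the exposed group / incidence in the unexposed group."""
    incidence_exposed = cases_exposed / n_exposed
    incidence_unexposed = cases_unexposed / n_unexposed
    return incidence_exposed / incidence_unexposed

# 20 cases among 1,000 exposed subjects vs. 10 cases among 1,000 unexposed:
print(relative_risk(20, 1000, 10, 1000))  # 2.0: twice the incidence when exposed
```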
The confidence interval (CI) represents the range of values within which the true value of the indicator measured is expected to lie with 95% confidence. It is an indicator of the measurement's degree of precision and thus, to some extent, its margin of error. The narrower the CI, the more precise the measurement. In epidemiological studies, if the value 1 does not lie within the CI, the measured risk (OR or RR) is unlikely to be due to chance and is said to be statistically significant. On the other hand, if the value 1 does fall within the CI, the risk is said to be nonsignificant.
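A sketch of how such an interval can be computed for an odds ratio, using the standard log-transform (Woolf) approximation and a hypothetical 2×2 table (all counts invented for illustration):

```python
import math

def or_confidence_interval(a, b, c, d):
    """95% CI for an odds ratio from a 2x2 table.

    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    The CI is computed on the log scale, then exponentiated back.
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = or_confidence_interval(40, 60, 25, 75)
# Here lo and hi come out to roughly (1.09, 3.66): the value 1 lies outside
# the interval, so this hypothetical OR of 2 would be statistically significant.
print(not (lo <= 1 <= hi))
```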
The odds ratio (OR) is a statistical measurement used to estimate risk, derived from comparing the odds of exposure (the ratio of exposed to unexposed individuals) among the sick population (cases) with the same odds among the healthy population (controls). This indicator is used in retrospective case-control studies. An OR of 1 means that the odds of exposure are the same for the cases as for the controls. An OR of 2 means that the odds of exposure among the cases are twice as high as among the controls. On the other hand, an OR of 0.5 means that the odds of exposure among the cases are half as great as among the controls.
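The same kind of worked sketch applies to the OR, again with invented counts from a hypothetical case-control study:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR = (exposed/unexposed odds among cases) / (same odds among controls)."""
    odds_cases = exposed_cases / unexposed_cases
    odds_controls = exposed_controls / unexposed_controls
    return odds_cases / odds_controls

# 40 exposed and 60 unexposed among the cases,
# 25 exposed and 75 unexposed among the controls:
print(odds_ratio(40, 60, 25, 75))  # 2.0: exposure odds twice as high in cases
```

Unlike the RR, the OR needs no information about the size of the underlying exposed and unexposed populations, which is why it suits retrospective designs where subjects are recruited by disease status rather than by exposure.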
When a statistical association is observed in an epidemiological study, the association is more likely to be causal in nature if the following conditions are met (according to Hill, A. B., 1965, Proc R Soc Med 58, 295-300): strength of the association, consistency across studies, specificity, temporality (exposure precedes the disease), biological gradient (dose-response relationship), plausibility, coherence with existing knowledge, experimental evidence and analogy.
© Hydro-Québec, 1996-2017. All rights reserved.