
No, Circumcision Was Not a Mark of Slavery

Anti-circumcision activists (self-branded “intactivists”) claim that circumcision is a mark of slavery—specifically, that slave-owners used circumcision as a means of subjugating their slaves. Some of their memes specifically claim that white American men forced circumcision on their African American slaves. Nothing could be further from the truth! In reality, throughout history, slaves and subjugated races have either been required NOT to circumcise, or have been left alone. I could not find evidence that circumcision was forced on slaves.


Before America

Let’s step back a bit. Some intactivist sources open by claiming that the ancient Egyptians circumcised their slaves by force, i.e., that circumcision was a mark of slavery in Egypt. This is completely false.

In Ancient Egypt, circumcision was recognized as having hygiene and health benefits. It appears to have been primarily a practice of the middle class and the wealthy: nearly all pharaohs were circumcised, and very few slaves were (except the Jews, who practiced circumcision before becoming slaves to the Egyptians). Of course, there were certainly exceptions to every rule, but the intactivist claim that circumcision was a mark of slavery in Egypt is simply not true. The few pharaohs who chose not to undergo the procedure apparently did so to show their spiritual and political superiority over the priests, who performed all circumcisions [1]. If circumcision was predominately a practice of the wealthy and the pharaohs, how could it be a mark of slavery?


Maccabean Revolt

In fact, ironically, the opposite is typically true in human history: circumcision has generally been prohibited for subjugated peoples rather than required of them. For example, the Jews were forbidden to circumcise when the white Greeks ruled over them. Some complied while others continued circumcising in secret, but their ultimate reaction was to fight back in the Maccabean Revolt, a victory commemorated at Hanukkah [2]. As another example, the advanced civilizations of the Americas, such as the Aztecs, practiced circumcision. When the white Spanish conquistadores instituted a systematic destruction of the indigenous cultures, part of their method was to prohibit circumcision, which is why Hispanics to this day generally do not circumcise: the white man took away their right to do so back in the 1500s [1].

In other words, rather than being forced to circumcise, subjugated peoples have generally been left alone or forced NOT to circumcise.

 

Why Did White Americans Circumcise?

Before we can understand why circumcision might have been recommended for or required of African Americans, we must first understand what Americans thought of circumcision. Why did Americans, Europeans, and others start circumcising in recent centuries?

King Louis XVI

Phimosis has been recognized as a serious foreskin problem since ancient times; the Greek god Priapus, ironically a god of fertility, is depicted as having severe phimosis, which would have limited his own fertility. In the 1770s, French King Louis XVI suffered from phimosis so severe that he was infertile for the first eight years of his marriage. After his brother-in-law, Austrian Emperor Joseph II, convinced him to get circumcised, he promptly fathered three children. This may have been the start of circumcision among European royalty, most of whom apparently came to favor it, though it remained uncommon or rare among the common people [1].

As far back as the 1820s, it was recognized that circumcision reduced the risk of gonorrhea* [3]. By the 1850s, it was recognized to reduce the risk of syphilis [4], and since at least 1904, if not before, to reduce the risk of penile cancer [5]. Furthermore, during the 1800s, bacteria were identified as causes of disease and hygiene as a way to prevent bacterial infections; but bathing was still rare (a weekly event at best), so hygiene with a foreskin was very difficult, as demonstrated by numerous medical publications on the subject in those days. Surgery was also becoming safer during this period, so it was no longer seen as a last-ditch effort against death but as something one might undergo for preventive health. It was also believed that a circumcised penis performed better sexually. This combination of recognized health benefits, poor hygiene, and a belief that circumcised men were sexually superior, along with advances in surgical technique that made surgery a much safer proposition, led to a gradual rise in the circumcision rate [1].

anti-masturbation device

Around that same time, some uncircumcised men proposed that it was impossible for circumcised men to masturbate. (Allow us a pause for laughter.) Circumcised scholars proved them wrong. Personally, I would have loved to see that scholarly convention. Nonetheless, for this and other reasons, a few people suggested that circumcision might prevent masturbation, which was at that time thought to cause mental illness. However, most sources promoting circumcision made no mention of masturbation, and most sources demonizing masturbation made no mention of circumcision, so this was obviously not a widely, much less universally, accepted theory [1].

The experiences of American, Canadian, Australian, and other soldiers in WWI and WWII, where uncircumcised soldiers developed horrific infections and required circumcision, led to a sudden, dramatic rise in the circumcision rate that mere concerns about health and hygiene had never produced [6]. Thus, circumcision became popular in the U.S., England, Australia, New Zealand, and Canada. By 1949, the circumcision rates in the U.S. vs. England were 45% vs. 50% for poor boys and 94% vs. 85% for rich boys. In Australia and New Zealand, there were no such class distinctions, and by 1950, circumcision was nearly universal among whites [1, 9].

However, circumcision has always been less common for the poor and minority races. So how do intactivists get the idea that circumcision became a mark of slavery for African Americans?

 

Circumcision and African Americans

Now on to the question of circumcision in American black slavery.

All of the intactivist articles I’ve read fail to provide any pre-Civil War sources. In other words, they provide absolutely no references to African American circumcision before slavery was abolished. So I’m not sure how they can claim that it was a mark of black slavery committed against blacks by whites. Then again, intactivist sources are known to lie shamelessly…

On the other hand, after the Civil War, there were several publications or speeches suggesting that forcibly castrating black men would protect vulnerable white women from rape. At the same time that uncircumcised men thought circumcised men couldn’t masturbate, they also thought circumcised men were less likely to commit rape. So at least one person suggested that circumcision would be a kinder and more humane method than castration, especially given the proven health benefits of circumcision, as there were no known health benefits to castration.

Furthermore, there were discussions in the early 1900s about the rising rate of syphilis among the black population. Because it was known that circumcision lowered the risk of syphilis, and because it was already recommended to whites for that reason, it made sense to recommend it to blacks as well. In this case, forcing it on black men was not suggested; the recommendation simply read, “As regards personal prophylaxis, all male babies should be circumcised,” which is similar to the language in discussions of white circumcision of the period.

There were also many other recommendations, including condoms (“prophylactic packages”), addressing cocaine and alcohol addiction (since substance use was involved in many rapes), home studies to prevent overcrowding, curfews, making syphilis a legally reportable condition (as it is today, and as smallpox, measles, pertussis, and other communicable diseases were in those days), provisions for the medical care of children born with syphilis, improving care in hospitals (see quote below), improving care in clinics, and more. Altogether, there was exactly one sentence on circumcision as a preventive, taking up less than four lines of text; the other recommendations took up 26 sentences and over 80 lines of text** [7]. Note also that this was before antibiotics, so there was no truly effective treatment for syphilis; thus, most energy was expended on prevention.

“The way that syphilis is treated in the average ward or outpatient department is a disgrace. […] If a factory turned out goods in the slipshod way that the average hospital hands out syphilitic medication, it would soon go to the wall.” [7]

But again, there is no evidence that circumcision was actually forced on African Americans as a routine measure, either as a mark of slavery or as a means of racial subjugation.

In short, intactivists have drummed up a number of articles that apparently represented minority opinions and were never followed through on. In these articles and speeches, various racists and non-racists alleged that circumcision would benefit the African American male (or others) for a variety of reasons. The racist reasons included preventing black rape of white women; the non-racist reasons included prevention of STDs. The racist ones rarely called for compulsory castration and circumcision of African American males; the non-racist ones called for recommending circumcision to African American males or parents. Speeches on the subject were even given at African American conventions, such as the Coloured Physicians’ Association in 1889 [8]. However, intactivists have failed to present evidence that male circumcision was forced on African Americans at any point, much less that it was a mark of African American slavery.

 

Conclusion

In conclusion, I could find no evidence that circumcision has ever, much less predominately, been a mark of slavery anywhere in the history of mankind. Rather, slaves and subjugated peoples have been forced by the white man NOT to circumcise in more than one instance. While there were certainly propositions (for both racist and non-racist reasons) that circumcision be recommended to African American males for the prevention of various ills, I can find no evidence that it was ever forced on them. Rather, it seems mostly to have been withheld from them by differences in socioeconomic status, as circumcision was predominately a practice of the wealthy, and African Americans have long been economically disadvantaged and oppressed.

 

~~

 

FOOTNOTES

*Modern research indicates that this might be false, but it was considered a medical fact back then.

**I actually was quite surprised by this article. The author went to great lengths to emphasize that many African Americans have done well for themselves and are physicians, lawyers, etc., and that there is no concern about syphilis among this group; that many European cities have higher illegitimate birth rates than African Americans do, so it is not a uniquely African American problem at all; and that many white children have deplorable morals compared to African American children. He then almost apologetically reiterated that, nonetheless, African Americans were for some reason more affected by syphilis than whites were. Until reading this article, I was under the impression that political correctness did not exist in the early 1900s! He proved me wrong. Nonetheless, intactivists contend that this article is an example of stereotyping. It seems they didn’t bother to read the entire article.

 

REFERENCES

[1] Cox, G., & Morris, B. J. (2012). Chapter 21: Why circumcision: From prehistory to the twenty-first century. In Surgical Guide to Circumcision.

[2] History of Hanukkah: https://www.facebook.com/CircumcisionResource/photos/a.735986419837365.1073741827.712201812215826/896702953765710/?type=3

[3] Abernethy, J. (1828). The Consequences of Gonorrhoea. Lectures on Anatomy, Surgery and Pathology: Including Observations on the nature and treatment of local diseases; delivered at St. Bartholomew’s and Christ’s Hospitals, Chapter XXII (pp. 315-316). 163, The Strand, London: James Bulcock.

[4] Hutchinson, J. (1855). On the influence of circumcision in preventing syphilis. Medical Times Gazette, 2:542-543.

[5] Sutherland, D. W. (1904). The Middlesex Hospital Cancer Research Laboratories. Archives of the Middlesex Hospital, 3:84. https://books.google.com/books?id=7o5MAQAAMAAJ&printsec=frontcover#v=onepage&q&f=false

[6] https://www.facebook.com/CircumcisionResource/photos/a.735986419837365.1073741827.712201812215826/764754116960595/?type=3

[7] Hazen, H. H. (1914). Syphilis in the American negro. Journal of the American Medical Association, 63(3):463-468.

[8] At least, according to an intactivist website. I was unable to locate the source they cited.

[9] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2051968/?page=3


A Brief History of Pertussis Vaccines

Previously, I wrote about the dangers of attempting to protect an infant by cocooning (vaccinating all of his adult contacts against whooping cough), demonstrating how doing so actually increases the risk to the infant rather than decreasing it. I discussed how I’ve never been a fan of influenza or HPV vaccination, but how, due to research published primarily in the last couple years, I’ve come to feel similarly about pertussis vaccination.

Whooping cough deaths and cases dropped dramatically prior to the introduction of the vaccine, and they continued to drop after its introduction, decreasing by about 99% between the mid-1940s and 1970. Vaccination rates fell amid rising concerns about the safety of the DTP vaccine in the 1970s-1990s, but they have steadily risen since then and are now at an all-time high. Nevertheless, pertussis incidence has steadily increased since the 1980s in spite of those simultaneously rising vaccination rates.

As I was reading studies and articles about the many possible explanations for this paradoxical increase, I came across a fascinating, detailed (and apparently award-winning) article about the history of pertussis vaccines, authored by Dr. Geier, a former researcher at the National Institutes of Health (NIH) and advisor to the Centers for Disease Control and Prevention (CDC). After reading it, I’m amazed at how much disinformation abounds on the internet about this topic! You may not be as fascinated by the topic as I am (in which case, you can skip this one and wait for my next blog post), but I found it so interesting that I summarized the article and filled in a few blanks from other sources. So without further ado, I present to you a brief history of pertussis vaccination.

 

And So It Starts

In 1906, researchers Bordet and Gengou developed a technique to grow B. pertussis in the laboratory, which paved the way for the creation of a pertussis vaccine. Bordet and Gengou produced the first whole-cell pertussis (wP) vaccine in 1912, and by 1914 there were six U.S. manufacturers of pertussis vaccines. Pertussis vaccines without formal testing were used sporadically between 1914 and 1925. The first clinical trials of wP vaccines were published in 1925 and 1933; the 1933 study was the first to report serious adverse effects, listing two deaths that occurred within 48 hours of vaccination. The first modern wP vaccine, combined with diphtheria and tetanus toxoids, was created in 1942 by Dr. Pearl Kendrick.

Because the wP vaccine does not inactivate endotoxin or pertussis toxin, it may be associated with some or all of the side effects of pertussis infection, from fever to seizures, shock, and death. Evidence that the wP vaccine was more dangerous than the acellular pertussis (aP) vaccine was reported as early as the 1930s and considered conclusive by the 1950s, with the first deaths reported in 1933 and the first published reports of irreversible brain damage appearing in 1947 and 1948. By 1948, there were a dozen manufacturers of DPT. The “mouse toxicity test,” which essentially determined the toxicity of the vaccine by seeing how many mice died from it, was introduced to ensure licensure of safer vaccines; however, researchers concluded in 1963 that there was no correlation between mouse safety and human safety. From the late 1940s to the early 1960s, physicians continued to use wP vaccines because they had no other choice on the market and because manufacturers hid the presence of endotoxin in the vaccine and its associated risk.

Vaccine manufacturers began a successful lobbying campaign of pediatric societies and state legislators in the 1940s, ultimately resulting in legislation requiring DTP vaccination prior to school entry in most states by the mid-1960s. However, with such widespread vaccination came a stream of published reports of irreversible brain damage and deaths resulting from the vaccine, appearing almost every year from the early 1950s through the early 2000s, with additional reports coming out of other countries. This causal relationship was deemed definite by a report from the National Institutes of Health (NIH) in 1963. Criticism of the wP vaccine due to its high rate of adverse effects, cited at 93% in a 1984 study, increased through the 1970s and peaked in the 1980s.

 

A Better Option?

The first aP vaccine was created in the 1920s, and it was obvious from at least the 1930s that it was associated with fewer adverse events than the wP vaccine. Lederle Laboratories patented a new aP vaccine in 1937, which was shown clinically to be 94% protective against disease (significantly more effective than the wP vaccine) and was used widely in the 1940s. However, new federal laws requiring expensive and labor-intensive efficacy testing of aP vaccines were passed, and so in 1948 Lederle ceased production of its more expensive but more effective and less reactogenic aP vaccine and began producing a wP vaccine instead. Another aP vaccine was produced in 1954 but never licensed or marketed in the U.S. due to the higher cost of production and increased clinical trial requirements.

Eli Lilly Company created an aP vaccine named Tri-Solgen. Tri-Solgen was associated with significantly fewer adverse reactions than wP vaccines and was sold widely from 1962 to 1977, at one point capturing up to 65% of the U.S. market for pertussis vaccines. Merck Sharp & Dohme produced another aP vaccine in 1960 that was found to be both safer and more effective than the wP vaccines, but ceased production by 1963 due to the cost. The following year, 1964, Merck also removed all its wP vaccines from the market, citing fear of lawsuits: having developed a safer and more effective aP product (which did not sell), the company could be held liable for damages caused by its wP vaccine. Many other aP vaccines were produced but never marketed, due to their cost and to similar concerns about legal liability for continuing to sell the more dangerous and less effective wP vaccine while possessing a safer and more effective aP alternative. Due to these concerns, the market severely contracted, and only four manufacturers were still producing DTP vaccines by the 1970s.

Lilly ceased production of all biologic products in 1975 and sold the rights to its high-quality aP vaccine Tri-Solgen to Wyeth. However, the yield was low, and when Wyeth reformulated the vaccine to increase its yield, the government required new safety and efficacy trials. Wyeth determined that the cost, both financial and legal, wasn’t worth it and ceased production of Tri-Solgen; specifically, Wyeth’s concerns were the same as Merck’s had been: that the studies would show the aP Tri-Solgen to be safer and more effective than Wyeth’s wP vaccine, making the company legally liable for continuing to market an inferior product. Hence, the only aP vaccine on the market became unavailable after 1977. By 1984, Wyeth had also completely stopped production of pertussis vaccines, again due to concerns of legal liability from its failure to produce its safer product. The end result was that only two pertussis vaccine manufacturers remained in the U.S., and both produced only the wP vaccine.

 

Trouble in Paradise

In 1975, two babies in Japan died following DPT vaccination, two of 37 SIDS deaths linked to vaccination; in response, the Japanese government initially banned the DTP vaccine but later that year resumed vaccination in children over age 2. The following year, 1976, the government sent Dr. Sato to the NIH to study aP vaccine production. His aP vaccine was tested between 1978 and 1981 and found to be nearly 100% effective and significantly less reactogenic, and so the Japanese government mandated a switch to aP vaccination in 1981. During this period, infant deaths plummeted, taking Japan from 17th place in international comparisons of infant mortality to 1st place, with the lowest infant mortality rate in the world. (Coincidentally, when vaccination of children as young as 3 months of age was reintroduced in 1988, the SIDS rate quadrupled.)

Also in the 1970s, rising awareness of vaccine adverse effects led to a reduction in the pertussis vaccine compliance rate. Pertussis is an epidemic disease: there are periodic outbreaks roughly every 3.3 years, with low disease rates in the interepidemic periods. Yet the interepidemic period that coincided with the lowest pertussis vaccine compliance rates was an unusually long one, with the lowest whooping cough incidence on record. In the 1970s, the U.K. determined that the benefits of continued wP vaccination outweighed its risks, while Sweden determined the opposite, pointing out that no one there had died from pertussis since 1970 and that the evidence of a causal relationship between wP vaccines and encephalopathies was too strong to ignore, and banned the wP vaccine. Most studies of efficacy look only at the ability of a vaccine to produce an antibody response, termed by some “research efficacy.” However, because the presence of antibodies does not necessarily correlate with immunity, a study of actual disease rates may be used to determine the ability of a vaccine to prevent disease, termed by some “clinical efficacy.” The wP vaccines were determined to be 45-48% clinically effective, while the Japanese aP vaccines, when tested in Sweden, were found to be 55-69% clinically effective. Even when the Swedish scientists compared a two-dose regimen of aP vaccines to a five-dose regimen of wP vaccines, the aP vaccines were found to be more effective.

In the 1970s and 1980s in the U.S., several factors contributed to consideration of abandoning wP vaccination: the relative absence of whooping cough in the population; improvements in the medical treatment of whooping cough; the serious adverse effects of the wP vaccine, which led health clinics to require parents to sign an informed consent before a wP vaccine was given; several SIDS deaths in 1979 that the CDC deemed to be caused by a particular lot of the wP vaccine, prompting the FDA to order a recall of the defective lot, followed by a reversal of the recall and efforts by manufacturers to prevent future recalls (e.g., Wyeth began spreading lots out across the country rather than sending an entire lot to one area, so that adverse effects of any one lot would not be noticed as quickly); and numerous lawsuits beginning in 1981, which succeeded, ironically, because it was argued that the manufacturers had known how to produce a safer aP product but chose not to. (Unsuccessful lawsuits had been filed previously.)

In 1982, a television program about the adverse effects of DPT vaccination raised parental awareness so much that the attorneys trying these cases were flooded with hundreds of requests for representation. The vaccine manufacturers attempted to stop the cases by harassing the expert witnesses, leading at least one to file suit against them. Nevertheless, by 1985, 219 such lawsuits had been filed. Pressure from parents, especially from a group formed in 1982 called Dissatisfied Parents Together, led the American Academy of Pediatrics (AAP) to conduct over 8 months of hearings to develop recommendations for a federal compensation program for vaccine-injured children. Due to the AAP’s recommendations and the large-scale civil litigation against vaccine manufacturers, Congress introduced the National Compensation Act in 1983, which sought to limit liability for vaccine injuries. One manufacturer agreed to settle out of court for $26 million and then cited its case as an example of why the act was needed.

In 1986, Congress passed the National Childhood Vaccine Injury Act, which established, among other things, the National Vaccine Injury Compensation Program (NVICP) and essentially ended litigation against vaccine manufacturers. However, with the threat of litigation gone, manufacturers were no longer under pressure to produce a safer aP vaccine. Foreseeing this, Congress also stipulated in the Act that the Institute of Medicine (IOM) hold hearings and make recommendations for improving vaccines in general and the pertussis vaccine specifically.

 

Safety Wins

As previously stated, the causal link between DPT and neurological sequelae was deemed definite by the NIH in 1963. However, after receiving several generous donations from vaccine manufacturers, and being staffed and/or headed by former and current employees of vaccine manufacturers, the AAP and the Pediatric Neurology Society “mysteriously” reported in 1992 that there was no such link. This was followed by several heavily manufacturer-funded researchers publishing articles that also attempted to deny the link.

Backing up a few years, let us examine what the government saw. In 1985, the Institute of Medicine (IOM) published a report stating, among other things, that in spite of its initially higher costs, the aP vaccine saves on overall medical costs compared to the wP vaccine, and that the United States would save millions of dollars if the wP vaccine were replaced by the aP vaccine, given the wP vaccine’s high rate of adverse reactions; it advised that the highest priority be given to making the switch. However, this recommendation was shelved, and when another IOM committee convened in 1990, only five years later, its members were surprised to learn that data presented in the meeting came from their own archives. Nevertheless, the evidence against the wP vaccine was so overwhelming that, regardless of the opinions of those bought by the manufacturers, the IOM determined that the causal link between wP vaccination and encephalitis was definite. The IOM convened a third time in 1993 to discuss the DTP vaccine again and determined that it definitely causes permanent brain damage. Even the AAP failed to argue the point, instead merely notifying its members of the IOM’s position.

In 1992, the FDA approved the use of aP for the boosters given at 18 months and 6 years of age. In 1996, the FDA approved the use of aP for the entire schedule. Finally, by the beginning of 2001, the wP vaccine had been removed from the U.S. market, though American manufacturers continue to produce the cheaper (in every sense of the word) wP vaccines for sale in the third world.

 

“The development and acceptance of acellular pertussis vaccine in the United States demonstrates that scientific evidence alone is not always enough to change harmful medical practices. Given the powerful resistance to change demonstrated by the pharmaceutical industry, it took years of litigation, consumer advocacy, international scientific development, and congressional action to create a new norm for childhood immunization. It would seem that open discussion of vaccine problems in the scientific and medical communities, along with policies that preclude those with a conflict of interest from determining vaccine policy, might help to prevent similar difficulties in the future in the rapidly expanding vaccination field.” (Geier & Geier, 2002, p. 284)


References

Centers for Disease Control and Prevention (1997). “Vaccination: Use of acellular pertussis vaccines among infants and young children recommendations of the Advisory Committee on Immunization Practices (ACIP).” Morbidity and Mortality Weekly Report, 46(RR-7):1-25. Retrieved from <http://www.cdc.gov/mmwr/preview/mmwrhtml/00048610.htm >.

Fine, P.E.M., & Clarkson, J.A. (1982). “The recurrence of whooping cough: Possible implications for assessment of vaccine efficacy.” The Lancet, 319(8273):666-669. doi: 10.1016/S0140-6736(82)92214-0.

Geier, D., & Geier, M. (2002). “The true story of pertussis vaccination: A sordid legacy?” Journal of the History of Medicine and Allied Sciences, 57:249-284. Retrieved from <http://www.researchgate.net/publication/11177062_The_true_story_of_pertussis_vaccination_A_sordid_legacy >.

Hieb, L. (2015). “How vaccine hysteria could spark totalitarian nightmare.” WND. Retrieved from <http://www.wnd.com/2015/02/how-vaccine-hysteria-could-spark-totalitarian-nightmare/ >.

Howson, C.P., Howe, C.J., Fineberg, H.V., eds. (1991). “Appendix B: Pertussis and rubella vaccines: A brief chronology.” In Adverse Effects of Pertussis and Rubella Vaccines: A Report of the Committee to Review the Adverse Consequences of Pertussis and Rubella Vaccines. Institute of Medicine Committee to Review the Adverse Consequences of Pertussis and Rubella Vaccines. Retrieved from <http://www.ncbi.nlm.nih.gov/books/NBK234365/ >.