By Elaine Zhu
Australia has been in the global spotlight as bushfires have ravaged millions of acres of land since late 2019, and the situation only seems to be getting worse. Long periods of severe drought and temperatures of up to 105.6 degrees Fahrenheit have exacerbated the bushfires. Not only have more than eleven million acres of forest and parkland been damaged, but the fires have also killed at least thirty-three people and nearly five hundred million animals, causing extensive damage to Australian ecosystems.
Wildfires are not an uncommon occurrence in Australia. They happen every year during the warmer, drier months, burning millions of acres. In fact, some species of Australian native plants rely on these fires to regenerate, and Australians often use controlled burns for land management and agriculture. Bushfires need fuel and oxygen to ignite: forests, greenery, and shrubs provide the fuel, while factors such as temperature, soil condition, moisture levels, and wind speed shape a fire’s speed and intensity. However, the current Australian bushfires are on a completely different scale from previous isolated wildfire events: they have burned almost eight times more land than the 2018 fires in California—the worst in California’s history.
So what exactly is causing these increasingly dangerous wildfires? To many, the Australian fires are a clear indication of the relationship between climate change and worsening extreme weather patterns. Research from the 2017 Climate Science Special Report has shown that decreases in soil moisture content from higher temperatures can be traced back to human actions and influences and will only worsen the risk of fires. Stefan Rahmstorf, a lead author of the United Nations’ Intergovernmental Panel on Climate Change’s Fourth Assessment Report, states that “due to enhanced evaporation in warmer temperatures, the vegetation and the soils dry out more quickly, so even if the rainfall didn’t change, just the warming in itself would already cause a drying of vegetation and therefore increased fire risk.” As the climate warms and creates drier conditions, it is much easier for fires to start, and these dry conditions help fires spread to other areas. This drying trend also affects the amount of rainfall that Australia receives. The Climate Council released a press briefing explaining that the regions of Tenterfield and Stanthorpe in Australia received 77 percent less rainfall than the usual annual average. Overall, the southeastern regions of Australia have seen a 15 percent decline in rainfall during the autumn months and a 25 percent decline in April and May. Rainfall is especially important during the bushfire seasons, as lower levels of rainfall lengthen the duration of the bushfires.
These bushfires haven’t just caused detrimental effects to the people and animals of Australia; they have also had lasting effects that will contribute to the global climate change problem—one of the factors that exacerbated the Australian bushfires in the first place—creating a positive feedback loop. Mark Parrington, senior scientist for the European Centre for Medium-Range Weather Forecasts, states that from September 2019 to January 2020, “the wildfires released around 400 million tons of CO2, which is roughly the same amount that the United Kingdom emits in an entire year.” The accumulation of carbon dioxide in the atmosphere will only exacerbate climate change and its global effects, including the melting rate of glaciers. Soot from the fires can latch onto glaciers, giving their surfaces an almost caramel-brown color. This change in color decreases the reflectivity of the ice, which makes the glaciers melt even faster, increasing ocean temperatures and decreasing the amount of sunlight reflected back into the atmosphere.
With the detrimental effects that the bushfires have had on Australia, it wouldn’t be too far of a leap to assume that Australian leadership might take action to reduce climate change and prevent more bushfires from happening. However, the Australian Prime Minister, Scott Morrison, has had a history of opposing action to reduce carbon emissions and has downplayed the effects of climate change on the intensity of the Australian bushfires. In the past, he has spoken out against taxing carbon emissions and has promoted and protected coal mining. The Australian bushfires are just one of the consequences of anthropogenic effects on climate change and the environment. In order to decrease the severity of bushfires and the rise of global climate change, we must recognize the harmful effects that climate change has caused and will continue causing to future generations. We must take action to reduce carbon emissions worldwide.
By Clare Nimura
A hundred years ago, you would not have much hope of survival if one of your organs suddenly stopped working. Today, however, there exists a vast system of organ donation. Surgeons perform complex medical procedures to remove healthy organs from willing donors and to replace missing or damaged ones in recipients. Problem solved! But not quite….
There are currently more than 113,000 people on the national waiting list for organ transplants, with a new person added to the list every 10 minutes. These patients have diseases like cardiomyopathy, diabetes, cystic fibrosis, or cirrhosis and are in need of working replacements for their kidneys, livers, hearts, lungs, or other vital organs. The number of people in need of transplants greatly outweighs the number of transplants performed each year (less than 40,000), and about 20 people die each day while waiting for a new organ. How did such a large discrepancy arise? The answer is twofold: under-registration of donors and a broken donation system.
On the surface, organ donation seems simple—just replace a dysfunctional organ with a healthy one—but, in reality, there are many more variables at play. Most people are unaware that only 3 out of 1,000 people die in a way that leaves their organs viable for donation, or that a single donor can save up to eight lives if they are able to donate their heart, lungs, liver, pancreas, kidneys, and intestines. If you do the math, this does not yield nearly enough viable organs for the thousands on the waiting list. Additionally, though 95% of adults in the United States support organ donation, only 58% are actually registered. These dismal statistics are compounded by a highly corrupt and ineffective system for organ recovery, which wastes thousands of viable organs every year. How did this critical system end up in such a terrible condition?
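To make the “do the math” point concrete, here is a rough back-of-the-envelope calculation. The 3-in-1,000 viability rate, the eight-organ maximum, and the 113,000-person waiting list come from the article; the figure of roughly 2.8 million annual US deaths is an outside assumption added for illustration.

```python
# Back-of-the-envelope ceiling on the annual organ supply.
# Assumed figure: ~2.8 million deaths per year in the US.
annual_us_deaths = 2_800_000
viable_rate = 3 / 1000        # share of deaths leaving organs viable (from the article)
max_organs_per_donor = 8      # heart, lungs, liver, pancreas, two kidneys, intestines

viable_donors = annual_us_deaths * viable_rate
organ_ceiling = viable_donors * max_organs_per_donor

waiting_list = 113_000        # national waiting list (from the article)

print(f"Viable donors per year:    {viable_donors:,.0f}")
print(f"Theoretical organ ceiling: {organ_ceiling:,.0f}")
print(f"People currently waiting:  {waiting_list:,}")
```

Even this best case, in which every viable donor donates all eight organs, falls well short of the waiting list; with only 58% of adults registered, the real supply is smaller still.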
In a perfect world, an organ donation would be orchestrated as follows: a potential donor would be identified, proper consent would be obtained, and the organ would be harvested and stored in sterile packaging for transport to the recipient’s transplant hospital, where the next patient on the waiting list would finally get their new organ. In reality, events rarely follow this sequence. Not only are there channels through which the very wealthy can be added to multiple waiting lists, but there is also corruption in the organ donation process itself, which leaves tens of thousands of healthy organs untouched.
Organ Procurement Organizations (OPOs) are, by federal law, the only group that can recover organs from deceased donors for transplantation. There are 58 such groups in the United States; they are non-profit contractors who are responsible for coordinating donations. The problem is that not only do these groups hold monopolies over their designated areas of service, they also follow an internal evaluation system that does not incentivize the pursuit of every viable organ. One recent study demonstrated that with reforms to increase efficiency and effectiveness of the organ donation system, there is the potential to recover up to 28,000 more organs per year and to save billions of dollars in the process. When OPOs fail to show up, people die.
How can we heal this broken system? First, you can sign up to be an organ donor at https://www.organdonor.gov/register.html. And second, you can encourage your congressional representatives to push for transparency in OPO metrics and for improved accountability. Removing perverse incentives and initiating external audits are two potential improvements. At this moment, there are viable lungs, hearts, livers, and other organs ready for transplant, and there is a sick patient somewhere who has been waiting for those organs, maybe for years, but may never get them because of the corruption in the organ donation process. This should not be the case, and there are simple solutions that would improve the situation greatly. Every small improvement is worth it; every organ recovered is another life potentially saved.
By Vivian Liu
The software revolution has taken the world by storm. Many things that we see or use in our everyday lives are automated: from data processing to virtual assistants, computer programs are helping us complete tasks that would otherwise be extremely resource-consuming.
Questions that would once have taken significant human capital can be pipelined to a computer program or inputted to a function for a quick answer. Software is particularly helpful in allowing us to answer large numbers of objective questions en masse. For example, TurboTax is a well-established computer program that combines a user interface with computation to help people calculate their taxes more efficiently.
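As a toy illustration of the kind of objective computation such software automates, consider a simple progressive tax calculation; the brackets and rates below are invented for the example and are not TurboTax’s actual logic.

```python
# Toy progressive tax calculation: a purely objective function.
# Bracket thresholds and rates are made up for illustration.
BRACKETS = [(0, 0.10), (10_000, 0.20), (40_000, 0.30)]

def tax_owed(income: float) -> float:
    """Apply each rate to the slice of income that falls inside its bracket."""
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

print(tax_owed(50_000))  # same input, same output, every time
```

Given the same income, this function always returns the same answer, which is exactly why such questions are easy to hand off to a machine; subjective questions, as the article goes on to discuss, are a different matter.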
Although such advances have clearly benefited society, it is also important to take a step back and scrutinize the demographics of the programmers behind the code. The technology industry suffers from an acute lack of diversity: only 10% of researchers in Artificial Intelligence (AI) at Facebook are women, and only 4.5% of workers at Google are African-American. Currently, women hold only 25% of all tech jobs in Silicon Valley, and this number has been declining over the last few years. The issue extends to the degrees awarded each year: according to a study conducted by the U.S. Equal Employment Opportunity Commission, men receive at least 70% of all degrees in computer science, mathematics, and engineering each year. In the 1980s, women earned 37% of all computer science degrees, but today, that number has dropped to 18%.
There are many reasons for such a stark disparity in gender and racial diversity in the technology sector. Part of the inequality stems from unequal educational opportunities: from a young age, women and minority groups generally get less exposure to these fields, and are therefore at an unfair disadvantage compared with those who regularly participate in and learn about STEM-related activities. Even those who do receive degrees in tech-related fields are often discouraged from pursuing a career in a field that is so largely male- and white-dominated. In addition, women in the technology industry are just as victimized by the “80 cents to the dollar” inequity that permeates the American workplace.
This lack of ethnic and gender diversity causes issues that compound the marginalization of minority groups. Because sexism in the workplace is as pervasive as the technology itself, a significant portion of the population is left out of the development and progress of the field. These demographic disparities have immense implications for Artificial Intelligence, a huge subfield of computer science whose presence has been growing steadily.
In particular, one of the tasks that can be handled by AI is asking a machine a subjective question. Whereas a tax calculation system can punch some numbers into some objective function and get an instantaneous answer, a machine trying to answer a subjective question must learn to simulate how a human being would answer a question. This requires AI, which involves writing code that can be trained through data processing to perform tasks that are not as easy as a simple calculation.
For example, there is no easy mathematical function that can allow a machine to perform the subjective job of an application reviewer. Instead of a straightforward operation, the goal of a programmer would be to create a machine to simulate the reviewer as closely as possible. By inputting data consisting of a human application reviewer’s decisions under different conditions, a machine can be taught to model a human’s actions in reviewing an application.
Amazon famously set about creating such a program in 2014. Programmers gathered data from past hires: given different components of a resume (university degrees, GPA, experiences, and so on), a human reviewer’s hiring decision was piped into a machine learning program, which created a complex mathematical model that output a score based on the applicant’s resume. In principle, this program would score each applicant on the strength of their application so that human reviewers could automatically screen out applications below a certain threshold and give special consideration to applications above it. However, implicit biases in the human reviewer data that the programmers used caused the model to discriminate against women. Most alarmingly, an occurrence of “women’s college” in an applicant’s resume would automatically result in a score reduction; in fact, anything involving “women,” like the phrase “women’s chess club,” would lower the score. This is an especially concerning case of data that reflects harmful biases being implemented and magnified in AI applications.
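To sketch how such a bias can surface, consider a hypothetical linear resume scorer. The terms and weights below are invented stand-ins for what a model might absorb from biased hiring data; this is not Amazon’s actual system.

```python
# Hypothetical linear resume scorer illustrating learned bias.
# A model trained on biased hiring decisions can assign a negative
# weight to a term like "women's", penalizing otherwise equal resumes.
LEARNED_WEIGHTS = {
    "computer science": 2.0,
    "internship": 1.5,
    "chess club": 0.5,
    "women's": -1.5,   # bias absorbed from historical reviewer decisions
}

def score(resume_text: str) -> float:
    """Sum the learned weight of every term present in the resume."""
    text = resume_text.lower()
    return sum(w for term, w in LEARNED_WEIGHTS.items() if term in text)

a = score("BS in computer science, software internship, chess club")
b = score("BS in computer science, software internship, women's chess club")
print(a, b)  # identical qualifications, lower score for the second resume
```

The two resumes differ only in the word “women’s,” yet the second one scores lower, so a threshold-based screen would rank it behind the first: the historical bias is not just reproduced but applied automatically and at scale.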
In addition to this application-reviewing program, another example of this problem appears in facial recognition software: for the same computer program, facial recognition works better for white males than for dark-skinned individuals. The difference in accuracy between fair-skinned and dark-skinned individuals does not necessarily come from bad intentions; it is an unfortunate byproduct of a skewed set of programmers and, therefore, a skewed set of training data and code.
People are recognizing the magnitude of the issue and many are fighting to remedy this problem. An example is Stanford AI Professor Fei-Fei Li’s efforts to encourage young women to pursue AI-related careers—in 2015, she founded a summer program called “AI4ALL” which draws girls from all over the world to study computer science at Stanford University. In addition, outreach programs such as STEM Starters at Columbia University are connecting with schools in the inner-city to spark interest among students in STEM, helping to equip the next generation to overcome the imbalance in the tech industry.
This is AI’s diversity crisis. The computer programs that help us with our everyday lives have real programmers behind them, and this community suffers from an acute lack of diversity. Extremely sensitive software, such as surveillance and facial analysis systems, is currently created by programmers who are predominantly male and white. Without a diversification of the perspectives behind computer code, the same biases and inequities that affect society today will be magnified and perpetuated by computer programs. As AI continues to advance and become a larger presence in our lives, allowing this trend to continue will have dire consequences for society.
By Victoria Comunale
Sleep and memory-formation share a fickle relationship. We are told to get plenty of rest before a test because sleep plays a crucial role in learning by aiding memory consolidation. Yet, sometimes, after a night of rest, we can completely forget about something that was on our minds the night before. Sleep’s role in memory consolidation and memory loss has been explored by neuroscientists before, but the balance between these two dueling occurrences, as well as the role of the brain waves in this phenomenon, still remain somewhat of a mystery.
Researchers at the University of California, San Francisco (UCSF) recently published a study examining the role of different kinds of brain waves in memory consolidation and forgetfulness in rats. In this experiment, the researchers trained rats to control a feeding tube, a skill the rats learned gradually and in which memory consolidation played a crucial role. Successfully completing the task required a rat to move the tube from point A to point B within 15 seconds. Since this learning process involved the motor cortex, the researchers studied the brain waves in this area of the brain during non-REM sleep.
There are several stages of non-REM sleep that can be differentiated by distinct patterns of brain wave activity. These stages are arranged from lightest to deepest sleep, and the waves themselves arise from the neural activity of a region of the brain known as the thalamus. The first stage of non-REM sleep, the lightest stage, is characterized by both alpha and theta waves. In stage two, theta waves dominate brain activity but are interrupted by brief bursts of higher frequency brain waves known as sleep spindles. The third and fourth stages of non-REM sleep, the deepest stages, also feature sleep spindles, but are predominantly characterized by delta waves and slow oscillation waves, which researchers were most interested in. In order to study the effects of these waves, they employed a recent technique that has grown in popularity in the field of neuroscience: optogenetics. Utilizing this technique, the researchers were able to directly interfere with the activity of neurons in the brain and interrupt the activity of the targeted brain waves, thus establishing a causal rather than correlational relationship.
Slow oscillation waves have long been suspected of playing a role in memory consolidation, yet the function of delta waves, which are more prevalent than slow oscillations, is still unknown. When the researchers used optogenetics to interfere with the slow oscillation waves, the rats’ success rate in the tube-moving task was much lower than that of the control group. Interfering with delta waves, however, had the opposite effect: the rats completed the task with a higher success rate than the control group. From these remarkable results, the researchers concluded that these dueling brain waves have opposite effects during sleep. This conclusion was unexpected, since both kinds of waves occur in the same sleep stage. Memories, therefore, are both strengthened and weakened during non-REM sleep. The difference in memory effects between delta waves and slow oscillations is stark and undeniable, contradicting prior speculation that they may have similar functions.
The researchers also focused on specific bursts of activity associated with slow oscillations, called sleep spindles. These spindles are already known to play an essential role in sensory processing and long-term memory consolidation. Because delta waves, which are associated with memory loss, are more prevalent during non-REM sleep, the researchers postulated that these spindles, coupled with the effects of slow oscillation waves, help balance memory consolidation against the memory loss associated with delta waves.
These results may also have implications for phenomena observed in humans, especially aging. It has been experimentally demonstrated that slow-wave activity has lower amplitude in the elderly. If the results from the UCSF group can be translated to the human brain, this means that the elderly are less likely to reap the consolidation benefits of sleep because of a reduction in slow oscillations and spindles. It is still unclear whether this newfound information about the dueling brain waves can be applied as a useful tool for strengthening human memories, considering that the weakening of less important memories is necessary for more important memories to be strengthened. Nevertheless, these results bring us one step closer to understanding the complicated activities of our brains.
By Dapo Lapite
Illustration by Lizka Vaintrob
The Roman Empire, the Mayan Empire, the Chinese empires: every single one of these civilizations succumbed to the spread of pathogens, and in today’s world, there is a chance of this disaster repeating. The changing climate and highly advanced modes of international transportation have led to the spread of mosquitoes, ticks, and other organisms carrying dangerous pathogens. Given the potential spread of an array of diseases, the United States must fund research aimed at finding vaccines, cures, and other medical advances in order to prevent a medical crisis.
It is imperative to examine the history of plagues in order to understand how diseases have impacted civilizations in the past. On average, somewhere in the world, a new infectious disease has “emerged every year for the past 30 years.” The recent Ebola crisis illustrated what could happen again. Ebola became a heavily discussed topic in 1976, when a new illness emerged in Yambuku, in the Democratic Republic of the Congo. At the time, Jean-Jacques Muyembe was the only virologist in the Congo. Muyembe shipped blood samples to the Centers for Disease Control and Prevention in Atlanta, where scientists identified the virus. Ebola can kill “not just the very young, old, and sick,” but also the strong and fit, by triggering a violent immune response. In 2014, Ebola caused true mass chaos as hospitals ran out of beds and cities ran out of coffins.
The best example of the United States’ lack of preparation is the Asian longhorned tick. The Asian longhorned tick is the first invasive tick to spread to the United States in around 80 years. It is native to China, Japan, Russia, and the Korean Peninsula and has also found its way to Australia and New Zealand. In Asia, the tick carries a virus that causes human hemorrhagic fever, which kills around 30 percent of its victims. In 2013, South Korea reported 36 cases and 17 fatalities.
This particular virus is not found in the United States, but it is extremely similar to the Heartland Virus, another life-threatening tick-borne disease cycling through the United States. Diseases spread by ticks are typically underreported. As a result, there are no proven measures that can be used to control several vector-borne diseases transferred by the black-legged tick, which spreads at least seven human pathogens in the United States, including bacteria that cause Lyme disease.
Climate change compounds the problem because it is a factor in the emergence of infectious diseases. Warming temperatures make the environment in the United States more hospitable to ticks and other vectors. According to the Baylor College of Medicine, as climates warm and habitats are altered, diseases can spread into unforeseen geographic areas. Ticks and mosquitoes are prime examples of species that have expanded their range into regions where they had not previously been seen. Illness from mosquito, tick, and flea bites “more than tripled” in the United States from 2004 to 2016.
Other parts of the world also fall victim to these diseases through the high volume of travel taking place. For example, chikungunya, an insect-borne disease, was previously confined to tropical regions around the Indian Ocean. Yet there have been several cases of chikungunya imported into the United States by international travelers, including one case in Louisiana. Similarly, Severe Acute Respiratory Syndrome (SARS) first appeared in China in 2002 and quickly spread to nearby countries, reaching as far as Canada because of air travel. Ultimately, SARS infected 8,000 people and killed 800 before an unprecedented global response halted the disease. The major underlying causes of the increase in vector-borne diseases are growing travel, trade, urbanization, population growth, and rising temperatures. The United States tends to be shortsighted and forgetful when it comes to the influx of diseases, and this trend has only continued in recent years.
Currently, if a massive spread of disease hit the United States, mass panic would ensue due to the lack of preparation and funding. The development of vaccines and antimicrobial drugs over the last decade created hope that infectious diseases could be controlled, but infectious diseases have continued to emerge and re-emerge, posing ongoing challenges for infectious disease research. At first, it seemed that the United States would act proactively when it committed one billion dollars to the effort in 2014. But that now looks uncertain; President Trump’s budget for 2019 cut 67 percent from the current annual funding. With less funding, the CDC will be forced to withdraw from several countries, resulting in a loss of jobs and of vital medical knowledge in those regions. With all the evidence pointing towards another plague, it is imperative that the United States begin to reinvest in the fight against potential diseases.
By Elaine Zhu
Selfishness is the act of not caring about others, thinking only about getting ahead, and profiting at others’ expense. While most people think of selfishness as an acquired human trait, research has shown that some genes can also act in a selfish or even parasitic manner to increase their own chances of being passed on to offspring. These parasitic genes don’t benefit the body’s overall fitness but instead methodically increase their own chances of transmission. Recent studies have investigated the mechanisms used by these selfish and parasitic genes and their potential applications.
In a study conducted by Nicole Nuckolls, María Angélica Bravo Núñez, and Sarah Zanders, a gene called wtf4 in the fission yeast Schizosaccharomyces kambucha was identified as one of these selfish genes. The researchers discovered that the wtf4 gene actually acts simultaneously as a poison and an antidote. During meiosis, the wtf4 gene produces a precisely timed molecular poison that spreads to all developing gametes, both those that inherited the wtf4 gene and those that didn’t. Before the walls of the spores have formed, the molecular poison spreads to every offspring of the cell.
However, the cells with the wtf4 gene also carry the antidote to the poison they created. The antidote is made in the later stages of cell development and division, after the spore walls have completely formed. Therefore, the gametes that inherited the gene are guarded against the effects of the poison, while those that did not are left with no protection; they suffer from the poison and eventually die off, thereby selecting for the wtf4 gene. By labeling the proteins, Dr. Zanders and her colleagues identified the two specific RNA messages that the wtf4 gene uses to encode the poison and the antidote. After imaging the cells while they underwent meiosis, the scientists were able to clearly confirm that the wtf4 poison spread to every cell, but the antidote appeared only in the spores carrying the wtf4 gene.
Further research is currently being conducted to identify more selfish genes, with the hope that their mechanisms can be applied in other scientific disciplines. Selfish genes can also provide insight into human infertility, since the “cheating” methods that such genes use can bias inheritance and even directly cause infertility. A selfish gene could, for instance, produce spores with an incorrect number of chromosomes, which can be detrimental to the survival of the daughter cells. Research has shown that chromosomal abnormalities are one of the leading causes of miscarriage in humans. In an interview conducted by the National Institute of General Medical Sciences, Dr. Zanders stated that “learning general principles about selfish genes in simple models will guide future searches for selfish genes that could be contributing to human infertility.”
Another exciting potential application of these selfish genes is in gene drives. Gene drives are a type of genetic engineering technology that can spread a desired set of genes through a population by increasing their probability of being inherited. Selfish gene mechanisms could enable a type of gene drive that curbs or even eradicates problematic insect populations, such as mosquitoes that transmit malaria or dengue fever. These selfish genes use a variety of methods to overpower other genes, and scientists may one day be able to utilize these mechanisms to ultimately improve the quality of human life.
By Vicky Communale
Rapidly melting glaciers, a loss of species diversity, and rising sea levels—the delicate balance of our planet is in chaos. While many deny these and other effects of global warming, or think that the consequences are distant and intangible, these ramifications are much more connected to our personal health than one would think. In that regard, a new study has found a very pressing concern—the effects of global warming will become directly intertwined with our neurological health in the near future.
Research conducted by a group at Dalhousie University in Canada predicts that in less than 80 years, over 96 percent of the world’s population will not have access to an essential component of brain health: docosahexaenoic acid. This lack of access is tied directly to the increasing water temperatures brought on by global warming.
Docosahexaenoic acid, also known as DHA, is an omega-3 fatty acid with numerous benefits, including reducing the risk of heart disease and reducing inflammation. A healthy, functioning brain requires a high level of DHA: studies conducted in the 1990s found that DHA is vital to the brain development of infants. For formula-fed infants, adding DHA to the formula was shown to improve cognitive and visual development. Conversely, a lack of DHA has been implicated in numerous neurological disorders. One study, conducted by a group from the New England Medical Center in 2006, found that the brain tissue of people afflicted with Alzheimer’s disease had significantly lower levels of DHA than healthy human brain tissue. Furthermore, the scientists conducting this study found that people with high blood levels of this fatty acid were half as likely to develop dementia as those with lower levels.
Given that DHA has been established as a vital component of healthy brain development and function, the depletion of this compound would be massively detrimental to all of us. Our bodies do not produce much DHA, so we must obtain it through our diet. The most abundant source of DHA is fish, and fish acquire DHA by consuming algae. Algae change the proportion of different fatty acids in their cellular membranes in accordance with the surrounding temperature. When water temperatures are cold, algae need to keep their cell membranes flexible. They do so by increasing the proportion of polyunsaturated fatty acids, a group that includes DHA, in their membranes. On the molecular level, the multiple double bonds in polyunsaturated fatty acid tails prevent the tails from packing tightly together, keeping the membrane fluid. As temperatures rise, however, algae replace polyunsaturated fatty acids with saturated fatty acids, which pack more tightly, countering the heat but reducing the presence of DHA. The Dalhousie University group’s study therefore predicted that, depending on the region, warming environments will decrease algal production of DHA by anywhere from 10 to 58 percent. As a result, the DHA found in fish will be significantly reduced, and consequently, our access to the compound will likewise be depleted.
While the study predicts that countries with small populations and prominent fishing industries, such as Norway and Chile, will still be able to maintain adequate access to DHA, the same cannot be said for other countries around the world. Countries with rapid population growth, such as China and Indonesia, are predicted to face severe shortages. Landlocked countries will also suffer greatly from the shortage, and the intake of DHA by their populations will fall below recommended levels.
There is some hope that future scientific endeavors may help alleviate the effects of the shortage. Several initiatives are trying to farm algae directly as a source of DHA, and others are trying to genetically engineer plants that produce high amounts of DHA, all with the goal of compensating for the damaging effects of a DHA shortage on our neurological health. At this time, however, it is unclear whether these endeavors will become a permanent solution to this issue. Even if they do, they will only be a band-aid for one of the myriad problems that arise from global warming.
By Ellen Alt
Warning: This content contains a discussion of consent and sexual abuse.
Late 2017 proved crucial for the newest wave of consent awareness in America. The Harvey Weinstein case broke, and his firing had ripple effects on society and on survivors of sexual harassment, encouraging them to come forward and share their stories as the #MeToo movement grew. Although it is unclear whether coming forward will result in justice, as seen with the confirmation of Justice Brett Kavanaugh in October 2018, the country’s understanding of consent has shifted: sexual abusers such as Kevin Spacey, Matt Lauer, Bill Cosby, Jeffrey Epstein, and Olympic gymnastics doctor Larry Nassar have begun to be held accountable. Yet although Nassar was convicted, all of medicine should apply standards of accountability and consent with the same vigor as the media industry. Medical procedures performed without consent demonstrate this need for accountability in medicine.
Imagine going through childbirth only to have your husband betray and abuse you—yes, that’s right, betray and abuse you—by asking your doctor to do something to you without your knowledge, presumably for his own sexual satisfaction: the doctor adds an extra stitch or two upon reaching the vaginal laceration point of the 12-point inspection of the new mother. This is called the husband stitch. The typical inspection involves surgical repair of the structures for urination and stool passage, while the deeper suture of the husband stitch joins the perineal muscles, which are “most important for sexual function.” Although it is commonly believed that childbirth decreases heterosexual sexual pleasure for men because of women’s loosened tissue after giving birth, long-term studies have found otherwise: “Delivery method has no long-term effect on female sexual function,” which includes pleasure for both partners as well as the woman’s ability to conceive. Even if the misconception that loosened vaginal tissue decreases sexual pleasure were true, the husband stitch would not be the best way to address it; if women find that their vaginal tissue is not as toned as it was before childbirth, pelvic floor physical therapy exercises are the best method of restoration. Doctors, husbands, and spouses should not exercise power and authority over a woman’s body, not only because the extra stitch causes pain to the recipient and does not improve sexual pleasure, but also because the lack of consent is abhorrent. According to OB-GYNs and long-term studies, vaginal tissue is sure to be stretched after giving birth, but it will return to normal without an extra stitch—so why not ask for consent instead of taking advantage of a woman’s body?
Aside from the crudely named husband stitch, another major yet under-discussed abuse of consent in medicine is the non-consensual pelvic exam. In the training hospitals where fresh-out-of-medical-school doctors fulfill their residencies, senior doctors sometimes ask these students to go against bioethics: women under anesthesia act as cadavers on which students practice pelvic exams. Pelvic exams provide an understanding of the vulva and internal gynecological organs via the external, speculum, bimanual, and rectovaginal sections of the exam. A medical exam should be “a blend of communication, respect, and technical skill,” whereas “the act of putting fingers into an orifice for the sake of education can actually do harm.” In a survey of five Philadelphia medical schools, 90 percent of students reported the practice of non-consensual pelvic exams. Some of the women under anesthesia are undergoing a gynecological procedure, but non-consensual pelvic exams are conducted during unrelated surgeries as well, such as stomach surgery. Regardless, these women have not consented to this procedure being conducted on their bodies. In medical ethics, autonomy is understood as “one’s ability to self-govern, to act in accord with one’s values, goals, and desires,” which includes self-governance over one’s own body. When a patient chooses to undergo a specific procedure, they are consenting to that procedure within their autonomy; a pelvic exam is not part of the agreed-upon procedure, so the patient’s autonomy is violated. Although the technical skill of performing a pelvic exam may be necessary for students in the future, bioethicist Phoebe Friesen and other medical professionals argue that the practice does more harm than good. Non-consensual pelvic exams directly counter medical ethics and consent, especially amid the new wave of consent awareness in America.
Considering medical ethics, the case for consent in medicine should be an obvious one. However, legislation fails us: no law regulates the husband stitch, and non-consensual pelvic exams are legal in all but six states. In a field that revolves around the health of bodies, we should treat those bodies with respect, and should have been doing so even before the #MeToo movement normalized speaking out about sexual abuse. Medicine should adopt the same stringency that the media has applied to large figures in entertainment and business. Individuals who have influence over women’s bodies, such as OB-GYNs after a birth, residents, and the doctors instructing those residents, should be held accountable. Even in the absence of legislation, these individuals should contribute to a cultural shift toward greater respect for medical ethics and for the autonomy of women and female bodies.
Friesen, Phoebe. “Educational pelvic exams on anesthetized women: Why consent matters.” Bioethics, vol. 32, 2018, pp. 298–307.
Ghorat, F., R. J. Esfehani, M. Sharifzadeh, Y. Tabarraei, and S. S. Aghahosseini. “Long term effect of vaginal delivery and cesarean section on female sexual function in primipara mothers.” Electronic Physician, vol. 9, iss. 3, Mar. 2017, pp. 3991–3996.
Herman, Christine. “#MeToo? Some Hospitals Allow Pelvic Exams Without Explicit Consent.” Side Effects Public Media, Jan. 8, 2019.
Planned Parenthood. “What is a pelvic exam?” Planned Parenthood: Health & Wellness, n.d.
Rupe, Heather, DO. “An OB Weighs in on the ‘Husband Stitch’.” WebMD Blogs, Mar. 16, 2018.
The Daily. “When #MeToo Went on Trial.” The New York Times, Oct. 4, 2019.
By Vivian Liu
It may be hard to believe, but the ride-share conglomerate Uber actually operates in the red. After subtracting driver and overhead costs from revenue, Uber reported a net loss of $3 billion last year. The obvious question is: why? How can such a popular company be losing money?
In the modern world, artificial intelligence (AI) is ubiquitous: it’s in our phones, in our healthcare system, on our roads, and helping us explore new frontiers in space. Big Silicon Valley technology companies such as Google, Uber, and Facebook are racing to funnel resources into AI research and development.
The term “artificial intelligence” refers to intelligence—vision, speech recognition, and translation, among other capabilities—displayed by a machine. Whereas humans organically develop and store knowledge in neurons, machines have to “learn how to learn” through carefully coded instructions. The field of AI is very young, as the term “artificial intelligence” was only coined in 1956. Since its inception, the field has experienced exponential growth: from the chatbots that automate the customer service experience, to Google Translate’s natural language processing algorithm, to AlphaGo’s legendary defeat of a world champion, AI has accomplished some amazing feats. The potential of AI has also taken pop culture by storm. For example, the eerily realistic robo-women in Ex Machina and the lovable Baymax from Big Hero 6 display our dichotomous perceptions of AI in the media.
Specifically, a field of AI that has gained a lot of traction in recent years is computer vision (CV). Computer vision refers to the broad field of using machine learning to process and interpret images to provide useful information for humans. For example, the classic computer vision application is “training” a program to identify a cat from an image. When you see a picture of a cat, your brain doesn’t have to work very hard to make the association between pixels on a screen and the physical object. However, it is much harder for a computer system to establish this connection. There are two main issues to tackle—first, the system must identify the location of the cat in the image. Second, the system must be able to differentiate between a cat and other objects.
This is where big data comes in. In this context, big data means an extremely large set of images—ranging in size from hundreds of thousands to millions—that have been hand-marked by humans as positive (meaning they contain the object in question) or negative (meaning they do not). The programmer then divides the images into two groups—training data and test data—and writes code that outputs whether or not the computer thinks the object is in a given image.
After the programmer has set up the data and parameters, they adjust the model’s parameters to maximize the accuracy of the program on the training data. By observing performance on the training data, the programmer can see which parameter values bring about the highest rate of identification success. The goal is to maximize the number of images correctly marked in the set. Once the programmer is satisfied with the accuracy, the program is evaluated on the test data to see how well the model generalizes to a different set of data. After a final round of adjustments, the program is ready to classify images that have not been pre-marked by a human.
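The workflow just described can be sketched in miniature. The following is a hypothetical toy example, not any company’s actual pipeline: each “image” is reduced to a single mean-brightness number, and the only adjustable parameter is a brightness threshold, which is tuned on the training data and then evaluated on the held-out test data.

```python
# Toy sketch of the train/test workflow described above (hypothetical data).
# Each "image" is summarized by its mean brightness; label 1 = contains the object.
labeled = [
    (0.9, 1), (0.8, 1), (0.85, 1), (0.7, 1),   # positives: bright images
    (0.2, 0), (0.3, 0), (0.25, 0), (0.4, 0),   # negatives: dark images
    (0.75, 1), (0.35, 0),
]

# Split the hand-marked images into training and test data.
train, test = labeled[:8], labeled[8:]

def accuracy(threshold, data):
    """Fraction of images marked correctly when classifying by brightness threshold."""
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

# "Training": sweep the threshold parameter and keep the value that
# maximizes accuracy on the training data.
best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, train))

# Evaluate the frozen model on the held-out test data.
print(f"threshold={best:.2f}, test accuracy={accuracy(best, test):.2f}")
```

Real computer-vision models tune millions of parameters rather than one threshold, but the shape of the loop is the same: fit on the training data, then check generalization on the test data.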
One of the most popular applications of computer vision is in the development of self-driving cars. Designing cars that can drive without human intervention depends on computer vision. For example, below is an image of a busy street that has been marked up by a computer vision program:
The program has even been trained to tell the difference between a “car” and a “truck.” Once these markers have been laid on the image, programmers write code to determine the machine’s course of action given these on-screen parameters. For example, once the program marks the “traffic light” as “red,” the car is programmed to apply the brakes by a certain amount. Now imagine performing this image analysis continuously, on a scene that keeps changing as the car drives down the street. This is what companies such as Uber and Zoox have to deal with in order to deliver a product that can handle the many perils of the open road.
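The decision step described here, with detections coming in and a control action going out, can be sketched as a simple rule lookup. The labels and actions below are hypothetical illustrations only; production self-driving stacks use far more elaborate planning and control logic.

```python
# Hypothetical sketch: turn one frame's computer-vision detections
# into a driving action. Real systems run this loop continuously
# on live camera frames as the scene changes.

def decide_action(detections):
    """Map (label, attribute) detections for one frame to a control action."""
    for label, attribute in detections:
        if label == "traffic light" and attribute == "red":
            return "apply brakes"
        if label == "pedestrian":
            return "apply brakes"
    return "maintain speed"

frame = [("car", None), ("truck", None), ("traffic light", "red")]
print(decide_action(frame))  # → apply brakes
```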
Back to Uber: its ultimate goal is to deliver a self-driving product that eliminates the need for a human driver, instead “employing” much cheaper autonomous vehicles. This way, instead of continuously paying for human labor, the company can pay a fixed cost for the self-driving cars and then a significantly lower cost of fueling them.
Currently, the application of computer vision to self-driving cars is not quite complete: much is left to be done to improve the safety of the software, the accuracy of the image analysis, and its incorporation into modern roads. But lawmakers and city planners have started to plan for the incorporation of computer vision technologies into our daily lives. The Silicon Valley-based company Zoox already has a fully autonomous vehicle that has been cleared for testing in a limited capacity.
AI is an extremely exciting field—its implications in our daily lives are far-reaching and will only grow in the next decade. So next time you are waiting in traffic, look around and see whether there is a driver in the car next to you.
By Elifsu Gencer
If one were asked to list some of post-impressionist painter Vincent Van Gogh’s most famous pieces of artwork, the still-life Sunflowers series is surely among the first to come to mind. While living in France during the 1880s, Van Gogh painted his first set of sunflowers, depicted lying across a tabletop, and later his second, more well-known bouquet of sunflowers propped up in a vase. These works are known for their stunning yellow hues, aptly referred to as “Van Gogh Yellow,” which Van Gogh began to use more frequently during the latter half of his career. While the exact reason for this artistic flair remains a mystery, there has been much speculation about Van Gogh’s affinity for the color yellow. Proposed explanations include a medical condition known as xanthopsia, or yellow vision, caused by glaucoma; thujone poisoning from absinthe consumption that skewed his perception of colors; or effects of the prescription drug digitalis (derived from foxglove) that he was taking to treat his temporal lobe epilepsy. Whatever the reason may be, a tragic reality has recently emerged that surely would have deeply saddened Van Gogh: though not yet visible to the naked eye, the once-vibrant yellow pigment that Van Gogh cherished so much has begun to fade to a dull brown.
A team of researchers from the University of Naples Federico II has undertaken the mission of investigating this unexpected phenomenon by studying Van Gogh Yellow, formally a family of lead chromate pigments that are often mixed with lead sulfate to create different yellow shades. While the “browning” observed in the Sunflowers series was revealed to be due to pigment degradation, a closer spectroscopic analysis of the affected areas showed high sulfur contents. This finding in turn led Muñoz-Garcia’s team to determine the particular mechanism underlying the degradation, which remarkably stems from the pigment’s own composition. Specifically, the mixture of chromates and sulfates makes Van Gogh Yellow unstable and has possibly resulted in the separation of the compounds. The separated clusters of sulfates are thought to absorb UV light, providing sufficient energy for the reduction of the chromate ions into chromic oxide, a green compound that contributes to the browning effect seen in the dulling yellow sunflowers. Unfortunately, because the pigment composition itself is unstable, the color degradation in the Sunflowers series is currently not preventable.
A 2018 study by Frederik Vanmeert of the University of Antwerp also examined the composition of Van Gogh Yellow using macroscopic x-ray powder diffraction imaging, a new non-invasive method that can reveal specific chemical distributions at the macroscale. This analysis, in line with Muñoz-Garcia’s findings, supported the observation that areas rich in the mixture of chromate and sulfate were prone to darkening of the yellow pigments. Although Vanmeert agreed that the pigment degradation cannot be prevented at this point in time, he suggested that the process could be slowed by establishing optimal lighting conditions with minimal to no UV light. However, while UV light is known to damage many forms of art, ensuring its elimination from museums can prove especially difficult because it cannot be detected by the human eye. In addition, natural UV emission during the daytime makes it challenging to determine just the right amount of light that would allow visitors to enjoy the artwork on display without simultaneously damaging it. With that said, curators at the Van Gogh Museum have already chosen to lower the lighting of the museum and to further review the lighting conditions of artwork displays. Although determining the optimal conditions is a painstaking task with countless variables to consider, it is necessary to preserve not only the iconic Sunflowers series but also other paintings and forms of art, such as artifacts. The studies by the Muñoz-Garcia and Vanmeert groups and the subsequent development of non-invasive imaging techniques provide strong scientific foundations for optimizing conservation strategies that are also applicable to other artworks.