By Elaine Zhu
Illustration by Lizka Vaintrob
For the average college student, each week is a battle to balance a boatload of classes, homework, a social life, and other extracurricular activities. With so much to manage, it is no surprise that, according to an assessment by the American College Health Association, 63% of college students felt overwhelming anxiety in 2018. The trend does not end after college, either: some 40 million adults in the United States have an anxiety disorder. On top of these responsibilities, sleeping the full, recommended eight hours a night is also essential to a student’s schedule. However, the Centers for Disease Control and Prevention estimates that at least 35% of adults, college students included, are not getting enough sleep at night. Research has shown that lack of sleep and anxiety disorders often go hand in hand: sleep disruption is linked with the development of anxiety disorders and is also present in post-traumatic stress disorder, panic disorder, and social anxiety disorder.
A recent study conducted by researchers at the University of California, Berkeley showed a strong neural link between sleep and anxiety. The study, published in Nature Human Behaviour, tied the anxiety that follows sleep disturbance to activity in particular areas of the brain. The results also demonstrated that non-rapid eye movement, or slow-wave, sleep actually has an anxiolytic effect: it reduces anxiety in the brain and in the body.
The researchers set out to answer three questions about how sleep loss shapes anxiety.
To answer these questions, the researchers conducted experiments using functional MRI and polysomnography, which records brain wave activity, blood oxygen levels, heart rate, and breathing. They asked adults to view emotional video clips and then scanned their brains using functional MRI, once after a full night of sleep and again after one night without sleep. They measured each participant’s anxiety level after each night with a questionnaire and used the functional MRI results to identify the most active areas of the brain while the participants watched the clips. The scans showed that after the sleep-deprived night, participants displayed little activity in the medial prefrontal cortex, a region typically associated with keeping anxiety under control. Furthermore, after the night of sleep deprivation, 50% of the participants reported anxiety levels that exceeded the clinical threshold for anxiety disorders. One of the researchers, Matthew Walker, PhD, stated, “Without sleep, it’s almost as if the brain is too heavy on the emotional accelerator pedal, without enough brake.”
The researchers also found that after the night with undisturbed sleep, the subjects’ anxiety levels dropped significantly. This drop correlated with more slow-wave or non-rapid eye movement (NREM) sleep, which is a sleep stage when blood pressure is typically low and breathing and heart rates are slower and more regular. These results point to NREM sleep being an anxiolytic, or something that can reduce anxiety.
The researchers also found that even subtle reductions in sleep can increase anxiety levels. Their results showed that variation in sleep quality from one night to the next can elevate anxiety the following day. This finding builds upon previous research showing lower sleep efficiency in people with anxiety disorders.
Previous research has also demonstrated a link in the opposite direction—anxiety disorders can also affect sleep and cause sleep deprivation, creating a cycle of increasing anxiety and sleep deprivation. As new findings emerge on the relationship between sleep and anxiety, it’s important that society emphasizes these results to students and adults. If people do not recognize the importance of sleep, it will only be harder to reduce the prevalence of anxiety in adults today.
Learning From Mistakes: How The Decoupling Between Confidence and Action in Patients With OCD Affects The Decision-Making Process
By Victoria Comunale
If one is not confident in their knowledge before a test, they usually study. This trend, that one’s confidence and one’s actions are often linked, holds for countless other scenarios involving decision-making and learning. Someone with obsessive-compulsive disorder (OCD), however, may find themselves in striking defiance of the trend, often to their detriment. Such scenarios range from washing one’s hands countless times despite knowing it isn’t improving one’s hygiene, to checking in on loved ones excessively despite countless reassurances of their safety. The compulsive actions of people affected by OCD are disproportionate, especially relative to their apparent confidence level, leading scientists to postulate that such individuals’ actions and confidence may not be linked during learning.
In 2017, researchers at the University of Cambridge published a study that revealed a dissociation between the actions and confidence of patients with OCD. In the study, researchers designed an experiment to test how one’s confidence affects one’s actions in a decision-making process by assessing participants’ performance in an online game. The game involved asking participants to use a bucket to catch particles that came from a central source. The bucket was placed along a ring around the source, and the particles would be found in a general area for a period of time before moving to another area on the ring. The position of a particle generally followed a pattern, but once in a while a particle would appear in a location that deviated from the general pattern.
The method of assessing the correlation between confidence and action relies on the difference in responses to the prediction error associated with these randomly placed particles. Prediction error refers to a mistake made about the occurrence of an event based on previous data. Here, the prediction error arises when a participant places the bucket according to the general pattern but the particle lands in a deviating, random location.
Returning to the correlation of confidence and actions, when one is uncertain about a situation, new information can be very influential. In contrast, if someone is very confident, they are less influenced by new information. Researchers found that patients with OCD were more likely to place the bucket where the last particle landed in comparison to the control group—in other words, they overreacted to their prediction error. The groups did not differ significantly in how many particles were caught, but OCD patients’ prediction error prompted them to overcompensate in the subsequent placement of the bucket. Such patients moved the bucket in a way that overemphasized the prediction error, rather than following the general trend. Participants in the control group did not pay significant attention to such changes unless a substantial shift took place in the general trend of where the particles continued to land.
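The difference between tracking the general trend and chasing the last outcome can be pictured with a toy delta-rule learner. This is a minimal sketch with invented numbers, not the study’s actual computational model: the learning rate sets how strongly each prediction error pulls the next guess toward the most recent particle.

```python
# Toy delta-rule sketch (illustrative numbers, not the study's model).
# A learner updates its guess about where the next particle will land;
# the learning rate controls how strongly each prediction error pulls
# the next guess toward the last observed outcome.

def update(belief, outcome, learning_rate):
    """Shift the belief toward the outcome by a fraction of the error."""
    prediction_error = outcome - belief
    return belief + learning_rate * prediction_error

# Particles cluster near position 10; one outlier lands at 40.
outcomes = [10, 11, 9, 40]

low, high = 10.0, 10.0  # both learners start with the same belief
for outcome in outcomes:
    low = update(low, outcome, learning_rate=0.2)   # tracks the trend
    high = update(high, outcome, learning_rate=0.9) # chases the last outcome

# After the outlier, the high-rate learner's next "bucket placement"
# sits near the random particle, mirroring the overreaction to
# prediction error seen in the OCD group; the low-rate learner stays
# close to the underlying pattern.
print(round(low, 1), round(high, 1))  # → 16.0 36.9
```

In this caricature, the control group behaves like the low-rate learner, while extreme overcompensation corresponds to a learning rate near one.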
An interesting outcome was that confidence ratings were very similar between patients with OCD and the control group. This suggests that both groups developed a sense of what was going on, yet participants with OCD did not use that knowledge to inform their bucket placement. The computed learning rates indicated that, in extreme cases, some OCD patients disregarded the accumulated information and updated the position of the bucket based solely on the most recent outcome.
Ultimately, it was found that the degree of decoupling between confidence and action is correlated with the severity of OCD in patients, which suggests that decoupling is an underlying factor behind the irrational behavior that characterizes OCD. In the future, studies like this may help improve our understanding of human learning for those with OCD and thus better support individuals with the disorder.
By Elaine Zhu
Australia has been in the global spotlight as bushfires have ravaged millions of acres of land since late 2019, and the situation only seems to be getting worse. Long periods of severe drought and temperatures up to 105.6 degrees Fahrenheit have exacerbated the bushfires. Not only have more than eleven million acres of forest and parkland been damaged, but the fires have also killed at least thirty-three people and nearly five hundred million animals, causing extensive damage to Australian ecosystems.
Wildfires are not an uncommon occurrence in Australia. They recur every fire season, burning millions of acres each year. In fact, some species of Australian native plants rely on these fires to regenerate, and Australians often use controlled fires for land management and agriculture. Bushfires need fuel and oxygen to ignite, supplied by forests, greenery, and shrubs, while the speed and intensity of a fire are influenced by other factors, including temperature, soil condition, moisture levels, and wind speed. However, the current Australian bushfires are on a completely different level from previous isolated wildfire events: they have burned almost eight times more land than the 2018 fires in California, the worst in that state’s history.
So what exactly is causing these increasingly dangerous wildfires? To many, the Australian fires are a clear indication of the relationship between climate change and worsening extreme weather. Research from the 2017 Climate Science Special Report has traced decreases in soil moisture, driven by higher temperatures, back to human influence, and warns that this drying will only worsen fire risk. Stefan Rahmstorf, a lead author of the United Nations Intergovernmental Panel on Climate Change’s Fourth Assessment Report, states that “due to enhanced evaporation in warmer temperatures, the vegetation and the soils dry out more quickly, so even if the rainfall didn’t change, just the warming in itself would already cause a drying of vegetation and therefore increased fire risk.” As the climate warms and conditions dry out, fires start much more easily, and the same dryness helps them spread to other areas. This drying trend also affects the amount of rainfall Australia receives. The Climate Council released a press briefing explaining that the regions of Tenterfield and Stanthorpe in Australia received 77 percent less rainfall than the usual annual average. Overall, the southeastern regions of Australia have seen a 15 percent decline in rainfall during the autumn months and a 25 percent decline in April and May. Rainfall is especially important during bushfire season, as lower rainfall lengthens the duration of the bushfires.
These bushfires have not only harmed the people and animals of Australia, but will also have lasting effects on global climate change, one of the factors that exacerbated the Australian bushfires in the first place, creating a positive feedback loop. Mark Parrington, senior scientist at the European Centre for Medium-Range Weather Forecasts, states that from September 2019 to January 2020, “the wildfires released around 400 million tons of CO2, which is roughly the same amount the United Kingdom emits in an entire year.” The accumulation of carbon dioxide in the atmosphere will only exacerbate climate change and its global effects, including the melting rate of glaciers. Soot from the fires can settle on glaciers, tinting their surface an almost caramel brown. This darkening decreases the reflectivity of the ice, which makes the glaciers melt even faster, raising ocean temperatures and decreasing the amount of sunlight reflected back into the atmosphere.
With the detrimental effects that the bushfires have had on Australia, it wouldn’t be too far of a leap to assume that Australian leadership might take action to reduce climate change and prevent more bushfires from happening. However, the Australian Prime Minister, Scott Morrison, has had a history of opposing action to reduce carbon emissions and has downplayed the effects of climate change on the intensity of the Australian bushfires. In the past, he has spoken out against taxing carbon emissions and has promoted and protected coal mining. The Australian bushfires are just one of the consequences of anthropogenic effects on climate change and the environment. In order to decrease the severity of bushfires and the rise of global climate change, we must recognize the harmful effects that climate change has caused and will continue causing to future generations. We must take action to reduce carbon emissions worldwide.
By Clare Nimura
A hundred years ago, you would not have much hope of survival if one of your organs suddenly stopped working. Today, however, there exists a vast system of organ donation. Surgeons perform complex medical procedures to remove healthy organs from willing donors and to replace missing or damaged ones in recipients. Problem solved! But not quite….
There are currently more than 113,000 people on the national waiting list for organ transplants, with a new person added to the list every 10 minutes. These patients have diseases like cardiomyopathy, diabetes, cystic fibrosis, or cirrhosis and are in need of working replacements for their kidneys, livers, hearts, lungs, or other vital organs. The number of people in need of transplants greatly outweighs the number of transplants performed each year (less than 40,000), and about 20 people die each day while waiting for a new organ. How did such a large discrepancy arise? The answer is twofold: under-registration of donors and a broken donation system.
On the surface, organ donation seems simple—just replace a dysfunctional organ with a healthy one—but, in reality, there are many more variables at play. Most people are unaware that only 3 out of 1000 people die in a way that leaves their organs viable for donation, or that a single donor can save up to eight lives if they are able to donate their heart, lungs, liver, pancreas, kidneys, and intestines. If you do the math, this does not yield nearly enough viable organs for the thousands on the waiting list. Additionally, though 95% of adults in the United States support organ donation, only 58% are actually registered. These dismal statistics are compounded by a highly corrupt and ineffective system for organ recovery, which wastes thousands of viable organs each year. How did this critical system end up in such a terrible condition?
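The "do the math" claim above can be sketched as back-of-envelope arithmetic. The figures below combine the numbers already cited with one outside assumption (roughly 2.8 million US deaths per year), and applying the registration rate directly to deceased donors is a simplification (families can also authorize donation), so treat this as an illustration only.

```python
# Back-of-envelope sketch of the organ-supply arithmetic.
# The annual US death count is an outside assumption; the other
# figures come from the statistics cited in the article.

deaths_per_year = 2_800_000   # approximate annual US deaths (assumption)
viable_rate = 3 / 1000        # deaths that leave organs viable
registered = 0.58             # share of adults registered as donors
organs_per_donor = 8          # best case: heart, lungs, liver,
                              # pancreas, two kidneys, intestines

potential_donors = deaths_per_year * viable_rate
max_organs = potential_donors * registered * organs_per_donor

print(round(potential_donors))  # → 8400 potentially viable donors
print(round(max_organs))        # → 38976, far short of 113,000+ waiting
```

Even under these generous best-case assumptions, the ceiling lands near the roughly 40,000 transplants actually performed each year, which is why every wasted viable organ matters.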
In a perfect world, an organ donation would be orchestrated as follows: a potential donor would be identified, proper consent would be obtained, and the organ would be harvested and stored in sterile packaging to be transported to the recipient’s transplant hospital, where the next patient on the waiting list would finally get their new organ. In reality, this is far from the sequence of events. Not only are there channels by which the very wealthy can get added to multiple waiting lists, there is corruption in the organ donation process itself, which results in tens of thousands of healthy organs left untouched.
Organ Procurement Organizations (OPOs) are, by federal law, the only group that can recover organs from deceased donors for transplantation. There are 58 such groups in the United States; they are non-profit contractors who are responsible for coordinating donations. The problem is that not only do these groups hold monopolies over their designated areas of service, they also follow an internal evaluation system that does not incentivize the pursuit of every viable organ. One recent study demonstrated that with reforms to increase efficiency and effectiveness of the organ donation system, there is the potential to recover up to 28,000 more organs per year and to save billions of dollars in the process. When OPOs fail to show up, people die.
How can we heal this broken system? First, you can sign up to be an organ donor at https://www.organdonor.gov/register.html. And second, you can encourage your congressional representatives to push for transparency in OPO metrics and for improved accountability. Removing perverse incentives and initiating external audits are two potential improvements. At this moment, there are viable lungs, hearts, livers, and other organs ready for transplant, and there is a sick patient somewhere who has been waiting for those organs, maybe for years, but may never get them because of the corruption in the organ donation process. This should not be the case, and there are simple solutions that would improve the situation greatly. Every small improvement is worth it; every organ recovered is another life potentially saved.
By Vivian Liu
The software revolution has taken the world by storm. Many things that we see or use in our everyday lives are automated: from data processing to virtual assistants, computer programs are helping us complete tasks that would otherwise be extremely resource-intensive.
Questions that would once have required significant human capital can now be pipelined to a computer program or fed to a function for a quick answer. Software is particularly helpful in allowing us to answer large numbers of objective questions en masse. For example, TurboTax is a well-established computer program that combines a user interface with computation to help humans calculate their taxes more efficiently.
Although such advances have clearly benefited society, it is also important to take a step back and scrutinize the demographics of the programmers behind the code. The technology industry suffers from an acute lack of diversity: only 10% of artificial intelligence (AI) researchers at Facebook are women, and just 4.5% of workers at Google are African-American. Women currently hold only 25% of all tech jobs in Silicon Valley, and that share has been declining over the last few years. The issue extends to the degrees awarded each year: according to a study conducted by the U.S. Equal Employment Opportunity Commission, men receive at least 70% of all degrees in computer science, mathematics, and engineering each year. In the 1980s, women earned 37% of all computer science degrees; today, that number has dropped to 18%.
There are many reasons for such a stark disparity in gender and racial diversity in the technology sector. Part of the inequality stems from unequal educational opportunities: from a young age, women and minority groups generally get less exposure to these fields, and are therefore at an unfair disadvantage against those who regularly participate in and learn about STEM-related activities. Even those who do receive degrees in tech-related fields are discouraged from pursuing a career in a field that is so largely male- and white-dominated. In addition, women in the technology industry are just as subject to the “80 cents to the dollar” pay inequity that permeates the American workplace.
This lack of ethnic and gender diversity causes issues that compound the marginalization of minority groups. Because sexism in the workplace is as pervasive as the technology itself, a significant portion of the population is being left out of the field’s development and progress. These demographic disparities have especially large implications for artificial intelligence, a branch of computer science whose presence has been growing steadily.
In particular, one of the tasks that can be handed to AI is answering subjective questions. Whereas a tax calculation system can punch numbers into an objective function and get an instantaneous answer, a machine answering a subjective question must learn to simulate how a human being would answer it. This requires AI: code that can be trained through data processing to perform tasks that cannot be reduced to a simple calculation.
For example, there is no easy mathematical function that can allow a machine to perform the subjective job of an application reviewer. Instead of a straightforward operation, the goal of a programmer would be to create a machine to simulate the reviewer as closely as possible. By inputting data consisting of a human application reviewer’s decisions under different conditions, a machine can be taught to model a human’s actions in reviewing an application.
Amazon famously set about creating such a program in 2014. Programmers gathered data from past hires: given the components of a resume (university degrees, GPA, experiences, and so on) along with the human reviewer’s hiring decision, a machine learning program built a complex mathematical model that output a score for each applicant’s resume. In principle, this program would score each application based on its strength so that human reviewers could automatically screen out applications below a certain threshold and give special consideration to those above it. However, implicit biases in the human reviewer data used to build the model tended to discriminate against women: most alarmingly, an occurrence of “women’s college” in an applicant’s resume would automatically reduce the score. In fact, anything involving “women,” like the phrase “women’s chess club,” resulted in a score reduction. This is an especially concerning occurrence of data reflecting harmful biases being implemented and magnified in AI applications.
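How biased labels turn into a biased score can be illustrated with a toy keyword scorer. The data and scoring rule below are entirely invented for illustration, not Amazon's actual model: a word's weight is simply the hire rate among resumes containing it, minus the overall hire rate.

```python
# Toy illustration (invented data, not Amazon's system): a scorer that
# learns keyword weights from historical hiring decisions. Because the
# historical reviewers rejected resumes mentioning "women's", the
# model learns a negative weight for that word and reproduces the bias.

from collections import defaultdict

# (resume keywords, hired?) -- fabricated history from a biased reviewer
history = [
    ({"chess", "club"}, True),
    ({"engineering", "degree"}, True),
    ({"women's", "chess", "club"}, False),
    ({"women's", "college", "degree"}, False),
    ({"college", "degree"}, True),
]

# Weight per word: hire rate among resumes containing it minus base rate.
overall = sum(hired for _, hired in history) / len(history)
counts = defaultdict(lambda: [0, 0])  # word -> [appearances, hires]
for words, hired in history:
    for w in words:
        counts[w][0] += 1
        counts[w][1] += hired

weights = {w: hires / n - overall for w, (n, hires) in counts.items()}

def score(resume_words):
    """Sum the learned weights of a resume's keywords."""
    return sum(weights.get(w, 0.0) for w in resume_words)

# "women's" gets a negative weight purely because of biased labels,
# so otherwise-identical resumes are ranked lower when it appears.
print(weights["women's"] < 0)                                       # → True
print(score({"women's", "chess", "club"}) < score({"chess", "club"}))  # → True
```

Nothing in the code mentions gender explicitly; the discrimination is inherited entirely from the historical labels, which is exactly why such bias is easy to magnify and hard to notice.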
In addition to this application-reviewing program, another example of this problem is facial recognition software. The same computer program often recognizes white male faces more accurately than the faces of dark-skinned individuals. This accuracy gap between fair-skinned and dark-skinned individuals does not necessarily come from bad intentions; it is an unfortunate byproduct of a skewed set of programmers and, therefore, a skewed set of training data and code.
People are recognizing the magnitude of the issue and many are fighting to remedy this problem. An example is Stanford AI Professor Fei-Fei Li’s efforts to encourage young women to pursue AI-related careers—in 2015, she founded a summer program called “AI4ALL” which draws girls from all over the world to study computer science at Stanford University. In addition, outreach programs such as STEM Starters at Columbia University are connecting with schools in the inner-city to spark interest among students in STEM, helping to equip the next generation to overcome the imbalance in the tech industry.
This is AI’s diversity crisis. The computer programs that help us with our everyday lives have real programmers behind them, and this community suffers from an acute lack of diversity. Extremely sensitive software, such as surveillance and facial analysis tools, is currently created by a workforce that is predominantly white and male. Without a diversification of the perspectives behind computer code, the same biases and inequities that affect society today will be magnified and perpetuated by computer programs. As AI continues to advance and occupy a larger presence in our lives, continuing this trend of low diversity will have dire consequences for society.
By Victoria Comunale
Sleep and memory-formation share a fickle relationship. We are told to get plenty of rest before a test because sleep plays a crucial role in learning by aiding memory consolidation. Yet, sometimes, after a night of rest, we can completely forget about something that was on our minds the night before. Sleep’s role in memory consolidation and memory loss has been explored by neuroscientists before, but the balance between these two dueling occurrences, as well as the role of the brain waves in this phenomenon, still remain somewhat of a mystery.
Researchers at the University of California, San Francisco (UCSF) recently published a study examining the role of different kinds of brain waves in memory consolidation and forgetting in rats. In this experiment, the researchers trained rats to control a feeding tube, a skill the rats learned gradually and in which memory consolidation played a crucial role. Successfully completing the task required a rat to move the tube from point A to point B within 15 seconds. Since this learning process involved the motor cortex, the researchers studied the brain waves in this area of the brain during non-REM sleep.
There are several stages of non-REM sleep that can be differentiated by distinct patterns of brain wave activity. These stages are arranged from lightest to deepest sleep, and the waves themselves arise from the neural activity of a region of the brain known as the thalamus. The first stage of non-REM sleep, the lightest stage, is characterized by both alpha and theta waves. In stage two, theta waves dominate brain activity but are interrupted by brief bursts of higher frequency brain waves known as sleep spindles. The third and fourth stages of non-REM sleep, the deepest stages, also feature sleep spindles, but are predominantly characterized by delta waves and slow oscillation waves, which researchers were most interested in. In order to study the effects of these waves, they employed a recent technique that has grown in popularity in the field of neuroscience: optogenetics. Utilizing this technique, the researchers were able to directly interfere with the activity of neurons in the brain and interrupt the activity of the targeted brain waves, thus establishing a causal rather than correlational relationship.
Slow oscillation waves have long been suspected of playing a role in memory consolidation, yet the function of delta waves, which are more prevalent than slow oscillation waves, has remained unknown. When the researchers used optogenetics to interfere with the slow oscillation waves, the rats’ success rate on the tube-moving task was much lower than that of the control group. Interfering with delta waves had the opposite effect: the rats completed the task at a higher success rate than the control group. From these remarkable results, the researchers concluded that these dueling brain waves have opposite effects during sleep, an unexpected conclusion given that both kinds of waves occur in the same sleep stages. Memories, therefore, are both strengthened and weakened during non-REM sleep. The difference in memory consolidation between delta waves and slow oscillations is stark and undeniable, contradicting prior speculation that they might have similar functions.
The researchers also focused on sleep spindles, the specific bursts of activity associated with slow oscillations. These spindles are already known to play an essential role in sensory processing and long-term memory consolidation. Because delta waves, which are associated with memory loss, are more prevalent during non-REM sleep, the researchers postulated that the spindles, coupled with the effects of slow oscillation waves, help balance memory consolidation against the memory loss associated with delta waves.
These results can also have implications for phenomena observed in humans, especially aging. It has been experimentally demonstrated that slow-wave activity has lower amplitude in the elderly. If the results from the UCSF group can be translated to the human brain, this means that the elderly are less likely to have consolidation benefits during sleep because of a reduction in slow oscillations and spindles. It is still unclear if this newfound information concerning the dueling brain waves can be applied as a useful tool in strengthening human memories, considering that the weakening of less important memories is necessary for memories of higher importance to be strengthened. Nevertheless, these results help bring us one step closer in understanding the complicated activities of our brains.
By Elaine Zhu
Selfishness is the act of caring only about oneself, thinking only of getting ahead, and profiting at the expense of others. While most people think of selfishness as an acquired human trait, research has shown that some genes can also act in a selfish, even parasitic, manner to increase their own chances of being passed on to the next generation. These parasitic genes do not benefit the body’s overall fitness but instead methodically increase their own odds of transmission. Recent studies have investigated the mechanisms used by these selfish genes and their potential applications.
In a study conducted by Nicole Nuckolls, María Angélica Bravo Núñez, and Sarah Zanders, a gene called wtf4 in the fission yeast Schizosaccharomyces kambucha was identified as one of these selfish genes. The researchers discovered that wtf4 acts simultaneously as a poison and an antidote. During meiosis, the gene produces a precisely timed molecular poison that spreads to all developing gametes, both those that inherited the wtf4 gene and those that did not. Before the walls of the spores have formed, the poison reaches every offspring of the cell.
However, the gametes that carry the wtf4 gene also carry the antidote to the poison it created. The antidote is made in the later stages of development and division, after the spore walls have completely formed. The spores that inherited the gene are therefore protected from the poison, while those that did not are left with no defense and eventually die off, thereby selecting for the wtf4 gene. By tagging the proteins with fluorescent markers, Dr. Zanders and her colleagues identified the two RNA messages that wtf4 uses to encode the poison and the antidote. Imaging the cells as they underwent meiosis confirmed that the poison spread to every spore, while the antidote appeared only in the spores carrying wtf4.
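The poison-antidote logic can be sketched as a tiny simulation. This is an illustrative toy, assuming the textbook case of a heterozygous cell producing four spores, not the study's methodology:

```python
# Minimal sketch (illustrative, not the study's model) of how wtf4's
# poison/antidote trick distorts inheritance. A heterozygous cell
# makes four spores: all are poisoned, but only those inheriting wtf4
# also make the antidote and survive.

def meiosis_survivors():
    # Two of four spores inherit wtf4; the poison reaches all four.
    spores = [{"wtf4": True}, {"wtf4": True},
              {"wtf4": False}, {"wtf4": False}]
    # Only wtf4 carriers produce the antidote after spore walls form,
    # so only they survive the poison.
    return [s for s in spores if s["wtf4"]]

survivors = meiosis_survivors()

# Mendelian expectation: 50% of surviving spores carry the gene.
# With the poison/antidote mechanism, all of them do.
frequency = sum(s["wtf4"] for s in survivors) / len(survivors)
print(frequency)  # → 1.0
```

This transmission advantage, 100% instead of the Mendelian 50%, is what makes the gene "selfish" despite contributing nothing to the organism's fitness.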
Further research is currently being conducted to identify more selfish genes with the hope that the mechanisms for these genes can be applied in other scientific disciplines. These selfish genes can also provide insight into human infertility, since the “cheating” methods that such genes use can bias natural selection and even directly cause infertility. That is, a selfish gene could produce spores with an incorrect number of chromosomes, which can actually be detrimental to the survival of the daughter cells. Research has shown that chromosomal abnormalities are one of the leading explanations for why miscarriages happen in humans. In an interview conducted by the National Institute of General Medical Sciences, Dr. Zanders stated that “learning general principles about selfish genes in simple models will guide future searches for selfish genes that could be contributing to human infertility.”
Another exciting potential application of these selfish genes is to gene drives. Gene drives are a type of genetic engineering technology that can spread a desired set of genes to a population by increasing their probability of being inherited. These selfish gene mechanisms can potentially lead to the creation of a type of gene drive that may curb or even eradicate problematic insect populations, such as malaria or dengue fever transmitting mosquitoes. These selfish genes use a variety of methods to overpower other genes, and scientists may one day be able to utilize these mechanisms to ultimately improve the quality of human life.
By Vicky Communale
Rapidly melting glaciers, a loss of species diversity, and rising sea levels—the delicate balance of our planet is in chaos. While many deny these and other effects of global warming, or think that the consequences are distant and intangible, these ramifications are much more connected to our personal health than one would think. In that regard, a new study has found a very pressing concern—the effects of global warming will become directly intertwined with our neurological health in the near future.
Research conducted by a group at Dalhousie University in Canada predicts that in less than 80 years, over 96 percent of the world’s population will not have access to an essential component of brain health: docosahexaenoic acid. This lack of access is tied directly to the rising water temperatures brought on by global warming.
Docosahexaenoic acid, also known as DHA, is an omega-3 fatty acid with numerous benefits, including reducing heart disease risk and inflammation. A healthy, functioning brain requires a high level of DHA: studies conducted in the 1990s found that DHA is vital to the brain development of infants. For formula-fed infants, adding DHA to the formula was shown to improve cognitive and visual development. On the other hand, a lack of DHA has been implicated in numerous neurological disorders. One study, conducted by a group from the New England Medical Center in 2006, found that the brain tissue of people afflicted with Alzheimer’s disease had significantly lower levels of DHA than healthy human brain tissue. Furthermore, the scientists conducting this study found that people with high blood levels of this fatty acid were half as likely to develop dementia as those with lower levels.
Given that DHA has been established as a vital component of healthy brain development and function, the depletion of this compound would be massively detrimental to all of us. Because our bodies do not produce much DHA, we must obtain it through our diet. The most abundant source of DHA is fish, and fish acquire DHA by consuming algae. Algae change the proportion of different fatty acids in their cellular membranes in accordance with the surrounding temperature. When water temperatures are cold, algae need to keep their cell membranes flexible, and they do so by increasing the membranes’ proportion of polyunsaturated fatty acids, a group that includes DHA. On the molecular level, the multiple double bonds in these fatty acid tails prevent them from packing tightly with each other, thus preventing freezing. With rising temperatures, however, algae replace the polyunsaturated fatty acids with saturated fatty acids, which pack more tightly and so counter the heat, consequently reducing the presence of DHA. The Dalhousie University group therefore predicted that, due to warming environments, algal production of DHA will decrease by anywhere from 10 to 58 percent, depending on the region. As a result, the DHA found in fish will be significantly reduced, and our access to the compound will likewise be depleted.
While the study predicts that countries with small populations and prominent fishing industries, such as Norway and Chile, will still be able to maintain adequate access to DHA, the same cannot be said for other countries around the world. Countries with rapid population growth, such as China and Indonesia, are predicted to face severe shortages. Landlocked countries will also suffer greatly from the shortage, and the intake of DHA by their populations will fall below recommended levels.
There is some hope that future scientific endeavors may help alleviate the effects of the shortage. Several initiatives are trying to farm algae directly as a source of DHA, and others are trying to genetically engineer plants that produce high amounts of DHA, all with the goal of compensating for the damaging effects of a DHA shortage on our neurological health. At this time, however, it is unclear whether these endeavors will become a permanent solution. Even if they do, they will be only a band-aid, addressing one of the myriad problems that arise from global warming.
By Ellen Alt
Warning: This content contains a discussion of consent and sexual abuse.
Late 2017 proved crucial for the newest wave of consent awareness in America. The Harvey Weinstein case broke, and his firing had ripple effects on society and on survivors of sexual harassment, encouraging them to come forward and share their stories as the #MeToo movement grew. Although it is unclear whether coming forward will result in justice, as seen with the confirmation of Justice Brett Kavanaugh in October 2018, the country’s understanding of consent has shifted: sexual abusers such as Kevin Spacey, Matt Lauer, Bill Cosby, Jeffrey Epstein, and Olympic gymnastics doctor Larry Nassar have begun to be held accountable. Nassar was convicted, but all of medicine, not just one doctor, should apply standards of accountability and consent with the same vigor as the media industry. Medical procedures performed without consent demonstrate this need.
Imagine going through childbirth only to have your husband betray and abuse you, yes, betray and abuse you, by asking your doctor to do something to you without your knowledge, presumably for his own sexual satisfaction: the doctor adds an extra stitch or two upon reaching the vaginal laceration point of the 12-point inspection of the new mother. This is called the husband stitch. The typical inspection involves surgical restoration of urinary and bowel function, while the deeper suture of the husband stitch joins the perineal muscles, which are “most important for sexual function.” Although it is commonly believed that childbirth decreases heterosexual sexual pleasure for men because the woman’s tissue is loosened after giving birth, long-term studies have found otherwise: “Delivery method has no long-term effect on female sexual function,” which includes pleasure for both partners as well as the woman’s ability to conceive. Even if this misconception were true, an extra stitch would not be the right way to address it; if women find that their vaginal tissue is not as toned as it was before childbirth, pelvic floor physical therapy exercises are the best method of restoration. Doctors, husbands, and spouses should not exercise power and authority over a partner’s body, not only because the extra stitch causes pain to the woman without improving sexual pleasure, but also because the lack of consent is abhorrent. According to OB-GYNs and long-term studies, vaginal tissue is sure to be stretched after giving birth but will return to normal without an extra stitch, so why not ask for consent and avoid taking advantage of a woman’s body?
Aside from the crudely named husband stitch, another major yet under-discussed abuse of consent in medicine is the non-consensual pelvic exam. In training hospitals where fresh-out-of-medical-school doctors fulfill their residencies, supervising doctors sometimes ask these trainees to go against bioethics: women under anesthesia act as cadavers on which trainees practice pelvic exams. Pelvic exams provide an understanding of the vulva and internal gynecological organs via the external, speculum, bimanual, and rectovaginal sections of the exam. The nature of a medical exam includes “a blend of communication, respect, and technical skill,” whereas “the act of putting fingers into an orifice for the sake of education can actually do harm.” In a survey of five Philadelphia medical schools, 90% of students reported the practice of non-consensual pelvic exams. Some of these women under anesthesia are undergoing a gynecological procedure, but non-consensual pelvic exams are conducted during unrelated surgeries as well, such as stomach surgery. Regardless, these women have not consented to this procedure being conducted on their bodies. In medical ethics, autonomy is understood as “one’s ability to self-govern, to act in accord with one’s values, goals, and desires,” which includes self-governance over one’s own body. Should a patient choose to undergo a specific procedure, they are consenting to that procedure within their autonomy; but in this case, a pelvic exam is not part of the agreed-upon procedure, and the patient’s autonomy is violated. Although the technical skill of performing a pelvic exam may be necessary for trainees in the future, bioethicist Phoebe Friesen and other medical professionals argue that the practice does more harm than good. Non-consensual pelvic exams directly counter medical ethics and consent, especially amid the new wave of consent awareness in America.
Considering medical ethics, the case for consent in medicine should be an obvious one. However, legislation fails us: there is no law regulating the husband stitch, and non-consensual pelvic exams are legal in all but six states. In a field that revolves around the health of bodies, we should treat these bodies with respect, and should have been doing so even before the #MeToo movement normalized speaking out about sexual abuse. Medicine should adopt the same stringency that the media applies to large figures in entertainment and business. Individuals who have influence over women’s bodies, such as OB-GYNs post-birth, residents, and the doctors instructing residents, should be held accountable. Even in the absence of legislation, these individuals should contribute to a cultural shift toward greater respect for medical ethics and the autonomy of women and female bodies.
Friesen, Phoebe. “Educational pelvic exams on anesthetized women: Why consent matters.” Wiley Bioethics. vol.32. pp. 298–307. 2018.
Ghorat, F.; Esfehani, R. J.; Sharifzadeh, M.; Tabarraei, Y.; Aghahosseini, S. S. “Long term effect of vaginal delivery and cesarean section on female sexual function in primipara mothers.” Electron Physician. vol. 9, iss. 3. pp. 3991–3996. Mar. 2017.
Herman, Christine. “#MeToo? Some Hospitals Allow Pelvic Exams Without Explicit Consent.” Side Effects: Public Media. Jan. 8, 2019.
Planned Parenthood. “What is a pelvic exam?” Planned Parenthood: Health & Wellness. n. d.
Rupe, Heather, DO. “An OB Weighs in on the ‘Husband Stitch’.” WebMD: WebMD Blogs. Mar. 16, 2018.
The Daily. “When #MeToo Went on Trial.” The New York Times. Oct. 4, 2019.
By Vivian Liu
It may be hard to believe, but the ride-share conglomerate Uber actually operates in the red. After subtracting driver costs and overhead from revenue, Uber reported a net loss of $3 billion last year. The obvious question is: why? How can such a popular company be losing money?
In the modern world, artificial intelligence (AI) is everywhere: it’s in our phones, in our healthcare system, on our roads, and helping us explore new frontiers in space. Big Silicon Valley technology companies such as Google, Uber, and Facebook are racing to funnel resources into AI research and development.
The term “artificial intelligence” refers to intelligence—vision, speech recognition, and translation, among others—displayed by a machine. Whereas humans organically develop and store knowledge in neurons, machines have to “learn how to learn” through carefully coded instructions. The field of AI is relatively young; the term “artificial intelligence” was only coined in 1956. Since its inception, the field has experienced exponential growth—from the chatbots that automate the customer service experience, to Google Translate’s natural language processing algorithm, to AlphaGo’s legendary world champion-beating play, AI has accomplished some amazing feats. The seemingly infinite potential of AI has also taken pop culture by storm: the eerily realistic robo-women in Ex Machina and the lovable Baymax from Big Hero 6 display our dichotomous perceptions of AI in the media.
Specifically, a field of AI that has gained a lot of traction in recent years is computer vision (CV). Computer vision refers to the broad field of using machine learning to process and interpret images to provide useful information for humans. For example, the classic computer vision application is “training” a program to identify a cat from an image. When you see a picture of a cat, your brain doesn’t have to work very hard to make the association between pixels on a screen and the physical object. However, it is much harder for a computer system to establish this connection. There are two main issues to tackle—first, the system must identify the location of the cat in the image. Second, the system must be able to differentiate between a cat and other objects.
This is where big data comes in. Here, big data means an extremely large set of images, ranging from hundreds of thousands to millions, that humans have hand-labeled as positive (containing the object in question) or negative (not containing it). The programmer then divides the images into two groups, training data and test data, and writes code that outputs whether or not the computer thinks the object is in a given image.
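That hand-labeled split can be sketched in a few lines of Python. The filenames, labels, and 80/20 ratio below are illustrative, not taken from any particular system:

```python
import random

# Hypothetical hand-labeled dataset: each entry pairs an image (represented
# here by a filename placeholder) with a label, 1 = contains a cat, 0 = not.
labeled_images = [("img_%04d.jpg" % i, i % 2) for i in range(1000)]

# Shuffle, then hold out 20% as test data that the program is never tuned on.
random.seed(0)
random.shuffle(labeled_images)
split = int(0.8 * len(labeled_images))
train_data = labeled_images[:split]   # 800 images used to fit the model
test_data = labeled_images[split:]    # 200 images reserved for evaluation
```

Keeping the two sets disjoint is the point: accuracy measured on images the program was tuned on says little about how it will behave on new ones.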
After the programmer has set up the data, they adjust the model’s parameters to maximize the accuracy of the program on the training data, observing which parameter values bring about the highest rate of identification success. The goal is to maximize the number of images correctly labeled in the training set. Once the programmer is satisfied with the accuracy, the program is evaluated on the test data to see how well the model generalizes to data it has not seen. After a final round of adjustments, the program is ready to classify images that have not been pre-labeled by a human.
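A toy version of this tune-then-evaluate loop might look like the sketch below. Instead of real images, it uses a made-up numeric “cat detector score” per image, and the model has a single parameter, a decision threshold, tuned only on the training data:

```python
import random

random.seed(1)

def make_examples(n):
    """Generate (score, label) pairs: cats (label 1) tend to score higher."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        score = random.gauss(0.7 if label else 0.3, 0.15)
        data.append((score, label))
    return data

train_data = make_examples(800)
test_data = make_examples(200)

def accuracy(data, threshold):
    """Fraction of examples the threshold rule classifies correctly."""
    correct = sum((score >= threshold) == bool(label) for score, label in data)
    return correct / len(data)

# Sweep the model's one parameter to maximize accuracy on the TRAINING data.
best_threshold = max((t / 100 for t in range(101)),
                     key=lambda t: accuracy(train_data, t))

# Evaluate once on the held-out TEST data to estimate generalization.
test_accuracy = accuracy(test_data, best_threshold)
```

Real systems tune millions of parameters with gradient descent rather than a brute-force sweep, but the discipline is the same: fit on one set, judge on another.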
One of the most popular applications of computer vision is in the development of self-driving cars. Designing cars that can drive without human intervention depends on computer vision. For example, below is an image taken of a busy street that has been marked up by a computer program through computer vision:
The program has even been trained to tell the difference between a “car” and a “truck.” Once these markers have been laid over the image, programmers write code to decide the machine’s course of action given these on-screen labels. For example, once the program marks the “traffic light” as “red,” the car is programmed to apply the brakes by a certain amount. Now imagine performing this analysis continuously, on a scene that changes constantly as the car moves down the street. This is what companies such as Uber and Zoox have to deal with in order to deliver a product that can handle the many perils of the open road.
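The rule layer that sits on top of the detections can be as simple as checking which labels the vision system emitted for the current frame. The label names and actions below are illustrative, not from any real self-driving stack:

```python
# Hypothetical detections for one video frame: (label, attribute) pairs
# emitted by the vision system after marking up the image.
frame_detections = [("car", None),
                    ("pedestrian", None),
                    ("traffic light", "red")]

def choose_action(detections):
    """Map on-screen labels to a driving action with simple priority rules."""
    labels = set(detections)
    if ("traffic light", "red") in labels:
        return "apply_brakes"
    if any(label == "pedestrian" for label, _ in detections):
        return "slow_down"
    return "maintain_speed"
```

A production system would, of course, weigh distances, speeds, and uncertainty rather than bare labels, and it must rerun this decision many times per second as the scene changes.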
Back to Uber: its ultimate goal is to deliver a self-driving product that eliminates the need for a human driver, instead “employing” far cheaper autonomous vehicles. This way, instead of continuously paying for human labor, the company pays a fixed cost for the self-driving cars and then a significantly lower cost of fueling them.
Currently, the advancement of computer vision in the application of self-driving cars is not quite complete: much is left to be done to improve the safety of the software, the accuracy of the image analysis, and the incorporation into modern roads. But lawmakers and city planners have started to plan for the incorporation of computer vision technologies into our daily lives. The Silicon Valley-based company Zoox already has a fully autonomous vehicle which has been cleared for testing in a limited capacity.
AI is an extremely exciting field—its implications in our daily lives are far-reaching and will only grow in the next decade. So next time you are waiting in traffic, look around and see whether there is a driver in the car next to you.