The second-generation Titans Prometheus (son of Iapetus and a cousin of Zeus) and Epimetheus were initially not punished, unlike relatives such as Atlas who had fought in the Titanomachy, the war against Zeus, father of all the gods. Prometheus and his brother Epimetheus (“gifted with afterthought”) were tasked with forming man from water and earth (Prometheus shaped the figure while Athena breathed life into it), which Prometheus did, but in the process he became fonder of men than Zeus had anticipated. Zeus did not share Prometheus' feelings and wanted to keep men from having power, especially fire. Fire symbolizes knowledge, and knowledge and intelligence, together with arrogance (“Blessed are the poor in spirit”), would probably make Zeus and the other gods obsolete.
Prometheus cared more for man than for the wrath of the increasingly powerful and autocratic Zeus, so he stole fire from Zeus' lightning, concealed it in a hollow stalk of fennel, and brought it to man. He also stole skills from Hephaestus and Athena to give to man.
Zeus reacted by presenting to man Pandora (also called Anesidora, “sender up of gifts”), the first woman. While Prometheus may have crafted man, woman was a different sort of creature. She came from the forge of Hephaestus, beautiful as a goddess and beguiling.
Note that she is not like her Jewish counterpart: Eve was created to soothe Adam's loneliness, and to help him as a partner. Pandora, the first woman in Greek myth, was created as a punishment to mankind. Her name does not mean "giver of all gifts" - rather "she to whom all gifts were given" - the gods gave her beauty (Aphrodite), skill (Athena), while Hermes gave her a doglike (bitch-like?) mind and a thieving nature. "All (pantes) the gods gave her gifts (dora), a sorrow to men who live on bread." Zeus wanted to punish mankind for Prometheus' theft of fire - he decided to give them a "beautiful evil" (kalon kakon) "to pay for fire" (anti pyros). Hephaestus makes the woman out of earth and water, to look like a goddess. Thus Pandora - and through her all women, who are her descendants - has a beautiful exterior, but is worthless inside.
Prometheus, whose name means “forethought,” or “he who thinks ahead,” is a figure whom Steiner refers to as the Greek Lucifer. Prometheus awakened a consciousness in humans that was too dangerous in the eyes of Zeus, so Zeus had Prometheus chained to a rock in the Caucasus mountains. But Prometheus is patient, for he knows a secret that is not known to Zeus. In the future, Zeus will lie with a mortal woman, Io, and she will give birth to a son, who will start a line of descent leading to the birth of Hercules or Heracles, meaning “he who is called by Hera.” This great hero, whom Steiner indicates is a portent of Christ Jesus, will grow up to succeed Zeus in his position of authority as Law-giver in the heavens. Heracles will also kill the vulture that eats Prometheus' liver, and then liberate the great Greek Lucifer. (Tom Mellett, “Albert Einstein's Theory of Relativity as Rudolf Steiner's Final 'Riddle of Philosophy',” Journal for Anthroposophy, Number 60, Spring 1995, pp. 51-63)
Zeus presented her as a bride to Prometheus' brother Epimetheus (whom Prometheus, expecting retribution for his audacity, had warned against accepting gifts from Zeus), along with a box that they were instructed to keep closed. Epimetheus was dazzled by Pandora and forgot the advice of his prescient brother. Unfortunately, one day while her husband was away, Pandora opened the box. In the process, she unleashed all the evils now known to man; only Hope remained inside. No longer could man loll about all day, but he would have to work and would succumb to illnesses. So for the Greeks, woman was created as a punishment for man. What is the reason for the creation of woman according to the Bible? Why did God produce male and female versions of all animals but initially only a man, Adam? Why did he not also create Eve separately, and why did he use material from Adam? Both stories, the Greek and the biblical version, are mythological stories to explain the supposed superiority of men over women. The Greek version says that people were created by throwing stones; in the Bible, Eve and Adam had two sons, but how were their children produced?
Her act ends the Golden Age. She serves the same mythic function as Eve in Genesis. In both the Bible and Greek myth, humanity pays a price for knowledge: loss of innocence and peace, and loss of paradise. “While saving him from mental darkness, Prometheus brought to man all the tortures which accompany self-consciousness: the knowledge of his responsibility to the whole of nature; the painful results of all wrong choices made in the past, since free-will and the power of choice go hand in hand with self-consciousness; all of the sorrows and sufferings -- physical, mental and moral -- to which thinking man is heir. Prometheus accepted these tortures as inevitable under the Law, knowing that the soul can develop only through its own experience, willing to pay the price for every experience gained”. THEOSOPHY, Vol. 27, No. 12, October, 1939
Deucalion and Pyrrha, Metamorphoses Bk I: 367-415, Solis
Eventually Hercules rescued Prometheus, and Zeus and the Titan were reconciled. Zeus became angry enough with conditions on earth that he summoned other gods to a conference at his heavenly palace. Zeus (like the Christian God) decided to destroy humankind and to provide the earth with a new race of mortals more worthy of life and more reverent to him. Zeus feared that the destruction of humankind by fire might set heaven itself aflame, so he called for assistance from a god of the sea, and humankind was instead swept away by a great flood.
The great flood is a common mythological story in many cultures, for example the myth of Noah in the Old Testament, and probably there once was such a great flood in reality, an event that was incorporated into various mythological stories.
Meanwhile, Prometheus had sired the human Deucalion (some say as his son with Clymene or Celaeno), one of the noble couple whom Zeus had spared when he caused the creatures of the earth to be destroyed by a flood. Deucalion was married to his cousin Pyrrha, the daughter of Epimetheus and Pandora. During the flood, Deucalion and Pyrrha had stayed safe on a boat. When all the other evil humans had been destroyed, Zeus caused the waters to recede so that Deucalion and Pyrrha could come to land on Mount Parnassus. While they had each other for company, and they could produce new children, they were lonely and sought help from the oracle of Themis. Deucalion was told to “toss the bones of his mighty mother over his shoulder,” which he and Pyrrha understood to mean Gaia, the mother of all living things, the bones being rocks. Following the advice, they threw stones over their shoulders. From those thrown by Deucalion sprang men, and from those thrown by Pyrrha came women.
That is why people are called laoi, from laas, "a stone." [Apollodorus 1.7.2]
"...I remember in Plutarch's works, what is worth relating that I read there, that by the Pigeon sent forth of the Ark, in Deucalions flood, was shown, that the waters were sunk down, and the storms past...."
Then they had their own child, a boy whom they called Hellen and after whom the Greeks were named Hellenes...
Other children were Thyia and the Leleges, all forming the various groups of Greeks.
For Greeks, it is left as an exercise to find the group to which they belong and their corresponding root! For others, i.e. barbarians :-), see their corresponding myth (Noah,....,etc).
From myth back to history. Between 1900 and 1600 BC Greeks or Hellenes, a branch of the Indo-European speaking people, were simple nomadic herdsmen. They came from the east of the Caspian Sea and entered the Greek peninsula from the north in small groups. The first invaders were the fair-haired Achaeans of whom Homer wrote. The Dorians came 3-4 centuries later and subjugated their Achaean kinsmen. Other tribes, the Aeolians and the Ionians, found homes chiefly on the islands in the Aegean Sea and on the coast of Asia Minor. The land that these tribes invaded was the site of a well-developed civilization. The people who lived there had cities and palaces. They used gold and bronze and made pottery and paintings. The Greek invaders were still in the barbarian stage. They plundered and destroyed the Aegean cities. Gradually, as they settled and intermarried with the people they conquered, they absorbed some of the Aegean culture.
As for the etymology of Prometheus, the name may derive from Pramanthas, a member of the Vedic family of fire-worshipping priests of the fire god Agni.
Adam. In Greek this word is compounded of the four initial letters of the cardinal quarters:
Anatole, [Greek: anatole]. east.
Dysis, [Greek: dysis]. west.
Arktos, [Greek: arktos]. north.
Mesembria, [Greek: mesembria]. south.
The Hebrew word ADM forms the anagram of A [dam], D [avid], M [essiah].
Adam, how made. God created the body of Adam of Salzal, i.e. dry, unbaked clay, and left it forty nights without a soul. The clay was collected by Azrael from the four quarters of the earth, and God, to show His approval of Azrael's choice, constituted him the angel of death.—Rabadan.
Adam, Eve, and the Serpent. After the fall Adam was placed on mount Vassem in the east; Eve was banished to Djidda (now Gedda, on the Arabian coast); and the Serpent was exiled to the coast of Eblehh.
After the lapse of 100 years Adam rejoined Eve on mount Arafaith [place of Remembrance], near Mecca.—D'Ohsson.
Death of Adam. Adam died on Friday, April 7, at the age of 930 years. Michael swathed his body, and Gabriel discharged the funeral rites. The body was buried at Ghar'ul-Kenz [the grotto of treasure], which overlooks Mecca.
Apollodorus. The Library, Sir James G. Frazer (transl.), Harvard University Press, Cambridge, 1921, 1976.
MDH Fact Sheet/Brochure
Why Does My Water Smell Like Rotten Eggs?
HYDROGEN SULFIDE AND SULFUR BACTERIA IN WELL WATER
Hydrogen sulfide gas (H2S) can occur in wells anywhere in
Minnesota, and gives the water a characteristic "rotten egg" taste or odor.
This brochure provides basic information about hydrogen sulfide gas and sulfur
bacteria and discusses actions that you can take to minimize their effects.
What are the sources of hydrogen
sulfide in well water and the water distribution system?
Hydrogen sulfide gas can result from a number of different sources. It can
occur naturally in groundwater. It can be produced by certain "sulfur bacteria"
in the groundwater, in the well, or in the water distribution system. It can be
produced also by sulfur bacteria or chemical reactions inside water heaters. In
rare instances, it can result from pollution. The source of the gas is
important when considering treatment options.
Are sulfur bacteria or hydrogen sulfide harmful?
In most cases, the rotten egg smell does not relate to the sanitary quality of
the water. However, in rare instances the gas may result from sewage or other
pollution. It is a good idea to have the well tested for coliform bacteria and
nitrate, the standard sanitary tests. Sulfur bacteria are not harmful, but
hydrogen sulfide gas in the air can be hazardous at high levels. It is
important to take steps to remove the gas from the water, or vent the gas to the
atmosphere so that it will not collect in low-lying spaces, such as well pits,
basements, or enclosed spaces, such as well houses. Only qualified people who
have received special training and use proper safety procedures should enter a
well pit or other enclosed space where hydrogen sulfide gas may be present.
Are there other problems associated
with sulfur bacteria or hydrogen sulfide?
Yes. Sulfur bacteria produce a slime and can promote
the growth of other bacteria, such as iron bacteria. The slime can clog wells,
plumbing, and irrigation systems. Bacterial slime may be white, grey, black or
reddish brown if associated with iron bacteria. Hydrogen sulfide gas in water
can cause black stains on silverware and plumbing fixtures. It can also corrode
pipes and other metal components of the water distribution system.
What causes hydrogen sulfide gas to
form in groundwater?
Decay of organic matter such as vegetation, or chemical reactions with some
sulfur-containing minerals in the soil and rock, may naturally create hydrogen
sulfide gas in groundwater. As groundwater moves through soil and rock
formations containing minerals of sulfate, some of these minerals dissolve in
the water. A unique group of bacteria, called "sulfur bacteria" or
"sulfate-reducing bacteria" can change sulfate and other sulfur containing
compounds, including natural organic materials, to hydrogen sulfide gas.
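To make that conversion concrete, here is a representative overall reaction (an added illustration, not text from the brochure), with CH2O standing in for the generic organic matter that sulfate-reducing bacteria consume:

\[ \mathrm{SO_4^{2-}} + 2\,\mathrm{CH_2O} \;\longrightarrow\; \mathrm{H_2S} + 2\,\mathrm{HCO_3^{-}} \]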
How is hydrogen sulfide gas produced in
a water heater?
A water heater can provide an ideal environment for the conversion of sulfate to
hydrogen sulfide gas. The water heater can produce hydrogen sulfide gas in two
ways - creating a warm environment where sulfur bacteria can live, and
sustaining a reaction between sulfate in the water and the water heater anode.
A water heater usually contains a metal rod called an "anode," which is
installed to reduce corrosion of the water heater tank. The anode is usually
made of magnesium metal, which can supply electrons that aid in the conversion
of sulfate to hydrogen sulfide gas. The anode is 1/2 to 3/4 inches in diameter
and 30 to 40 inches long.
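As an added sketch of the electrochemistry (not from the brochure): the magnesium anode corrodes sacrificially, releasing electrons, and sulfate reduction can consume those electrons. Four magnesium atoms supply the eight electrons needed per sulfate ion:

\[ \mathrm{Mg} \;\longrightarrow\; \mathrm{Mg^{2+}} + 2e^{-} \]
\[ \mathrm{SO_4^{2-}} + 8e^{-} + 10\,\mathrm{H^{+}} \;\longrightarrow\; \mathrm{H_2S} + 4\,\mathrm{H_2O} \]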
How can I find the source of a hydrogen
sulfide problem, and what can I do to eliminate it?
The odor of hydrogen sulfide gas can be detected in water at a very low level.
Smell the water coming out of the hot and cold water faucets. Determine which
faucets have the odor. The "rotten egg" smell will often be more noticeable
from the hot water because more of the gas is vaporized. Your sense of smell
becomes dulled quickly, so the best time to check is after you have been away
from your home for a few hours. You can also have the water tested for hydrogen
sulfide, sulfate, sulfur bacteria, and iron bacteria at an environmental testing
laboratory. The cost of testing for hydrogen sulfide ranges from $20 to $50
depending on the type of test.
- If the smell is only from the hot water faucet, the problem is likely to be in the water heater.
- If the smell is in both the hot and cold faucets, but only from the water treated by a water softener and not in the untreated water, the problem is likely to be sulfur bacteria in the water softener.
- If the smell is strong when the water in both the hot and cold faucets is first turned on, and it diminishes or goes away after the water has run, or if the smell varies through time, the problem is likely to be sulfur bacteria in the well or distribution system.
- If the smell is strong when the water in both the hot and cold faucets is first turned on and is more or less constant and persists with use, the problem is likely to be hydrogen sulfide gas in the groundwater.
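For readers who like a compact summary, here is a minimal sketch that encodes the four-way checklist above. It is not part of the MDH brochure; the function name and boolean flags are invented for illustration:

```python
# Hypothetical sketch of the MDH odor-diagnosis checklist above.
# The function name and flags are invented for illustration.

def diagnose_h2s_source(hot_only: bool,
                        softened_water_only: bool,
                        smell_fades_after_running: bool) -> str:
    """Return the likely source of a rotten-egg odor, per the checklist."""
    if hot_only:
        # Odor only from hot taps points at the water heater.
        return "water heater"
    if softened_water_only:
        # Odor only in softened water points at the softener itself.
        return "sulfur bacteria in the water softener"
    if smell_fades_after_running:
        # Odor that fades as water runs points at bacteria in the
        # well or distribution system rather than the source water.
        return "sulfur bacteria in the well or distribution system"
    # A constant, persistent odor in both hot and cold water suggests
    # hydrogen sulfide gas dissolved in the groundwater itself.
    return "hydrogen sulfide gas in the groundwater"

print(diagnose_h2s_source(False, False, True))
```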
What can I do about a problem water heater?
Unless you are very familiar with the operation and maintenance of the water
heater, you should contact a water system professional, such as a plumber, to do the following:
- Replace or remove the magnesium anode.
Many water heaters have a magnesium anode, which is attached to a plug
located on top of the water heater. It can be removed by turning off the
water, releasing the pressure from the water heater, and unscrewing the
plug. Be sure to plug the hole. Removal of the anode, however, may
significantly decrease the life of the water heater. You may wish to
consult with a reputable water heater dealer to determine if a replacement
anode made of a different material, such as aluminum, can be installed. A
replacement anode may provide corrosion protection without contributing to
the production of hydrogen sulfide gas.
- Disinfect and flush the water heater
with a chlorine bleach solution. Chlorination can kill sulfur bacteria,
if done properly. If all bacteria are not destroyed by chlorination, the
problem may return within a few weeks.
- Increase the water heater temperature to
160 degrees Fahrenheit (71 degrees Celsius) for several hours. This
will destroy the sulfur bacteria. Flushing to remove the dead bacteria
after treatment should control the odor problem.
CAUTION: Increasing the water heater
temperature can be dangerous. Before proceeding, consult with the manufacturer
or dealer regarding an operable pressure relief valve, and for other
recommendations. Be sure to lower the thermostat setting and make certain the
water temperature is reduced following treatment to prevent injury from scalding
hot water and to avoid high energy costs.
What if sulfur bacteria are present in the
well, the water distribution system, or the water softener?
- Have the well and distribution system
disinfected by flushing with a strong chlorine solution (shock chlorination)
as indicated in the "Well Disinfection" fact sheet from the Minnesota
Department of Health (MDH). Sulfur bacteria can be difficult to remove once
established in a well. Physical scrubbing of the well casing, use of
special treatment chemicals, and agitation of the water may be necessary
prior to chlorination to remove the bacteria, particularly if they are
associated with another type of bacteria known as "iron bacteria". Contact
a licensed well contractor or a Minnesota Department of Health (MDH) well
specialist for details.
- If the bacteria are in water treatment
devices, such as a water softener, contact the manufacturer, the installer,
or the MDH for information on the procedure for disinfecting the treatment device.
What if hydrogen sulfide gas is in the groundwater?
The problem may only be eliminated by drilling a well into a different
formation capable of producing water that is free of hydrogen sulfide gas, or by
connecting to an alternate water source, if available. However, there are
several options available for treatment of water with hydrogen sulfide gas.
- Install an activated carbon filter.
This option is only effective for low hydrogen sulfide levels, usually less
than 1 milligram per liter (mg/L). The gas is trapped by the carbon filter until
the filter is saturated. Since the carbon filter can remove substances in addition to
hydrogen sulfide gas, it is difficult to predict its service life. Some
large carbon filters have been known to last for years, while some small
filters may last for only weeks or even days.
- Install an oxidizing filter, such as a
"manganese greensand" filter. This option is effective for hydrogen
sulfide levels up to about 6 mg/L. Manganese greensand filters are often
used to treat iron problems in water. The device consists of manganese
greensand media, which is sand coated with manganese dioxide. The hydrogen
sulfide gas in the water is changed to tiny particles of sulfur as it passes
through the filter. The filter must be periodically regenerated, using
potassium permanganate, before the capacity of the greensand is exhausted (see
the reaction sketch after this list).
- Install an oxidation-filtration system.
This option is effective for hydrogen sulfide levels up to and exceeding 6
mg/L. These systems utilize a chemical feed pump to inject an oxidizing
chemical, such as chlorine, into the water supply line prior to a storage or
mixing tank. When sufficient contact time is allowed, the oxidizing
chemical changes the hydrogen sulfide to sulfur, which is then removed by a
particulate filter, such as a manganese greensand filter. Excess chlorine
can be removed by activated carbon filtration.
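As an added illustration of the oxidation chemistry in the last two options (these equations are not in the brochure): both potassium permanganate and chlorine convert dissolved hydrogen sulfide to elemental sulfur, which the particulate filter then traps:

\[ 3\,\mathrm{H_2S} + 2\,\mathrm{KMnO_4} \;\longrightarrow\; 3\,\mathrm{S} + 2\,\mathrm{MnO_2} + 2\,\mathrm{KOH} + 2\,\mathrm{H_2O} \]
\[ \mathrm{H_2S} + \mathrm{Cl_2} \;\longrightarrow\; \mathrm{S} + 2\,\mathrm{HCl} \]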
Other related references that are available
from MDH include:
Iron Bacteria in Well Water
Sulfate in Well Water
Well Owner's Handbook
If you have any questions, please contact a
licensed well contractor, a reputable water treatment company, or a well
specialist at one of the following offices of the MDH:
Minnesota Department of Health
Well Management Section
PO Box 64975
St. Paul, Minnesota 55164-0975
Source: Minnesota Department of Health
Fact Sheet/Brochure "Why Does My Water Smell Like
Rotten Eggs? Hydrogen Sulfide and Sulfur Bacteria in Well Water"
According to the American Psychiatric Association, a phobia is an uncontrollable, irrational, and persistent fear of a specific object, situation, or activity. The fear experienced by people with phobias can be so great that some individuals go to extreme lengths to avoid the source of their fear. One extreme response to the source of a phobia can be a panic attack.
Every year, approximately 19 million Americans experience one or more phobias that range from mild to severe. Phobias can occur in early childhood, but usually are first evident between the ages of 15 and 20 years. They affect both genders equally, although men are more likely to seek treatment for phobias.
Research suggests that both genetic and environmental factors contribute to the onset of phobias. Specific phobias have been associated with a fearful first encounter with the phobic object or situation. The question still exists, however, whether this conditioning exposure is necessary or if phobias can develop in genetically predisposed individuals.
What is specific phobia?
Specific phobia is characterized by extreme fear of an object or situation that is not harmful under general conditions.
Examples may include a fear of the following:
- Flying (fearing the plane will crash)
- Dogs (fearing the dog will bite/attack)
- Closed-in places (fear of being trapped)
- Tunnels (fearing a collapse)
- Heights (fear of falling)
What are the characteristics of specific phobia?
People with specific phobias know that their fear is excessive, but are unable to overcome their emotion. The disorder is diagnosed only when the specific fear interferes with daily activities of school, work, or home life.
Approximately 19 million American adults ages 18 to 54, in a given year, have some type of specific phobia. There is no known cause, although they seem to run in families and are slightly more prevalent in women. If the object of the fear is easy to avoid, people with phobias may not feel the need to seek treatment. Sometimes, however, they may make important career or personal decisions to avoid a situation that includes the source of the phobia.
Treatment for specific phobia
There is currently no proven drug treatment for specific phobias; however, in some cases, certain medications may be prescribed to help reduce anxiety symptoms before someone faces a phobic situation.
When phobias interfere with a person's life, treatment can help, and usually involves a kind of cognitive-behavioral therapy called desensitization or exposure therapy. In this, patients are gradually exposed to what frightens them until the fear begins to fade. Relaxation and breathing exercises also help to reduce anxiety symptoms.
What is social phobia?
Social phobia is an anxiety disorder in which a person has significant anxiety and discomfort related to a fear of being embarrassed, humiliated, or scorned by others in social or performance situations. Even when they manage to confront this fear, persons with social phobia usually:
- Feel very anxious before the event/outing
- Feel intensely uncomfortable throughout the event/outing
- Have lingering unpleasant feelings after the event/outing
Social phobia frequently occurs with the following:
- Public speaking
- Meeting people
- Dealing with authority figures
- Eating in public
- Using public restrooms
What are the characteristics of social phobia?
Although this disorder is often thought of as shyness, the two are not the same. Shy people can be very uneasy around others, but they do not experience the extreme anxiety in anticipating a social situation, and they do not necessarily avoid circumstances that make them feel self-conscious. In contrast, people with social phobia are not necessarily shy at all, but can be completely at ease with some people most of the time.
Most people experiencing social phobia will try to avoid situations that provoke dread or otherwise cause them much distress.
Diagnosing social phobia
Social phobia is diagnosed when the fear or avoidance significantly interferes with normal, expected routines, or is excessively upsetting.
Social phobia disrupts normal life, interfering with career or social relationships. It often runs in families and may be accompanied by depression or alcoholism. Social phobia often begins around early adolescence or even younger. Approximately 15 million American adults ages 18 to 54 experience social phobia in a given year.
Treatment for social phobia
People who suffer from social phobia often find relief from their symptoms when treated with cognitive-behavioral therapy, or medications, or a combination of the two.
What is agoraphobia?
Agoraphobia is a Greek word that literally means "fear of the marketplace." This anxiety disorder involves the fear of experiencing a panic attack in a place or situation from which escape may be difficult or embarrassing.
The anxiety associated with agoraphobia is so severe that panic attacks are not unusual, and individuals with agoraphobia typically try to avoid the location or cause of their fear. Agoraphobia involves fear of situations such as, but not limited to, the following:
- Being alone outside his or her home
- Being at home alone
- Being in a crowd
- Traveling in a vehicle
- Being in an elevator or on a bridge
People with agoraphobia typically avoid crowded places like streets, crowded stores, churches, and theaters.
What are the characteristics of agoraphobia?
Most people with agoraphobia develop the disorder after first suffering a series of one or more panic attacks. The attacks occur randomly and without warning, and make it impossible for a person to predict what situations will trigger the reaction. This unpredictability of the panic causes the person to anticipate future panic attacks and, eventually, fear any situation in which an attack may occur. As a result, they avoid going into any place or situation where previous panic attacks have occurred.
People with the disorder often become so disabled that they literally feel they cannot leave their homes. Others who have agoraphobia do go into potentially "phobic" situations, but only with great distress, or when accompanied by a trusted friend or family member.
Persons with agoraphobia may also develop depression, fatigue, tension, alcohol or drug abuse problems, and obsessive disorders, making seeking treatment crucial. Approximately 1.8 million American adults ages 18 to 54 experience agoraphobia in a given year.
How would our lives be different if we could only buy foods native to New York State? This and similar questions are the focus of a museum-wide quest where students work in teams to gather data about the things they use each day. After collecting the data, each team creates a graphic representation of its findings and leads a discussion about interdependence around the globe.
Please divide your students into three different research teams and assign each team at least one chaperon or teacher. Each team will be going to a separate area of the museum to gather data: the “foods” team will work in Super Kids Market; the “fashions” team in the museum’s collections; and the “fads” team in TimeLab. After collecting data, each team will prepare a brief presentation to share its findings. All three groups will gather together for the presentations.
Lesson extensions for before or after your visit
The following activities are designed for your class to enjoy before or after your museum visit. Familiarizing students with the lesson concepts can enrich your museum experience.
How do the choices we make reflect the concept of globalization? Review maps of the United States and the world prior to your visit. Here’s one way to do that:
Post the four cardinal directions in your room: North, South, East, and West. Give each student an index card and the instruction to write the name of one of the 50 states on the card. When everyone is ready, ask students to stand where they think their state would be if the room was the U.S. Allow students to make adjustments to their positions after talking with each other. Provide a real map for students to use as a reference if needed. Once in position, ask students to describe their thinking in deciding where to stand. Do the same thing using continents, oceans, and countries with a map of the world.
Ask students to spend one week keeping a “globalization journal.” They should record each piece of evidence they see to suggest that globalization is occurring. This evidence may appear in items they have at home or have seen at stores; things they’ve heard about in news stories; music they listen to; food they eat at restaurants; holiday customs their family has adopted; or other sources. Some examples of what they might find include:
- imported cheese in the grocery store
- imported CDs in a music store
- a television program produced in another country
- a television news story about international business or another topic related to globalization
Post a large map of the world in the classroom. Invite students to collect objects and images of things they like that represent different countries around the world. Post the objects and images on the map and use the on-going project as a springboard for discussion each week. Have students generate their own questions about the data they collect and invite them to do research to further their understanding of globalization.
Recovery is a process, beginning with diagnosis and eventually moving into successful management of your illness. Successful recovery involves learning about your illness and the treatments available, empowering yourself through the support of peers and family members, and finally moving to a point where you take action to manage your own illness by helping others.
Untreated Mental Illness: A Needless Human Tragedy
Severe mental illnesses are treatable disorders of the brain. Left untreated, however, they are among the most disabling and destructive illnesses known to humankind.
Millions of Americans struggling with severe mental illnesses, such as schizophrenia, bipolar disorder, and major depression, know only too well the personal costs of these debilitating illnesses. Stigma, shame, discrimination, unemployment, homelessness, criminalization, social isolation, poverty, and premature death mark the lives of most individuals with the most severe and persistent mental illnesses.
Mental Illness Recovery: A Reality Within Our Grasp
The real tragedy of mental illness in this country is that we know how to put things right. We know how to give people back their lives, to give them back their self-respect, to help them become contributing members of our society. NAMI's In Our Own Voice, a live presentation by consumers, offers living proof that recovery from mental illness is an ongoing reality.
Science has greatly expanded our understanding and treatment of severe mental illnesses. Once forgotten in the back wards of mental institutions, individuals with brain disorders have a real chance at reclaiming full, productive lives, but only if they have access to the treatments, services, and programs so vital to recovery.
- Newer classes of medications can better treat individuals with severe mental illnesses and with far fewer side effects. Eighty percent of those suffering from bipolar disorder and 65 percent of those with major depression respond quickly to treatment; additionally, 60 percent of those with schizophrenia can be relieved of acute symptoms with proper medication.
- Assertive community treatment, a proven model treatment program that provides round-the-clock support to individuals with the most severe and persistent mental illnesses, significantly reduces hospitalizations, incarceration, homelessness, and increases employment, decent housing and quality of life.
- The involvement of consumers and family members in all aspects of planning, organizing, financing, and implementing service-delivery systems results in more responsiveness and accountability, and far fewer grievances.
Resources for Recovery
Immune Responses Jolted into Action by Nanohorns
The immune response triggered by carbon nanotube-like structures could be harnessed to help treat infectious diseases and cancers, say researchers.
The way tiny structures like nanotubes can trigger sometimes severe immune reactions has troubled researchers trying to use them as vehicles to deliver drugs inside the body in a targeted way.
White blood cells can efficiently detect and capture nanostructures, so much research is focused on allowing nanotubes and similar structures to pass unmolested in the body.
But a French-Italian research team plans to use nanohorns, a cone-shaped variety of carbon nanotubes, to deliberately provoke the immune system.
They think that the usually unwelcome immune response could kick-start the body into fighting a disease or cancer more effectively.
To test their theory, Alberto Bianco and Hélène Dumortier at the CNRS Institute in Strasbourg, France, in collaboration with Maurizio Prato at the University of Trieste, Italy, gave carbon nanohorns to mouse white blood cells in a Petri dish. The macrophage cells' job is to swallow foreign particles.
After 24 hours, most of the macrophages had swallowed some nanohorns. But they had also begun to release reactive oxygen compounds and other small molecules that signal to other parts of the immune system to become more active.
The researchers think they could tune that cellular distress call to a particular disease or cancer, by filling the interior of nanohorns with particular antigens, like ice cream filling a cone.
"The nanohorns would deliver the antigen to the macrophages while also triggering a cascade of pro-inflammatory effects," Dumortier says. "This process should initiate an antigen-specific immune response."
"There is still a long way to go before this interesting approach might become safe and effective," says Ruth Duncan at Cardiff University , UK . "Safety would ultimately depend on proposed dose, the frequency of dose and the route of administration," she says.
Dumortier agrees more work is needed, but adds that the results so far suggest that nanohorns are less toxic to cells than normal nanotubes can be. "No sign of cell death was visible upon three days of macrophage culture in the presence of nanohorns," Dumortier says.
Recent headline-grabbing results suggest that nanotubes much longer than they are wide can cause similar inflammation to asbestos. But nanohorns do not take on such proportions and so would not be expected to have such an effect.
Journal reference: Advanced Materials (DOI: 10.1002/adma.200702753)
Source: New Scientist /...
The knowledge, skills and understandings relating to students’ writing have been drawn from the Statements of Learning for English (MCEECDYA 2005).
Students are taught to write a variety of forms of writing at school. The three main forms of writing (also called genres or text types) that are taught are narrative writing, informative writing and persuasive writing. In the Writing tests, students are provided with a ‘writing stimulus' (sometimes called a prompt – an idea or topic) and asked to write a response in a particular genre or text type.
In 2013, students will be required to complete a persuasive writing task.
The Writing task targets the full range of student capabilities expected of students from Years 3 to 9. The same stimulus is used for students in Years 3, 5, 7 and 9. The lines in the response booklet for Year 3 students are more widely spaced than for Years 5, 7 and 9 and more capable students will address the topic at a higher level. The same marking guide is used to assess all students' writing, allowing for a national comparison of student writing capabilities across these year levels.
Assessing the Writing task
Students’ writing will be marked by assessors who have received intensive training in the application of a set of ten writing criteria summarised below. The full Persuasive Writing Marking Guide (5.7 MB) and the writing stimulus used to prompt the writing samples in the Marking Guide are both available for download.
Descriptions of the Writing criteria
| Criterion | Description of marking criterion |
| --- | --- |
| Audience | The writer’s capacity to orient, engage and persuade the reader |
| Text structure | The organisation of the structural components of a persuasive text (introduction, body and conclusion) into an appropriate and effective text structure |
| Ideas | The selection, relevance and elaboration of ideas for a persuasive argument |
| Persuasive devices | The use of a range of persuasive devices to enhance the writer’s position and persuade the reader |
| Vocabulary | The range and precision of contextually appropriate language choices |
| Cohesion | The control of multiple threads and relationships across the text, achieved through the use of grammatical elements (referring words, text connectives, conjunctions) and lexical elements (substitutions, repetitions, word associations) |
| Paragraphing | The segmenting of text into paragraphs that assists the reader to follow the line of argument |
| Sentence structure | The production of grammatically correct, structurally sound and meaningful sentences |
| Punctuation | The use of correct and appropriate punctuation to aid the reading of the text |
| Spelling | The accuracy of spelling and the difficulty of the words used |
The Narrative Writing Marking Guide (used in 2008-2010) is also available.
Use of formulaic structures
Beginning writers can benefit from being taught how to use structured scaffolds. One such scaffold that is commonly used is the five paragraph argument essay. However, when students become more competent, the use of this structure can be limiting. As writers develop their capabilities they should be encouraged to move away from formulaic structures and to use a variety of different persuasive text types, styles and language features, as appropriate to different topics.
Students are required to write their opinion and to draw on personal knowledge and experience when responding to test topics. Students are not expected to have detailed knowledge about the topic. Students should feel free to use any knowledge that they have on the topic, but should not feel the need to manufacture evidence to support their argument. In fact, students who do so may undermine the credibility of their argument by making statements that are implausible.
Example topics and different styles:
City or country (see example prompt)
A beginning writer could write their opinion about living in either the city or country and give reasons for it. A more capable writer might also choose to take one side and argue for it. However, this topic also lends itself to a comparative style response from a more capable writer. It can be argued there are benefits and limitations to living in the city and living in the country. A writer could also choose to introduce other options, for example living in a large country town that might have the benefits of city and rural life. Positions taken on this topic are likely to elicit logical, practical reasons and anecdotes based on writers’ experiences.
Books or TV (see example prompt)
A beginning writer could write about their opinion of one aspect and give reasons for it. However, this topic lends itself to a comparative style response from a more capable writer. It can be argued there are benefits and limitations to both books and TV. The reasons for either side of the topic are likely to elicit logical, practical reasons and personal anecdotes based on the writer's experiences of both books and TV.
It is cruel to keep animals in cages and zoos (see example prompt)
A beginning writer could take on one side of the topic and give reasons for it. However, this topic lends itself to be further redefined. For example, a more capable writer might develop the difference between open range zoos and small cages and then argue the merits of one and limitations of the other. The animal welfare issues raised by this topic are likely to elicit very empathetic and emotive arguments based on the writer's knowledge about zoos and animals.
More information on persuasive writing can be found in the FAQ section for NAPLAN - Writing test.
National minimum standards
The national minimum standards for writing describe some of the skills and understandings students can generally demonstrate at their particular year of schooling. The standards are intended to be a snapshot of typical achievement and do not describe the full range of what students are taught or what they may achieve.
For further information on the national minimum standards see Performance Standards.
Atomic oxygen, a corrosive space gas, finds many applications on Earth.
An Atomic Innovation for Artwork
Oxygen may be one of the most common substances on the planet, but recent space research has unveiled a surprising number of new applications for the gas, including restoring damaged artwork.
It all started with a critical problem facing would-be spacecraft: the gases just outside the Earth’s atmosphere are highly corrosive. While most oxygen atoms on Earth’s surface occur in pairs, in space the pair is often split apart by short-wave solar radiation, producing singular atoms. Because oxygen so easily bonds with other substances, it is highly corrosive in atomic form, and it gradually wears away the protective layering on orbiting objects such as satellites and the International Space Station (ISS).
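To sketch the physics (an added illustration; the article itself gives no equation): the splitting described here is ultraviolet photodissociation. A photon with a wavelength below roughly 242 nm carries enough energy to break the bond in molecular oxygen:

\[ \mathrm{O_2} + h\nu \;\longrightarrow\; \mathrm{O} + \mathrm{O} \qquad (\lambda \lesssim 242\ \mathrm{nm}) \]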
To combat this destructive gas, NASA recreated it on Earth and applied it to different materials to see what would prove most resistant. The coatings developed through these experiments are currently used on the ISS.
During the tests, however, scientists also discovered applications for atomic oxygen that have since proved a success in the private sector.
Breathing New Life into Damaged Art
In their experiments, NASA researchers quickly realized that atomic oxygen interacted primarily with organic materials. Soon after, they partnered with churches and museums to test the gas’s ability to restore fire-damaged or vandalized art. Atomic oxygen was able to remove soot from fire-damaged artworks without altering the paint.
It was first tested on oil paintings: In 1989, an arson fire at St. Alban’s Episcopal Church in Cleveland nearly destroyed a painting of Mary Magdalene. Although the paint was blistered and charred, atomic oxygen treatment plus a reapplication of varnish revitalized it. And in 2002, a fire at St. Stanislaus Church (also in Cleveland) left two paintings with soot damage, but atomic oxygen removed it.
Buoyed by the successes with oil paints, the engineers also applied the restoration technique to acrylics, watercolors, and ink. At Pittsburgh’s Carnegie Museum of Art, where an Andy Warhol painting, Bathtub, had been kissed by a lipstick-wearing vandal, a technician successfully removed the offending pink mark with a portable atomic oxygen gun. The only evidence that the painting had been treated—a lightened spot of paint—was easily restored by a conservator.
A Genuine Difference-maker
When the successes in art restoration were publicized, forensic analysts who study documents became curious about using atomic oxygen to detect forgeries. They found that it can assist analysts in figuring out whether important documents such as checks or wills have been altered, by revealing areas of overlapping ink created in the modifications.
The gas has biomedical applications as well. Atomic oxygen technology can be used to decontaminate orthopedic surgical hip and knee implants prior to surgery. Such contaminants contribute to inflammation that can lead to joint loosening and pain, or even necessitate removing the implant. Previously, there was no known chemical process that fully removed these inflammatory toxins without damaging the implants. Atomic oxygen, however, can oxidize any organic contaminants and convert them into harmless gases, leaving a contaminant-free surface.
Thanks to NASA’s work, atomic oxygen—once studied in order to keep it at bay in space—is being employed in surprising, powerful ways here on Earth.
Is light made of waves, or particles?
This fundamental question has dogged scientists for decades, because light seems to be both. However, until now, experiments have revealed light to act either like a particle or a wave, but never the two at once.
Now, for the first time, a new type of experiment has shown light behaving like both a particle and a wave simultaneously, providing a new dimension to the quandary that could help reveal the true nature of light, and of the whole quantum world.
The debate goes back at least as far as Isaac Newton, who advocated that light was made of particles, and James Clerk Maxwell, whose successful theory of electromagnetism, unifying the forces of electricity and magnetism into one, relied on a model of light as a wave. Then in 1905, Albert Einstein explained a phenomenon called the photoelectric effect using the idea that light was made of particles called photons (this discovery won him the Nobel Prize in physics).
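To make the photon picture concrete (an added note, not part of the article): each photon carries an energy proportional to the light's frequency, and in the photoelectric effect the maximum kinetic energy of an ejected electron is that energy minus the work function of the metal:

\[ E = h f, \qquad K_{\max} = h f - \phi \]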
Ultimately, there's good reason to think that light is both a particle and a wave. In fact, the same seems to be true of all subatomic particles, including electrons and quarks and even the recently discovered Higgs boson-like particle. The idea is called wave-particle duality, and is a fundamental tenet of the theory of quantum mechanics.
Depending on which type of experiment is used, light, or any other type of particle, will behave like a particle or like a wave. So far, both aspects of light's nature haven't been observed at the same time.
But still, scientists have wondered, does light switch from being a particle to being a wave depending on the circumstance? Or is light always both a particle and a wave simultaneously?
Now, for the first time, researchers have devised a new type of measurement apparatus that can detect both particle and wave-like behavior at the same time. The device relies on a strange quantum effect called quantum nonlocality, a counter-intuitive notion that boils down to the idea that the same particle can exist in two locations at once.
"The measurement apparatus detected strong nonlocality, which certified that the photon behaved simultaneously as a wave and a particle in our experiment," physicist Alberto Peruzzo of England's University of Bristol said in a statement. "This represents a strong refutation of models in which the photon is either a wave or a particle."
Peruzzo is lead author of a paper describing the experiment published in the Nov. 2 issue of the journal Science.
The experiment further relies on another weird aspect of quantum mechanics — the idea of quantum entanglement. Two particles can become entangled so that actions performed on one particle affect the other. In this way, the researchers were able to allow the photons in the experiment to delay the choice of whether to be particles or waves.
MIT physicist Seth Lloyd, who was not involved in the project, called the experiment "audacious" in a related essay in Science, and said that while it allowed the photons to delay the choice of being particles or waves for only a few nanoseconds, "if one has access to quantum memory in which to store the entanglement, the decision could be put off until tomorrow (or for as long as the memory works reliably). So why decide now? Just let those quanta slide!"
INDIA INFO: India - More Indian Musical Instruments
by V.A.Ponmelil
More Indian Musical Instruments
Chitra Veena / Gotu Vadhyam
The Chitraveena, which is also referred to as the gotuvadhyam, is one of the most exquisite instruments. It is a 21-stringed fretless lute similar to the Vichitraveena.
It contains a flat top, two resonant chambers, and a hollow stem of wood. While the right hand plectrums pluck the strings, the left hand slides a piece of wood over the strings.
It is one of the oldest instruments of the world and the forerunner of the fretted Saraswati Veena.
The word Jaltarang means "waves in water". The jaltarang is an interesting ancient musical instrument consisting of a series of tuned bowls arranged in a semicircle around the performer.
The bowls are of different sizes and are tuned precisely to the pitches of various ragas by adding appropriate amounts of water. The instrument is played by striking the inside edge of the bowls with two small wooden sticks, one held in each hand.
Jal tarang is not very common and is normally found in the accompaniment of kathak dancers.
The Morsing is a tiny instrument which is held in the left hand, the prongs against the upper and lower front teeth.
The tongue, which protrudes from the mouth, is made of spring steel. This is plucked with the index finger of the right hand (backwards, not forwards) while the tone and timbre are adjusted by changing the shape of the mouth cavity and moving the tongue. Further control of the sound can be achieved with the breath.
Like the mridangam, the morsing is tuned to the Shruti and fine tuning is achieved by placing small amounts of bee's wax on the end of the tongue.
The Shank is one of the ancient instruments of India. It is also referred to as the sushirvadya, which is associated with religious functions.
In India it is considered very sacred, being regarded as one of the attributes of Lord Vishnu. Before using, the Shankh is drilled in such a way as to produce a hole at the base, taking care that the natural hole is not disturbed. In the Atharva Veda, one finds reference to the Shankh, though it existed long before. In the Bhagavad Gita, during the time of war, the Shankh played an important role. It also has different names, like Panch Janya Shankh, Devadatt Shankh, Mahashan Ponder Shankh and more.
Even in Valmiki's Ramayana, mention of the Shankh can be traced. In temples, the Shankh is played in the mornings and evenings during prayers. In homes, it is played before the start of a havan, yagnopavit, marriage, etc.
The Kombu is a wind instrument or a kind of trumpet which is usually played along with the Panchavadyam or the Pandi Melam or the Panchari melam. This musical instrument is like a long horn and is usually seen in Kerala state of South India.
Young goats learn new and distinctive bleating "accents" once they begin to socialise with other kids.
The discovery is a surprise because the sounds most mammals make were thought to be too primitive to allow subtle variations to emerge or be learned. The only known exceptions are humans, bats and cetaceans – although many birds, including songbirds, parrots and hummingbirds have legendary song-learning or mimicry abilities.
Now, goats have joined the club. "It's the first ungulate to show evidence of this," says Alan McElligott of Queen Mary, University of London.
McElligott and his colleague, Elodie Briefer, made the discovery using 23 newborn kids. To reduce the effect of genetics, all were born to the same father, but from several mothers, so the kids were a mixture of full siblings plus their half-brothers and sisters.
The researchers allowed the kids to stay close to their mothers, and recorded their bleats at the age of 1 week. Then, the 23 kids were split randomly into four separate "gangs" ranging from five to seven animals. When all the kids reached 5 weeks, their bleats were recorded again. "We had about 10 to 15 calls per kid to analyse," says McElligott.
Some of the calls are clearly different to the human ear, but the full analysis picked out more subtle variations, based on 23 acoustic parameters. What emerged was that each kid gang had developed its own distinctive patois. "It probably helps with group cohesion," says McElligott.
"People presumed this didn't exist in most mammals, but hopefully now, they'll check it out in others," says McElligott. "It wouldn't surprise me if it's found in other ungulates and mammals."
Erich Jarvis of Duke University Medical Center in Durham, North Carolina, says the results fit with an idea he has developed with colleague Gustavo Arriaga, arguing that vocal learning is a feature of many species.
"I would call this an example of limited vocal learning," says Jarvis. "It involves small modifications to innately specified learning, as opposed to complex vocal learning which would involve imitation of entirely novel sounds."
Journal reference: Animal Behaviour, DOI: 10.1016/j.anbehav.2012.01.020
Evolution can fall well short of perfection. Claire Ainsworth and Michael Le Page assess where life has gone spectacularly wrong
THE ascent of Mount Everest's 8848 metres without bottled oxygen in 1978 suggests that human lungs are pretty impressive organs. But that achievement pales in comparison with the feat of the griffon vulture that set the record for the highest recorded bird flight in 1975 when it was sucked into the engine of a plane flying at 11,264 metres.
Birds can fly so high partly because of the way their lungs work. Air flows through bird lungs in one direction only, pumped through by interlinked air sacs on either side. This gives them numerous advantages over lungs like our own. In mammals' two-way lungs, not as much fresh air reaches the deepest parts of the lungs, and incoming air is diluted by the oxygen-poor air that remains after ...
Collins Field Guide to the Birds of South America: Non-Passerines: From rheas to woodpeckers
The only field guide to illustrate and describe every non-passerine species of bird in South America, this superbly illustrated field guide to the birds of South America covers all the non-passerines (Divers to Woodpeckers). All plumages for each species are illustrated, including males, females and juveniles. Featuring 1,273 species, the text gives information on key identification features, habitat, and songs and calls. The 156 colour plates appear opposite their relevant text for quick and easy reference and include all field identifiable species, including subspecies and colour morphs. Distribution maps are included, showing where each species can be found and how common it is, to further aid identification.
The best gifts are handmade. Make this craft together and give it as a gift to a parent, grandparent, or your child's classmates.
This craft is best suited for parents to make on their own or with minimal help from kids.
You'll need to allow extra time for glue or paint to dry.
Create with us skills involve self-expression, experimentation, and imagination through visual arts (like painting and sculpting), dramatic play, cooking, and dance.
Read with us skills focus on early literacy and include: listening, comprehension, speech, reading, writing, vocabulary, letters and their sounds, and spelling.
In January 1968, Nixon decided to once again seek the nomination of the Republican Party for president. Portraying himself as a figure of stability in a time of national upheaval, Nixon promised a return to traditional values and "law and order." He fended off challenges from other candidates such as California Governor Ronald Reagan, New York Governor Nelson Rockefeller, and Michigan Governor George Romney to secure the nomination at the Republican convention in Miami. Nixon unexpectedly chose Governor Spiro Agnew of Maryland as his running mate.
Nixon's campaign was helped by the tumult within the Democratic Party in 1968. Consumed by the war in Vietnam, President Lyndon B. Johnson announced on March 31 that he would not seek re-election. On June 5, immediately after winning the California primaries, former attorney general and then-U.S. Senator Robert F. Kennedy (brother of the late president John F. Kennedy) was assassinated in Los Angeles. The campaign of Vice President Hubert Humphrey, the Democratic nominee for president, went into a tailspin after the Democratic national convention in Chicago was marred by mass protests and violence. By contrast, Nixon appeared to represent a calmer society, and his campaign promised peace at home and abroad. Despite a late surge by Humphrey, Nixon won by nearly 500,000 popular votes. Third-party candidate George Wallace, the once and future governor of Alabama, won nearly ten million popular votes and 46 electoral votes, principally in the Deep South.
Once in office, Nixon and his staff faced the problem of how to end the Vietnam War, which had broken his predecessor's administration and threatened to cause major unrest at home. As protesters in America's cities called for an immediate withdrawal from Southeast Asia, Nixon made a nationally televised address on November 3, 1969, calling on the "silent majority" of Americans to renew their confidence in the American government and back his policy of seeking a negotiated peace in Vietnam. Earlier that year, Nixon and his Defense Secretary Melvin Laird had unveiled the policy of "Vietnamization," which entailed reducing American troop levels in Vietnam and transferring the burden of fighting to South Vietnam; accordingly, U.S. troop strength in Vietnam fell from 543,000 in April 1969 to zero on March 29, 1973. Nevertheless, the Nixon administration was harshly criticized for its use of American military force in Cambodia and its stepped-up bombing raids during the later years of the first term.
Nixon's foreign policy aimed to reduce international tensions by forging new links with old rivals. In February 1972, Nixon traveled to Beijing, Hangzhou, and Shanghai in China for talks with Chinese leaders Chairman Mao Zedong and Premier Zhou Enlai. Nixon's trip was the first high-level contact between the United States and the People's Republic of China in more than twenty years, and it ushered in a new era of relations between Washington and Beijing. Several weeks later, in May 1972, Nixon visited Moscow for a summit meeting with Leonid Brezhnev, general secretary of the Communist Party of the Soviet Union, and other Soviet leaders. Their talks led to the signing of the Strategic Arms Limitation Treaty, the first comprehensive and detailed nuclear weapons limitation pact between the two superpowers.
Foreign policy initiatives represented only one aspect of Nixon's presidency during his first term. In August 1969, Nixon proposed the Family Assistance Plan, a welfare reform that would have guaranteed an income to all Americans. The plan, however, did not receive congressional approval. In August 1971, spurred by high inflation rates, Nixon imposed wage and price controls in an effort to gain control of price levels in the U.S. economy; at the same time, prompted by worries over the soundness of U.S. currency, Nixon took the dollar off the gold standard and let it float against other countries' currencies.
On July 20, 1969, astronauts Neil Armstrong and Buzz Aldrin became the first humans to walk on the Earth's moon, while fellow astronaut Michael Collins orbited in the Apollo 11 command module. Nixon made what has been termed the longest-distance telephone call ever made to speak with the astronauts from the Oval Office. And on September 28, 1971, Nixon signed legislation abolishing the military draft.
In addition to such weighty affairs of state, Nixon's first term was also full of lighter-hearted moments. On April 29, 1969, Nixon awarded the Presidential Medal of Freedom, the nation's highest civilian honor, to Duke Ellington, and then led hundreds of guests in singing "Happy Birthday" to the famed band leader. On June 12, 1971, Tricia became the sixteenth White House bride when she and Edward Finch Cox of New York married in the Rose Garden. (Julie had wed Dwight David Eisenhower II, grandson of President Eisenhower, on December 22, 1968, in New York's Marble Collegiate Church, while her father was President-elect.) Perhaps most famous was Nixon's meeting with Elvis Presley on December 21, 1970, when the president and the king discussed the drug problem facing American youth.
Re-election, Second Term, and Watergate
In his 1972 bid for re-election, Nixon defeated South Dakota Senator George McGovern, the Democratic candidate for president, by one of the widest electoral margins ever, winning 520 electoral college votes to McGovern's 17 and nearly 61 percent of the popular vote. Just a few months later, investigations and public controversy over the Watergate scandal had sapped Nixon's popularity. The Watergate scandal began with the June 1972 discovery of a break-in at the Democratic National Committee offices in the Watergate office complex in Washington, D.C., but media and official investigations soon revealed a broader pattern of abuse of power by the Nixon administration, leading to his resignation.
The Watergate burglars were soon linked to officials of the Committee to Re-elect the President, the group that had run Nixon's 1972 re-election campaign. Soon thereafter, several administration officials resigned; some, including former attorney general John Mitchell, were later convicted of offenses connected with the break-in and other crimes and went to jail. Nixon denied any personal involvement with the Watergate burglary, but the courts forced him to yield tape recordings of conversations between the president and his advisers indicating that the president had, in fact, participated in the cover-up, including an attempt to use the Central Intelligence Agency to divert the FBI's investigation into the break-in. (For more information about Watergate, please visit the Ford Presidential Library and Museum's online Watergate exhibit.)
Investigations into Watergate also revealed other abuses of power, including numerous warrantless wiretaps on reporters and others, campaign "dirty tricks," and the creation of a "Plumbers" unit within the White House. The Plumbers, formed in response to the leaking of the Pentagon Papers to news organizations by former Pentagon official Daniel Ellsberg, broke into the office of Ellsberg's psychiatrist.
Adding to Nixon's worries was an investigation into Vice President Agnew's ties to several campaign contributors. The Department of Justice found that Agnew had taken bribes from Maryland construction firms, leading to Agnew's resigning in October 1973 and his entering a plea of no contest to income tax evasion. Nixon nominated Gerald Ford, Republican leader in the House of Representatives, to succeed Agnew. Ford was confirmed by both houses of Congress and took office on December 6, 1973.
Such controversies all but overshadowed Nixon's other initiatives in his second term, such as the signing of the Paris peace accords ending American involvement in the Vietnam war in January 1973; two summit meetings with Brezhnev, in June 1973 in Washington and in June and July 1974 in Moscow; and the administration's efforts to secure a general peace in the Middle East following the Yom Kippur War of 1973.
The revelations from the Watergate tapes, combined with actions such as Nixon's firing of Watergate special prosecutor Archibald Cox, badly eroded the president's standing with the public and Congress. Facing certain impeachment and removal from office, Nixon announced his decision to resign in a national televised address on the evening of August 8, 1974. He resigned effective at noon the next day, August 9, 1974. Vice President Ford then became president of the United States. On September 8, 1974, Ford pardoned Nixon for "all offenses against the United States" which Nixon "has committed or may have committed or taken part in" during his presidency. In response, Nixon issued a statement in which he said that he regretted "not acting more decisively and forthrightly in dealing with Watergate."
Reading 1: Three Days of Carnage at Gettysburg
(Refer to Map 2 as you read the description of the battle.)
Units of the Union and the Confederate armies met near Gettysburg on June 30, 1863, and each quickly requested reinforcements. The main battle opened on July 1, with early morning attacks by the Confederates on Union troops on McPherson Ridge, west of the town. Though outnumbered, the Union forces held their position. The fighting escalated throughout the day as more soldiers from each army reached the battle area. By 4 p.m., the Union troops were overpowered, and they retreated through the town, where many were quickly captured. The remnants of the Union force fell back to Cemetery Hill and Culp's Hill, south of town. The Southerners failed to pursue their advantage, however, and the Northerners labored long into the night regrouping their men.
Throughout the night, both armies moved their men to Gettysburg and took up positions in preparation for the next day. By the morning of July 2, the main strength of both armies had arrived on the field. Battle lines were drawn up in sweeping arcs similar to a "J," or fishhook shape. The main portions of both armies were nearly a mile apart on parallel ridges: Union forces on Cemetery Ridge, Confederate forces on Seminary Ridge, to the west. General Robert E. Lee, commanding the Confederate troops, ordered attacks against the Union left and right flanks (ends of the lines). Starting in late afternoon, Confederate General James Longstreet's attacks on the Union left made progress, but they were checked by Union reinforcements brought to the fighting from the Culp's Hill area and other uncontested parts of the Union battle line. To the north, at the bend and barb of the fishhook (the other flank), Confederate General Richard Ewell launched his attack in the evening as the fighting at the other end of the fishhook was subsiding. Ewell's men seized part of Culp's Hill, but elsewhere they were repulsed. The day's results were indecisive for both armies.
In the very early morning of July 3, the Union army forced out the Confederates who had successfully taken Culp's Hill the previous evening. Then General Lee, having attacked the ends of the Union line the previous day, decided to assail the Union center. The attack was preceded by a two-hour artillery bombardment of Cemetery Hill and Cemetery Ridge. For a time, the massed guns of both armies were engaged in a thunderous duel for supremacy. The Union defensive position held. In a final attempt to gain the initiative and win the battle, Lee sent approximately 12,000 soldiers across the one mile of open fields that separated the two armies near the Union center. General George Meade, commander of the Union forces, anticipated such a move and had readied his army. The Union lines did not break. Only about half of the Southerners who participated in this action retired to safety. Despite great courage, the attack (sometimes called Pickett's Charge or Longstreet's assault) was repulsed with heavy losses. Crippled by extremely heavy casualties in the three days at Gettysburg, the Confederates could no longer continue the battle, and on July 4 they began to withdraw from Gettysburg.
1. Which army had the advantage after the first day of fighting? What were some reasons for their success? Could they have been even more successful?
2. What was the situation by the evening of July 2?
3. What evidence from the previous day's fighting brought General Lee to decide on the strategy for Pickett's Charge on July 3? What was the result of that assault?
4. Why did General Lee decide to withdraw from Gettysburg?
Reading 1 was adapted from the National Park Service's visitor's guide for Gettysburg National Military Park.
Algorithm Positions Solar Trackers, Movie Stars
March 30, 2011
Math and programming experts at a federal laboratory took an algorithm used to track the stars and rewrote its code to precisely follow the sun, even taking into consideration the vagaries of the occasional leap second.
Now, the algorithm and its software are helping solar power manufacturers build more precise trackers, orchards to keep their apples spotless and movie makers to keep the shadows off movie stars.
The Solar Position Algorithm (SPA) was developed at the U.S. Department of Energy's National Renewable Energy Laboratory to calculate the sun's position with an unmatched low uncertainty of +/- 0.0003 degrees over the period from the year -2000 to 6000 (2001 B.C. until just short of 4,000 years from now). That's more than 30 times more precise than all other algorithms used in solar energy applications, which claim no better than +/- 0.01 degrees and are valid for a maximum of 50 years. Even those claims cannot be validated, because an occasional leap second must be added to account for the slowly, irregularly increasing length of the mean solar day. The SPA does account for the leap second.
That difference in uncertainty levels is no small change, because an error of .01 degrees at noon can throw calculations off by 2 or 3 percent at sunrise or sunset, said NREL Senior Scientist Ibrahim Reda, the leader on the project. "Every uncertainty of 1 percent in the energy budget is millions of dollars uncertainty for utility companies and bankers," Reda said. "Accuracy is translated into dollars. When you can be more accurate, you save a lot of money."
"Siemens Industry Inc. uses NREL's SPA in its newest and smallest S7-1200 compact controller," says Paul Ruland of Siemens Industry, Inc. "Siemens took that very complex calculation, systemized it into our code and made a usable function block that its customers can use with their particular technologies to track the sun in the most efficient way. The end result is a 30 percent increase in accuracy compared to other technologies."
Science, Engineering and Math All Add to Breakthroughs
An algorithm is a set of rules for solving a mathematical problem in a finite number of steps, even though those steps can number in the hundreds or thousands.
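To make that concrete, here is a miniature solar-position algorithm in Python. It is emphatically not NREL's SPA (which runs to about 1,100 lines and reaches +/- 0.0003 degrees); it uses two classic low-accuracy approximations, Cooper's declination formula and a simple equation-of-time fit, and every name and constant below is chosen for illustration only.

```python
import math
from datetime import datetime, timezone

def approx_solar_elevation(lat_deg, lon_deg, when_utc):
    """Rough solar elevation in degrees, good to about half a degree --
    nowhere near SPA's accuracy, but it shows the basic ingredients:
    date, time, and the observer's location."""
    doy = when_utc.timetuple().tm_yday
    frac_hour = when_utc.hour + when_utc.minute / 60 + when_utc.second / 3600

    # Solar declination (Cooper's approximation), in radians.
    decl = math.radians(23.45) * math.sin(math.radians(360 * (284 + doy) / 365))

    # Equation of time in minutes (a standard empirical fit).
    b = math.radians(360 * (doy - 81) / 364)
    eot = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

    # True solar time, then hour angle: 0 at solar noon, 15 degrees per hour.
    solar_time = frac_hour + eot / 60 + lon_deg / 15
    hour_angle = math.radians(15 * (solar_time - 12))

    lat = math.radians(lat_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev))

# Golden, Colorado (NREL's home) near local solar noon: roughly 53 degrees.
print(approx_solar_elevation(39.74, -105.18,
                             datetime(2011, 3, 30, 19, 0, tzinfo=timezone.utc)))
```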
NREL is known more for its solar, wind, and biofuel researchers than for its work in advanced math. But algorithms are key to so many scientific and technological breakthroughs today that a scientist well-versed in the math of algorithms is behind many of NREL's big innovations.
Since SPA was published on NREL's website, more than 4,000 users from around the world have downloaded it. In the European Union, for the past three years, it has been the reference algorithm to calculate the sun's position both for solar energy and atmospheric science applications. It has been licensed to, and downloaded by, major U.S. manufacturers of sun trackers, military equipment and cell phones. It has been used to boost agriculture and to help forecast the weather. Archaeologists, universities and religious organizations have employed SPA, as have other national laboratories.
Fewer Dropped Cell-Phone Calls
Billions of cell-phone calls are made each day, and they stay connected only because algorithms help determine exactly when to switch signals from one satellite to another.
Cell-phone companies can use the SPA to know exactly the moments when the phone, satellite, and the bothersome sun are in the same alignment, vulnerable to disconnections or lost calls. "The cell phone guys use SPA to know the specific moment to switch to another satellite so you're not disconnected," said Reda, who has a master's degree in electrical engineering/measurement from the University of Colorado. "Think of how many millions of people would be disconnected if there's too much uncertainty about the sun's position."
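A minimal sketch of that alignment test, assuming the carrier already has the sun's position (for example from SPA) and the satellite's position from its ephemeris. The two-degree threshold and all names here are illustrative assumptions, not values from any real system:

```python
import math

def angular_separation_deg(az1, el1, az2, el2):
    """Great-circle angle between two sky directions given as
    (azimuth, elevation) pairs in degrees."""
    az1, el1, az2, el2 = map(math.radians, (az1, el1, az2, el2))
    cos_sep = (math.sin(el1) * math.sin(el2)
               + math.cos(el1) * math.cos(el2) * math.cos(az1 - az2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

SUN_OUTAGE_THRESHOLD_DEG = 2.0  # assumed; real systems tune this value

def expect_sun_outage(sun_az, sun_el, sat_az, sat_el):
    """True when the sun sits nearly behind the satellite as seen from the
    ground station, close enough to swamp the link with solar noise."""
    return angular_separation_deg(sun_az, sun_el, sat_az, sat_el) < SUN_OUTAGE_THRESHOLD_DEG
```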
From a Tool for Solar Scientists to Widespread Uses
SPA sprang from NREL's need to calibrate solar measuring instruments at its Solar Radiation Research Laboratory. "We characterize the instruments based on the solar angle," Reda said. "It's vital that instruments get a precise read on the amount of energy they are getting from the sun at precise solar angle."
That will become even more critical in the future when utilities add more energy garnered from the sun to the smart grid. "The smart grid has to know precisely what your budget is for each resource you are using — oil, coal, solar, wind," Reda said.
Making an Astronomy Algorithm One for the Sun
Reda borrowed from the "Astronomical Algorithms," which is based on the Variations Séculaires des Orbites Planétaires theory (VSOP87), developed in 1982 and then modified in 1987. Astronomers trust it to let them know exactly where to point their telescopes to get the best views of Jupiter, Alpha Centauri, the Magellanic Clouds, or whatever celestial bodies they are studying. "We were able to separate and modify that global astronomical algorithm and apply it just to solar energy, while making it less complex and easy to implement," said Reda, highlighting the role of his colleague, Afshin Andreas, who has a degree in engineering physics from the Colorado School of Mines, as well as expertise in computer programming.
They spent an intense three or four weeks of programming to make sure the equations were accurate before distributing the 1,100 lines of code, Andreas said.
They used almanacs and historical data to ensure that what the algorithm was calculating agreed with what observers from previous generations said about the sun's position on a particular day. "We did spot checks so we would have a good comfort level that the future projections are accurate," Reda said.
"We used our independent math and programming skills to make sure that our results agreed, Reda said.
Available for Licensing, Free Public Use
The new SPA algorithm simply served the needs of NREL scientists, until the day it was put on NREL's public website.
"A lot of people started downloading it," so NREL established some rules of use, Reda said. Individuals and universities could use SPA free of charge, but companies with commercial interests would have to pay for the software.
Factoring in Leap Seconds Improves Accuracy
NREL's SPA knows the position of the sun in the sky over an 8,000-year period partly because it has learned when to add those confounding leap seconds. Solar positioners that don't factor in the leap second can stay accurate for only a few years or a few decades.
The length of an Earth day isn't determined by an expensive watch, but by the actual rotation of the Earth.
Almost immeasurably, the Earth's rotation is slowing down, meaning the solar day is getting just a tiny bit longer. But it's not doing so at a constant rate. "It happens in unpredictable ways," Reda said. Sometimes a leap second is added every year; sometimes there isn't a need for another leap second for three or four years. For example, the International Earth Rotation and Reference Systems Service (IERS) added six leap seconds over the course of seven years between 1992 and 1998, but has added just one extra second since 2006.
The algorithm calculates exactly when to add a leap second because included in its equations are rapid, monthly, and long-term data on the solar day provided by IERS, Reda and Andreas said.
"IERS receives the data from many observatories around the world," Reda added. "Each observatory has its own measuring instruments to measure the Earth's rotation. A consensus correction is then calculated for the fraction of second. As long as we know the time, and how much the Earth's rotation has slowed, we know the sun's position precisely."
That precision has proved useful in unexpected fields.
Practical Uses in Agriculture, Movie Making
One person who bought a license for the SPA software has an apple orchard and wanted to prevent the black spots that turn off finicky consumers and make wholesale buyers hesitate, Reda said.
The black spots appear when too much sun hits a particular apple, a particular tree or a particular row of trees in an orchard.
The spots can be prevented by showering the apples with water, but growers don't want to use more water than necessary.
SPA's precise tracking of the sun tells the grower exactly when the automatic sprinkler should spray for a few moments on a particular set of trees, and when it's OK to shut off that sprayer and turn on the next one. SPA communicates with the sprinkler system so, "instead of spraying the whole orchard, the spray moves minute by minute," Reda said. "He takes our tool and plugs it into the software that controls the sprinkler system. And he saves a lot of water."
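The control loop that description implies is simple to sketch. Everything below, including the row names, azimuth spans, and the stand-in for an SPA call, is invented for illustration:

```python
import time

ROW_AZIMUTH_SPANS = {         # hypothetical layout: which row the sun strikes
    "row_A": (90.0, 120.0),   # spans of solar azimuth, in degrees
    "row_B": (120.0, 150.0),
    "row_C": (150.0, 180.0),
}

def row_in_the_sun(sun_azimuth_deg):
    """Return the row currently taking direct sun, or None."""
    for row, (lo, hi) in ROW_AZIMUTH_SPANS.items():
        if lo <= sun_azimuth_deg < hi:
            return row
    return None

def control_loop(get_sun_azimuth, spray_row):
    """Minute by minute, spray only the row the sun is hitting.
    `get_sun_azimuth` stands in for an SPA call; `spray_row` drives the
    valves, with spray_row(None) meaning all sprayers off."""
    while True:
        spray_row(row_in_the_sun(get_sun_azimuth()))
        time.sleep(60)
```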
Religious groups with traditions of praying at a particular time of day have even turned to SPA for precision.
A movie-camera manufacturer has purchased the SPA software to help cinematographers avoid the costly waste that results when shadows disrupt outdoor shooting.
"They have cameras on those big cranes and booms, and typically they'd have to manually change them based on the shadows," Reda said. "This company that bought it has an automatic camera positioner."
Combining the positioner with the SPA's calculations, the camera can tell the precise moment when the sun will, say, peek above the tall buildings of an outdoor set. "They don't have to make so many judgments on their own about where the camera should be positioned," Reda said. "It gives them a clearer picture."
Learn more about NREL's solar radiation research and the Electricity, Resources, and Building Systems Integration Center.
— Bill Scanlon
Tuberculosis (TB) is a chronic bacterial infection that usually infects the lungs, although other organs such as the kidneys, spine, or brain are sometimes involved. TB is primarily an airborne disease.
There is a difference between being infected with the TB bacterium and having active tuberculosis disease.
There are three important ways to describe the stages of TB. They are as follows:
- Exposure. This occurs when a person has been in contact with, or exposed to, another person who is thought to have or does have TB. The exposed person will have a negative skin test, a normal chest X-ray, and no signs or symptoms of the disease.
- Latent TB infection. This occurs when a person has the TB bacteria in his or her body, but does not have symptoms of the disease. The infected person's immune system walls off the TB organisms, and they remain dormant throughout life in 90 percent of people who are infected. This person would have a positive skin test but a normal chest X-ray.
- TB disease. This describes the person who has signs and symptoms of an active infection. The person would have a positive skin test and a positive chest X-ray.
The predominant TB bacterium is Mycobacterium tuberculosis (M. tuberculosis). Many people infected with M. tuberculosis never develop active TB and remain in the latent TB stage. However, in people with weakened immune systems, especially those with HIV (human immunodeficiency virus), TB organisms can overcome the body's defenses, multiply, and cause an active disease.
TB affects all ages, races, income levels, and both genders. Those at higher risk include the following:
- People who live or work with others who have TB
- Medically underserved populations
- Homeless people
- People from other countries where TB is prevalent
- People in group settings, such as nursing homes
- People who abuse alcohol
- People who use intravenous drugs
- People with impaired immune systems
- The elderly
- Health care workers who come in contact with high-risk populations
The following are the most common symptoms of active TB. However, each individual may experience symptoms differently.
- Cough that will not go away
- Chest pain
- Loss of appetite
- Unintended weight loss
- Poor growth in children
- Coughing blood or sputum
- Chills or night sweats
The symptoms of TB may resemble other lung conditions or medical problems. Consult a physician for a diagnosis.
The TB bacterium is spread through the air when an infected person coughs, sneezes, speaks, sings, or laughs; however, repeated exposure to the germs is usually necessary before a person will become infected. It is not likely to be transmitted through personal items, such as clothing, bedding, a drinking glass, eating utensils, a handshake, a toilet, or other items that a person with TB has touched. Adequate ventilation is the most important measure to prevent the transmission of TB.
TB is diagnosed with a TB skin test. In this test, a small amount of testing material is injected into the top layer of the skin. If a certain size bump develops within two or three days, the test may be positive for tuberculosis infection. Additional tests to determine if a person has TB disease include X-rays and sputum tests.
TB skin tests are suggested for those:
- In high-risk categories.
- Who live or work in close contact with people who are at high risk.
- Who have never had a TB skin test.
For skin testing in children, the American Academy of Pediatrics recommends immediate testing:
- If the child is thought to have been exposed in the last five years.
- If the child has an X-ray that looks like TB.
- If the child has any symptoms of TB.
- If a child is coming from countries where TB is prevalent.
Yearly skin testing:
- For children with HIV.
- For children who are in jail.
Testing every two to three years:
- For children who are exposed to high-risk people.
Consider testing in children from ages 4 to 6 and 11 to 16:
- If a child's parent has come from a high-risk country.
- If a child has traveled to high-risk areas.
- If a child lives in a densely populated area.
Specific treatment will be determined by your physician based on:
- Your age, overall health, and medical history
- Extent of the disease
- Your tolerance for specific medications, procedures, or therapies
- Expectations for the course of the disease
- Your opinion or preference
Treatment may include:
- Short-term hospitalization
- For latent TB which is newly diagnosed: Usually a six- to 12-month course of an antibiotic called isoniazid will be given to kill off the TB organisms in the body.
- For active TB: Your doctor may prescribe three to four antibiotics in combination for a time period of six to nine months. Examples include: isoniazid, rifampin, pyrazinamide, and ethambutol. Patients usually begin to improve within a few weeks of the start of treatment. After two weeks of treatment with the correct medications, the patient is not usually contagious, provided that treatment is carried through to the end, as prescribed by a physician.
"Almanac"—the word comes from the Arabic al-manakh, meaning "the calendar," earlier "the weather," deriving ultimately from ma-, "a place," and nakha, "to kneel," or a place where camels kneel, a seasonal stopping place, a camp or settlement. Coming as it does from a nomadic human society, it is a fitting word as we talk about our bird life, and their travels and destinations, all as they are influenced by the season of the year.
In order to understand the vital interplay of time and space as they determine which birds they'll bring us, let us first set aside time to deal with space. For birds, Ohio's longitude has less to do with time, except as it determines the diurnal rhythms of night and day, and as it figured eons ago in the shifting of continents, where our present longitude marks our place between mountain ranges, at the edge of the feathering-out of the great prairies and the great forests, and consequently midway between the great north- and southbound rivers of birds in the Mississippi and Atlantic flyways. Our latitude, by contrast, is all about time for birds—their seasonal movements north and south, their life cycles along the way, the timing of migrations and even vagrancy, the changing length of daylight and the intensity of Earth's magnetic fields, even their habitats as developed in the topography of our land as formed by mile-high glaciers moving latitudinally thousands of years ago, forming our plains and hills, Lake Erie, and the Ohio River.
Survival for birds means successful breeding, and for this success timing is everything. For migrants, early arrival at the breeding grounds is balanced against the risk of arriving too soon to find adequate food; attempting a second brood must be balanced by the risk of an early reduction in food sources. The phenology of predators, frosts, food sources, leafing of local plants, rain cycles, etc., all affect breeding success, and the species we see have successfully adapted to these influences to remain with us today.
Humans have recently (here, over the past two hundred years) radically altered some of these influences, upsetting delicate balances, and our bird life is changing as a result.
We have removed some predators, and encouraged the proliferation of others. We have apparently caused climatic warming, with earlier springs and later winters. We have introduced exotic animals and plants. We have bulldozed and burned and filled in and poisoned bird habitats. We allow birds to be killed in great numbers, but not, we reassure ourselves, in numbers too great to diminish them. Our effect on the life cycles of birds is dramatic, ongoing, and uncertain as to ultimate outcome.
Reassuringly, it is still possible to discern primeval patterns of birds' natural life cycles throughout the year. Birders find the continuation of these cycles deeply satisfying as a continuous manifestation of the renewal of life, and a way to measure and better understand, during our short span, the passage of time.
Fifty or more species of our birds remain pretty much equally abundant year-round, present in good numbers in every month. Many are the most familiar of our familiar birds, but even to them the calendar brings profound changes. The crows, robins, blue jays, and song sparrows we see year-round are not always the same birds, as these are at least in part migratory species, with different cohorts inhabiting different places at different times of year. Their behavior, too, may change radically over the calendar year: robins that are solitary worm-eaters in summer will flock in winter to eat fruit. The breeding cycle, with all its changes over times, governs all—migrating, singing, incubating, fledging, flocking, molting. Many species have expanded their ranges over recent time—mockingbirds, titmice, cardinals, house finches—and many once-common birds have receded beyond Ohio's borders: prairie-chickens, Bachman's sparrows, and Bewick's wrens are no longer to be found here. Time has claimed some of our birds forever—the passenger pigeon, the Eskimo curlew, the Carolina parakeet—but there is time to save the rest.
A safe place to play
If you ask your child what he likes most about school, the answer you are likely to get is, “Recess!” It is important for kids to be active, get some fresh air, and release their pent up energy during and after the school day, and playgrounds are a great place to do so. However, faulty equipment, unsafe surfaces, and lack of appropriate supervision can result in injury. Each year, more than 200,000 children are treated in hospital emergency rooms for playground-related injuries. Schools are addressing this by developing rules for safe outdoor play on and off the playground. There are also a few things that you should keep in mind and convey to any other caregivers of your child about play on and around the playground.
Tips for injury-free outdoor fun
- Know the school rules. Depending on the amount of outdoor space, the size of the student body, and staff limitations, your child’s school may limit the games students can play on the playground. Games like tag and unsupervised sports such as dodgeball are increasingly being banned due to injuries. Find out what your school’s playground rules are and explain them to your child. If your child wants to take a ball, jump rope, or other equipment to share with friends, be sure to check with the school first.
- Find out about supervision. Adequate supervision is the best way to reduce the number of injuries on the playground. The National Program for Playground Safety advises that children be supervised when playing on playground structures, whether these are located in your home, in the community, or at school. Adults in charge should be able to direct children to use playground equipment properly and respond to emergencies appropriately. Make sure your child is supervised on the playground at all times, at and outside of school.
- Know what is age appropriate. The Consumer Product Safety Commission requires that playground equipment be separated for 2-5 year-olds and 5-12 year-olds. It is recommended that children be further separated according to age group: Pre K, grades K-2, grades 3-4, and grades 5-6. Most schools separate outdoor play times by grades. If you take your child to the playground, make sure he is playing on equipment that he is able to use comfortably. Encourage your child to use equipment appropriately and to take turns. Beware of clothing that could get caught or that your child could trip over, such as untied shoelaces, hoods, or drawstrings.
- Keep an eye on the equipment. Before you let your child play on playground structures, check the equipment and its surrounding area to make sure that it is safe. Check the structure to make sure it is not damaged or broken. Look out for any objects that can cause injuries, such as broken glass, rocks, animal feces, or other debris. According to the National Program for Playground Safety, the surface of a play structure should be of loose or soft materials that will cushion a fall, such as wood chips or rubber.
- Know how to respond. Even a fall of one foot can cause a broken bone or concussion. If your child is injured while playing on the playground, check him carefully for bruises. If you are not sure of the extent of your child’s injury, take him to the pediatrician or the emergency room. If you think your child may have a head or neck injury or if he appears to have a broken bone and you are afraid to move him, call for help.
For a playground safety checklist, visit the Consumer Product Safety Commission at http://www.cpsc.gov
This information was compiled by Sunindia Bhalla, and reviewed by the Program Staff of the Massachusetts Children’s Trust Fund.
Analogue Tachographs: A Brief History
Note: Since May 2006, Analogue Tachographs are being phased out in favour of digital versions which record data on a smart card. Find out more about Digital Tachographs.
A tachograph displays vehicle speed and makes a record of all speeds during an entire trip. The name ‘tachograph’ comes from the graphical recording of the tachometer or engine speed.
Analogue units record the driver’s periods of duty on a waxed paper disc – a tachograph chart. An ink pen records the engine speed on circular graph paper that automatically advances according to the internal clock of the tachograph. This graph paper is removed on a regular basis and maintained by the fleet owner for government records.
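The geometry of that trace is easy to sketch: the chart's rotation maps time of day to an angle, and the recorded speed sets how far the pen sits from the center. The chart dimensions below are invented for illustration:

```python
import math

def chart_point(hour_of_day, speed_kmh, r_min=20.0, r_max=60.0, v_max=125.0):
    """Map a (time, speed) sample to (x, y) millimetres on a 24-hour
    circular chart: one revolution per day, radius proportional to speed."""
    theta = 2 * math.pi * hour_of_day / 24.0
    r = r_min + (r_max - r_min) * min(speed_kmh, v_max) / v_max
    return (r * math.cos(theta), r * math.sin(theta))

# A sample taken at 14:30 while travelling at 80 km/h:
print(chart_point(14.5, 80.0))
```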
In the 1950s, there were an increasing number of road accidents attributed to sleep-deprived and tired truck drivers. Concerns for safety led to the rapid spread of the tachograph in the commercial vehicle market, but at this point it was voluntary and not legislated.
Fleet operators then found that tachographs helped them to monitor driver hours more reliably, and safety also improved. In Europe, use of tachographs has been compulsory for all trucks over 3.5 tonnes since 1970.
For safety reasons, most countries also have limits on the working hours of drivers of commercial vehicles. Tachographs are used to monitor drivers’ working hours and ensure that appropriate breaks are taken.
Legislation relating to Tachographs has been in force in the UK for 16 years. The tachograph is now an indispensable tool for managing fleets and ensuring the safety of drivers of commercial vehicles.
LESSON ONE: Transforming Everyday Objects
Marcel Duchamp: Bicycle Wheel, bicycle wheel on wooden stool, 1963 (Henley-on-Thames, Richard Hamilton Collection); © 2007 Artists Rights Society (ARS), New York/ADAGP, Paris, photo credit: Cameraphoto/Art Resource, NY
Man Ray: Rayograph, gelatin silver print, 29.4×23.2 cm, 1923 (New York, Museum of Modern Art); © 2007 Man Ray Trust/Artists Rights Society (ARS), New York/ADAGP, Paris, photo © The Museum of Modern Art, New York
Meret Oppenheim: Object (Le Déjeuner en fourrure), fur-lined cup, diam. 109 mm, saucer, diam. 237 mm, spoon, l. 202 mm, overall, h. 73 mm, 1936 (New York, Museum of Modern Art); © 2007 Artists Rights Society (ARS), New York/ProLitteris, Zurich, photo © Museum of Modern Art/Licensed by SCALA/Art Resource, NY
Dada and Surrealist artists questioned long-held assumptions about what a work of art should be about and how it should be made. Rather than creating every element of their artworks, they boldly selected everyday, manufactured objects and either modified and combined them with other items or simply selected them and called them “art.” In this lesson students will consider their own criteria for something to be called a work of art, and then explore three works of art that may challenge their definitions.
Students will consider their own definitions of art.
Students will consider how Dada and Surrealist artists challenged conventional ideas of art.
Students will be introduced to Readymades and photograms.
Ask your students to take a moment to think about what makes something a work of art. Does art have to be seen in a specific place? Where does one encounter art? What is art supposed to accomplish? Who is it for?
Ask your students to create an individual list of their criteria. Then, divide your students into small groups to discuss and debate the results and come up with a final list. Finally, ask each group to share with the class what they think is the most important criteria and what is the most contested criteria for something to be called a work of art. Write these on the chalkboard for the class to review and discuss.
Show your students the image of Bicycle Wheel. Ask your students if Marcel Duchamp’s sculpture fulfills any of their criteria for something to be called a work of art. Ask them to support their observations with visual evidence.
Inform your students that Duchamp made this work by fastening a Bicycle Wheel to a kitchen stool. Ask your students to consider the fact that Duchamp rendered these two functional objects unusable. Make certain that your students notice that there is no tire on the Bicycle Wheel.
To challenge accepted notions of art, Duchamp selected mass-produced, often functional objects from everyday life for his artworks, which he called Readymades. He did this to shift viewers’ engagement with a work of art from what he called the “retinal” (there to please the eye) to the “intellectual” (“in the service of the mind.”) [H. H. Arnason and Marla F. Prather, History of Modern Art: Painting, Sculpture, Architecture, Photography (Fourth Edition) (New York: Harry N. Abrams, Inc., 1998), 274.] By doing so, Duchamp subverted the traditional notion that beauty is a defining characteristic of art.
Inform your students that Bicycle Wheel is the third version of this work. The first, now lost, was made in 1913, almost forty years earlier. Because the materials Duchamp selected to be Readymades were mass-produced, he did not consider any Readymade to be “original.”
Ask your students to revisit their list of criteria for something to be called a work of art. Ask them to list criteria related specifically to the visual aspects of a work of art (such as “beauty” or realistic rendering).
Duchamp said of Bicycle Wheel, “In 1913 I had the happy idea to fasten a Bicycle Wheel to a kitchen stool and watch it turn.” [John Elderfield, ed., Studies in Modern Art 2: Essays on Assemblage (New York: The Museum of Modern Art, 1992), 135.] Bicycle Wheel is a kinetic sculpture that depends on motion for effect. Although Duchamp selected items for his Readymades without regard to their so-called beauty, he said, “To see that wheel turning was very soothing, very comforting . . . I enjoyed looking at it, just as I enjoy looking at the flames dancing in a fireplace.” [Francis M. Naumann, The Mary and William Sisler Collection (New York: The Museum of Modern Art, 1984), 160.] By encouraging viewers to spin Bicycle Wheel, Duchamp challenged the common expectation that works of art should not to be touched.
Show your students Rayograph. Ask your students to name recognizable shapes in this work. Ask them to support their findings with visual evidence. How do they think this image was made?
Inform your students that Rayograph was made by Man Ray, an American artist who was well-known for his portrait and fashion photography. Man Ray transformed everyday objects into mysterious images by placing them on photographic paper, exposing them to light, and oftentimes repeating this process with additional objects and exposures. When photographic paper is developed in chemicals, the areas blocked from light by objects placed on the paper earlier on will remain light, and the areas exposed to light will turn black. Man Ray discovered the technique of making photograms by chance, when he placed some objects in his darkroom on light-sensitive paper and accidentally exposed them to light. He liked the resulting images and experimented with the process for years to come. He likened the technique, now known as the photogram, to “painting with light,” calling the images rayographs, after his assumed name.
Now that your students have identified some recognizable objects used to make Rayograph, ask them to consider which of those objects might have been translucent and which might have been opaque, based on the tone of the shapes in the photogram.
Now show your students Meret Oppenheim’s sculpture Object (Déjeuner en fourrure). Both Rayograph and Object were made using everyday objects and materials not traditionally used for making art, which, when combined, challenge ideas of reality in unexpected ways. Ask your students what those everyday objects are and how they have been transformed by the artists.
Ask your students to name some traditional uses for the individual materials (cup, spoon, saucer, fur) used to make Object. Ask your students what choices they think Oppenheim made to transform these materials and objects.
In 1936, the Swiss artist Oppenheim was at a café in Paris with her friends Pablo Picasso and Dora Maar. Oppenheim was wearing a bracelet she had made from fur-lined, polished metal tubing. Picasso joked that one could cover anything with fur, to which Oppenheim replied, “Even this cup and saucer.” [Bice Curiger, Meret Oppenheim: Defiance in the Face of Freedom (Zurich, Frankfurt, New York: PARKETT Publishers Inc., 1989), 39.] Her tea was getting cold, and she reportedly called out, “Waiter, a little more fur!” Soon after, when asked to participate in a Surrealist exhibition, she bought a cup, saucer, and spoon at a department store and lined them with the fur of a Chinese gazelle. [Josephine Withers, “The Famous Fur-Lined Teacup and the Anonymous Meret Oppenheim” (New York: Arts Magazine, Vol. 52, November 1977), 88-93.]
Duchamp, Oppenheim, and Man Ray transformed everyday objects into Readymades, Surrealist objects, and photograms. Ask your students to review the images of the three artworks in this lesson and discuss the similarities and differences between these artists’ transformation of everyday objects.
Art and Controversy
At the time they were made, works of art like Duchamp’s Bicycle Wheel and Oppenheim’s Object were controversial. Critics called Duchamp’s Readymades immoral and vulgar—even plagiaristic. Overwhelmed by the publicity Object received, Oppenheim sank into a twenty-year depression that greatly inhibited her creative production.
Ask your students to conduct research on a work of art that has recently been met with controversy. Each student should find at least two articles that critique the work of art. Have your students write a one-page summary of the issues addressed in these articles. Students should consider how and why the work challenged and upset critics. Was the controversial reception related to the representation, the medium, the scale, the cost, or the location of the work? After completing the assignment, ask your students to share their findings with the class. Keep a list of shared critiques among the work’s various receptions.
Make a Photogram
If your school has a darkroom, have your students make photograms. Each student should collect several small objects from school, home, and the outside to place on photographic paper. Their collection should include a range of translucent and opaque objects to allow different levels of light to shine through. Students may want to overlap objects or use their hands to cover parts of the light-sensitive paper. Once the objects are arranged on the paper in a darkroom, have your students expose the paper to light for several seconds (probably about five to ten seconds, depending on the level of light) then develop, fix, rinse, and dry the paper. Allow for a few sheets of photographic paper per student so that they can experiment with different arrangements and exposures. After the photograms are complete, have your students discuss the different results that they achieved. Students may also make negatives of their photograms by placing them on top of a fresh sheet of photographic paper and covering the two with a sheet of glass. After exposing this to light, they can develop the paper to get the negative of the original photogram.
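For classes without darkroom access, the same exposure logic can be simulated numerically. In this toy model (arrays and numbers all invented), opaque objects keep the paper white, translucent ones leave gray, bare paper turns black, and inverting the result mimics the negative-making step above:

```python
import numpy as np

paper = np.zeros((200, 200))            # accumulated exposure; 0 = none yet

opacity = np.zeros_like(paper)          # 0 = clear, 1 = fully opaque
opacity[50:150, 50:150] = 1.0           # an opaque object, e.g. a key
opacity[80:120, 160:190] = 0.5          # a translucent object, e.g. a leaf

light, seconds = 1.0, 8                 # uniform light, short exposure
paper += light * seconds * (1.0 - opacity)

# "Develop": heavily exposed areas go dark, shadowed areas stay light.
print_tone = 1.0 - paper / paper.max()  # 1.0 = white, 0.0 = black
negative = 1.0 - print_tone             # contact-printing the print flips tones
```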
Encourage your students to try FAUXtogram, an activity available on Red Studio, MoMA's Web site for teens.
GROVE ART ONLINE: Suggested Reading
Below is a list of selected articles which provide more information on the specific topics discussed in this lesson.
Asthma and Exercise
What is exercise-induced asthma?
Most people diagnosed with asthma will experience asthma symptoms when exercising. In addition, some who are not diagnosed with asthma will experience asthma symptoms, but only during exercise. This is a condition called exercise-induced asthma.
Long-distance running may aggravate exercise-induced asthma.
Exercise-induced asthma is different from the typical asthma that is triggered by allergens and/or irritants. Some people have both types of asthma, while others only experience exercise-induced asthma.
Asthma is a chronic, inflammatory lung disease that leads to three airway problems: obstruction, inflammation, and hyper-responsiveness. Unfortunately, the basic cause of asthma is still not known.
How does exercise cause asthma symptoms?
When breathing normally, the air that enters the airways is first warmed and moistened by the nasal passages to prevent injury to the delicate lining of the airways. However, for someone with asthma, the airways may be extremely sensitive to allergens, irritants, infection, weather, and/or exercise. When asthma symptoms begin, the airways' muscles constrict and narrow, the lining of the airways begins to swell, and mucus production may increase. When exercising (especially outside in cold weather), the increased breathing in and breathing out through the mouth may cause the airways to dry and cool, which may irritate them and cause the onset of asthma symptoms. In addition, when breathing through the mouth during exercise, a person will inhale more air-borne particles, including pollen, which can trigger asthma.
What are the symptoms of exercise-induced asthma?
Exercise-induced asthma is characterized by asthma symptoms, such as coughing, wheezing, and tightness in the chest, within five to 20 minutes after starting to exercise. Exercise-induced asthma can also include symptoms such as unusual fatigue and feeling short of breath while exercising.
However, exercise should not be avoided because of asthma. In fact, exercise is very beneficial to a person with asthma, improving their airway function by strengthening their breathing muscles. Consult your doctor for more information.
How can exercise-induced asthma be controlled?
Stretching and proper warm-up and cool-down exercises may relieve any chest tightness that occurs with exercising. In addition, breathing through the nose and not the mouth will help warm and humidify the air before it enters the airways, protecting the delicate lining of the airways. Other ways to help prevent an asthma attack due to exercise include the following:
Your doctor may prescribe an inhaled asthma medication to use before exercise, which may also be used after exercise if symptoms occur.
Avoid exercising in very low temperatures.
If exercising during cold weather, wear a scarf over your mouth and nose, so that the air breathed in is warm and easier to inhale.
Avoid exercising when pollen or air pollution levels are high (if allergy plays a role in the asthma).
If inhaling air through the mouth, keep the mouth pursed (lips forming a small "O" close together), so that the air is less cold and dry when it enters the airways during exercise.
Carry an inhaler, just in case of an asthma attack.
Wear an allergy mask during pollen season.
Avoid exercise when experiencing a viral infection.
What sports are recommended for people with asthma?
According to the American Academy of Allergy, Asthma, and Immunology, the recommended sport for people with asthma is swimming, due to the warm, humid environment, the toning of the upper muscles, and the horizontal position (which may actually loosen mucus from the bottom of the lungs). Other recommended activities and sports include:
Sports that may aggravate exercise-induced asthma symptoms include:
However, with proper management and preparation, most people with asthma can participate in any sport.
What is endometriosis?
Endometriosis (say "en-doh-mee-tree-OH-sus") is a problem many women have during their childbearing years. It means that a type of tissue that lines your uterus is also growing outside your uterus. This does not always cause symptoms. And it usually isn't dangerous. But it can cause pain and other problems.
The clumps of tissue that grow outside your uterus are called implants. They usually grow on the ovaries, the fallopian tubes, the outer wall of the uterus, the intestines, or other organs in the belly. In rare cases they spread to areas beyond the belly.
How does endometriosis cause problems?
Your uterus is lined with a type of tissue called endometrium (say "en-doh-MEE-tree-um"). Each month, your body releases hormones that cause the endometrium to thicken and get ready for an egg. If you get pregnant, the fertilized egg attaches to the endometrium and starts to grow. If you do not get pregnant, the endometrium breaks down, and your body sheds it as blood. This is your menstrual period.
When you have endometriosis, the implants of tissue outside your uterus act just like the tissue lining your uterus. During your menstrual cycle, they get thicker, then break down and bleed. But the implants are outside your uterus, so the blood cannot flow out of your body. The implants can get irritated and painful. Sometimes they form scar tissue or fluid-filled sacs (cysts). Scar tissue may make it hard to get pregnant.
What causes endometriosis?
Experts don't know what causes endometrial tissue to grow outside your uterus. But they do know that the female hormone estrogen makes the problem worse. Women have high levels of estrogen during their childbearing years. It is during these years—usually from their teens into their 40s—that women have endometriosis. Estrogen levels drop when menstrual periods stop (menopause). Symptoms usually go away then.
What are the symptoms?
The most common symptoms are:
- Pain. Where it hurts depends on where the implants are growing. You may have pain in your lower belly, your rectum or vagina, or your lower back. You may have pain only before and during your periods or all the time. Some women have more pain during sex, when they have a bowel movement, or when their ovaries release an egg (ovulation).
- Abnormal bleeding. Some women have heavy periods, spotting or bleeding between periods, bleeding after sex, or blood in their urine or stool.
- Trouble getting pregnant (infertility). This is the only symptom some women have.
Endometriosis varies from woman to woman. Some women don't know that they have it until they go to see a doctor because they can't get pregnant or have a procedure for another problem. Some have mild cramping that they think is normal for them. In other women, the pain and bleeding are so bad that they aren't able to work or go to school.
How is endometriosis diagnosed?
Many different problems can cause painful or heavy periods. To find out if you have endometriosis, your doctor will:
- Ask questions about your symptoms, your periods, your past health, and your family history. Endometriosis sometimes runs in families.
- Do a pelvic exam. This may include checking both your vagina and rectum.
If it seems like you have endometriosis, your doctor may suggest that you try medicine for a few months. If you get better using medicine, you probably have endometriosis.
To find out if you have a cyst on an ovary, you might have an imaging test like an ultrasound, an MRI, or a CT scan. These tests show pictures of what is inside your belly.
The only way to be sure you have endometriosis is to have a type of surgery called laparoscopy (say "lap-uh-ROSS-kuh-pee"). During this surgery, the doctor puts a thin, lighted tube through a small cut in your belly. This lets the doctor see what is inside your belly. If the doctor finds implants, scar tissue, or cysts, he or she can remove them during the same surgery.
How is it treated?
There is no cure for endometriosis, but there are good treatments. You may need to try several treatments to find what works best for you. With any treatment, there is a chance that your symptoms could come back.
Treatment choices depend on whether you want to control pain or you want to get pregnant. For pain and bleeding, you can try medicines or surgery. If you want to get pregnant, you may need surgery to remove the implants.
Treatments for endometriosis include:
- Over-the-counter pain medicines like ibuprofen (such as Advil or Motrin) or naproxen (such as Aleve). These medicines are called anti-inflammatory drugs, or NSAIDs. They can reduce bleeding and pain.
- Birth control pills. They are the best treatment to control pain and shrink implants. Most women can use them safely for years. But you cannot use them if you want to get pregnant.
- Hormone therapy. This stops your periods and shrinks implants. But it can cause side effects, and pain may come back after treatment ends. Like birth control pills, hormone therapy will keep you from getting pregnant.
- Laparoscopy to remove implants and scar tissue. This may reduce pain, and it may also help you get pregnant.
As a last resort for severe pain, some women have their uterus and ovaries removed (hysterectomy and oophorectomy). If you have your ovaries taken out, your estrogen level will drop and your symptoms will probably go away. But you may have symptoms of menopause, and you will not be able to get pregnant.
If you are getting close to menopause, you may want to try to manage your symptoms with medicines rather than surgery. Endometriosis usually stops causing problems when you stop having periods.
By: Healthwise Staff | Last Revised: July 7, 2011
Medical Review: Adam Husney, MD - Family Medicine; Kirtly Jones, MD - Obstetrics and Gynecology
What is pancreatitis?
Pancreatitis is inflammation of the pancreas, an organ in your belly that makes the hormones insulin and glucagon. These two hormones control how your body uses the sugar found in the food you eat. Your pancreas also makes other hormones and enzymes that help you break down food.
Usually the digestive enzymes stay in one part of the pancreas. But if these enzymes leak into other parts of the pancreas, they can irritate it and cause pain and swelling. This may happen suddenly or over many years. Over time, it can damage and scar the pancreas.
What causes pancreatitis?
Most cases are caused by gallstones or alcohol abuse. The disease can also be caused by an injury, an infection, or certain medicines.
Long-term, or chronic, pancreatitis may occur after one attack. But it can also happen over many years. In Western countries, alcohol abuse causes most chronic cases.
In some cases doctors don't know what caused the disease.
What are the symptoms?
The main symptom of pancreatitis is medium to severe pain in the upper belly. Pain may also spread to your back.
Some people have other symptoms too, such as nausea, vomiting, a fever, and sweating.
How is pancreatitis diagnosed?
Your doctor will do a physical exam and ask you questions about your symptoms and past health. You may also have blood tests to see if your levels of certain enzymes are higher than normal. This can mean that you have pancreatitis.
Your doctor may also want you to have a complete blood count (CBC), a liver test, or a stool test.
Other tests include an MRI, a CT scan, or an ultrasound of your belly (abdominal ultrasound) to look for gallstones.
A test called endoscopic retrograde cholangiopancreatography, or ERCP, may help your doctor see if you have chronic pancreatitis. During this test, the doctor can also remove gallstones that are stuck in the bile duct.
How is it treated?
Most attacks of pancreatitis need treatment in the hospital. Your doctor will give you pain medicine and fluids through a vein (IV) until the pain and swelling go away.
Fluids and air can build up in your stomach when there are problems with your pancreas. This buildup can cause severe vomiting. If buildup occurs, your doctor may place a tube through your nose and into your stomach to remove the extra fluids and air. This will help make the pancreas less active and swollen.
Although most people get well after an attack of pancreatitis, problems can occur. Problems may include cysts, infection, or death of tissue in the pancreas.
You may need surgery to remove your gallbladder or a part of the pancreas that has been damaged.
If your pancreas has been severely damaged, you may need to take insulin to help your body control blood sugar. You also may need to take pancreatic enzyme pills to help your body digest fat and protein.
If you have chronic pancreatitis, you will need to follow a low-fat diet and stop drinking alcohol. You may also take medicine to manage your pain. Making changes like these may seem hard. But with planning, talking with your doctor, and getting support from family and friends, these changes are possible.
By: Healthwise Staff | Last Revised: October 31, 2011
Medical Review: Kathleen Romito, MD - Family Medicine; Peter J. Kahrilas, MD - Gastroenterology
| 3.137856 |
Gary McConkey from Knightdale, N.C., writes:
I often park my car in the sun. When I get back inside, it feels warmer than the outside temperature. Why is that?
This is a good example of the “greenhouse effect,” which is essential to life on Earth. Without it, our planet wouldn’t be warm enough for living things to survive.
In the case of a car, the sun’s rays enter through the window glass. Some of that energy is absorbed by interior components, such as the dashboard, seats, and carpeting. But the heat these components radiate back is at a longer wavelength than the sunlight that came in through the glass, and glass does not let as much of that longer-wavelength radiation pass back out. As a result, more energy goes into the car than comes out, and the inside temperature increases.
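As a rough illustration, here is a toy radiative-balance sketch in Python. The numbers (absorbed solar power and the fraction of infrared that escapes through the glass) are invented for illustration; a real car cabin also involves convection and conduction.

```python
# Toy equilibrium model: absorbed sunlight is balanced by the fraction of
# thermal (infrared) radiation that escapes back out through the glass.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def equilibrium_temp_kelvin(absorbed_solar, ir_escape_fraction):
    # At equilibrium: absorbed_solar = ir_escape_fraction * SIGMA * T**4
    return (absorbed_solar / (ir_escape_fraction * SIGMA)) ** 0.25

print(equilibrium_temp_kelvin(600, 1.0))  # ~321 K if all infrared escaped
print(equilibrium_temp_kelvin(600, 0.7))  # ~351 K when glass traps 30% of it
```

Trapping even a modest fraction of the outgoing infrared shifts the equilibrium temperature noticeably upward, which is the effect the column describes.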
| 3.311596 |
Work Out With Your Dog - How Animal Agility Training Can Burn Calories For You
The University of Massachusetts studied human oxygen consumption during canine agility training. John Ales
Vigorous Exercise for Dog and Human
Researchers at the University of Massachusetts Department of Kinesiology have studied the physical demands placed on humans during canine agility training, and their findings were recently highlighted on the Zoom Room Dog Agility Training Center's website.
The researchers looked at oxygen consumption (using a face mask and battery-operated, portable metabolic system that measures breath-by-breath gas exchange) as well as heart rate (detected and recorded using a Polar heart rate monitor). The data collected was translated into Metabolic Equivalents, or METs, a way of comparing how much energy a person expends at rest versus during a given activity.
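For readers who want to turn a MET value into calories, a standard approximation is that 1 MET is about 1 kcal per kilogram of body weight per hour. The MET value below is an assumed example, not a figure reported by the study.

```python
# Rough calorie estimate from a MET value (1 MET ≈ 1 kcal per kg per hour).
def calories_burned(met, weight_kg, hours):
    return met * weight_kg * hours

# e.g., a 70 kg handler at an assumed 5 METs for a half-hour agility session:
print(calories_burned(met=5, weight_kg=70, hours=0.5))  # ≈ 175 kcal
```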
| 3.019903 |
Geology and Geography Information about Portage County Wisconsin
We will provide as much historical map information as possible about the county. See also the Google Map of Our Museums.
How Wisconsin Was Surveyed
The methods used to survey land are largely unknown to the general public. But the Wisconsin Public Land Survey Records: Original Field Notes and Plat Maps site offers a complete explanation of these methods as well as access to the original field notes and maps compiled by the surveyors.
This section has links to our maps as well as external links to free printable map providers.
- Map of Portage County (33k).
- Map of Wisconsin (285k).
- Map of Central Wisconsin (30k).
- Map of Townships (7k).
A Portage County Plat Book for 1895 has been photographed using a digital camera.
The following maps are from "Page-Size Maps of Wisconsin" published by: University of Wisconsin - Extension and Wisconsin Geological and Natural History Survey, 3817 Mineral Point Road, Madison WI., 53705-5100.
- Bedrock Geology of Wisconsin (168k).
- Ice Age Deposits of Wisconsin (150k).
- Early Vegetation of Wisconsin (126k).
- Landforms of Wisconsin (109k).
- Soil Regions of Wisconsin (180k).
The following maps are in PDF format. Maps are also available from the Wisconsin Historical Society.
- British Era fur trading posts 1760-1815.
- American Era fur trading posts 1815-1850.
- American Forts and Exploration ca 1820.
- Military Roads 1815-1862.
- Wisconsin counties 1835.
- Wisconsin counties 1850.
- Wisconsin counties 1870.
- Wisconsin counties 1901.
- Wisconsin Railroads 1865.
- Wisconsin Railroads 1873.
- Wisconsin Railroads 1936.
- From National Atlas, a government agency, are printable maps of all the states and more.
- This link is located in France and provides free printable maps covering all countries.
The Society will embark on a project in the summer of 2009, continuing onward, to provide county maps with geotag information locating the following (a sample record format is sketched after this list):
- Small Communities.
- Cemetery Locations.
- Locations of School Houses, one-room and others of historic value.
- Catholic Churches.
- Lutheran Churches.
- Other Churches.
- Historic sites within the communities.
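A minimal sketch of what one such geotag record could look like, using the common GeoJSON convention. The coordinates and the site name are hypothetical, not actual project data.

```python
# Hypothetical geotag record for one mapped site (GeoJSON-style).
import json

feature = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [-89.57, 44.52],  # longitude, latitude (near Stevens Point)
    },
    "properties": {
        "name": "Example Cemetery",      # invented name
        "category": "Cemetery Location",
    },
}
print(json.dumps(feature, indent=2))
```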
Portage County Ice Age Trail
Here is a list of all the Historical Markers in the State of Wisconsin.
| 3.009242 |
Hepatitis A is a virus that can infect the liver. In most cases, the infection goes away on its own and doesn't lead to long-term liver problems. In rare cases, it can be more serious.
The hepatitis A virus is found in the stool of an infected person. It is spread when a person eats food or drinks water that has come in contact with infected stool.
Sometimes a group of people who eat at the same restaurant can get hepatitis A. This can happen when an employee with hepatitis A doesn't wash his or her hands well after using the bathroom and then prepares food. It can also happen when a food item is contaminated by raw sewage or by an infected garden worker.
The disease can also spread in day care centers. Children, especially those in diapers, may get stool on their hands and then touch objects that other children put into their mouths. And workers can spread the virus if they don't wash their hands well after changing a diaper.
Some things can raise your risk of getting hepatitis A, such as eating raw oysters or undercooked clams. If you're traveling in a country where hepatitis A is common, you can lower your chances of getting the disease by avoiding uncooked foods and untreated tap water.
You may also be at risk if you live with or have sex with someone who has hepatitis A.
After you have been exposed to the virus, it can take from 2 to 7 weeks before you see any signs of it. Symptoms usually last for about 2 months but may last longer.
Common symptoms are:
- Feeling very tired.
- Nausea or vomiting.
- Loss of appetite.
- Fever.
- Pain on the right side of the belly, under the rib cage.
- Dark urine.
- Yellowing of the skin and eyes (jaundice).
All forms of hepatitis have similar symptoms. Only a blood test can tell if you have hepatitis A or another form of the disease.
Call your doctor if you have reason to think that you have hepatitis A or have been exposed to it. (For example, did you recently eat in a restaurant where a server was found to have hepatitis A? Has there been an outbreak at your child's day care? Does someone in your house have hepatitis A?)
Your doctor will ask questions about your symptoms and where you have eaten or traveled. You may have blood tests if your doctor thinks you have the virus. These tests can tell if your liver is inflamed and whether you have antibodies to the hepatitis A virus. These antibodies prove that you have been exposed to the virus.
Hepatitis A goes away on its own in most cases. Most people get well within a few months. While you have hepatitis:
- Get plenty of rest.
- Do not drink alcohol.
- Check with your doctor before taking any new medicines, since many drugs are processed by the liver.
If hepatitis A causes more serious illness, you may need to stay in the hospital to prevent problems while your liver heals.
Be sure to take steps to avoid spreading the virus to others.
You can only get the hepatitis A virus once. After that, your body builds up a defense against it.
American Liver Foundation (ALF)
39 Broadway, Suite 2700
New York, NY 10006
The American Liver Foundation (ALF) funds research and informs the public about liver disease. A nationwide network of chapters and support groups exists to help people who have liver disease and to help their families. ALF also sponsors a national organ-donor program to increase public awareness of the continuing need for organs. You can send an email by completing a form on the contact page on the ALF website: www.liverfoundation.org/contact.
Centers for Disease Control and Prevention (CDC): Division of Viral Hepatitis
The Division of Viral Hepatitis provides information about viral hepatitis online and by telephone 24 hours a day. Pamphlets also are available. Information is available in English and in Spanish.
Hepatitis Foundation International
504 Blick Drive
Silver Spring, MD 20904-2901
This organization is a grassroots communication and support network for people with viral hepatitis. It provides education to patients, professionals, and the public about the prevention, diagnosis, and treatment of viral hepatitis. The organization will make referrals to local doctors and support groups.
Immunization Action Coalition
1573 Selby Avenue
St. Paul, MN 55104
The Immunization Action Coalition (IAC) works to raise awareness of the need for immunizations to help prevent disease. The website has videos and photos about how vaccines work and the diseases the vaccines prevent. The site also offers information about vaccine safety and common concerns and myths about vaccines.
National Digestive Diseases Information Clearinghouse
2 Information Way
Bethesda, MD 20892-3570
This clearinghouse is a service of the U.S. National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), part of the U.S. National Institutes of Health. The clearinghouse answers questions; develops, reviews, and sends out publications; and coordinates information resources about digestive diseases. Publications produced by the clearinghouse are reviewed carefully for scientific accuracy, content, and readability.
Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine
Specialist Medical Reviewer: W. Thomas London, MD - Hepatology
Last Revised: August 30, 2012
| 3.552116 |
Peoria Tribe of Indians of Oklahoma
The Peoria Tribe of Indians of Oklahoma is a confederation of Kaskaskia, Peoria, Piankeshaw and Wea Indians united into a single tribe in 1854. The tribes which constitute The Confederated Peorias, as they then were called, originated in the lands bordering the Great Lakes and drained by the mighty Mississippi. They are Illinois or Illini Indians, descendants of those who created the great mound civilizations in the central United States two thousand to three thousand years ago.
Forced from their ancestral lands in Illinois, Michigan, Ohio and Missouri, the Peorias were relocated first in Missouri, then in Kansas and, finally, in northeastern Oklahoma. There, in Miami, Ottawa County, Oklahoma is their tribal headquarters.
The Peoria Tribe of Indians of Oklahoma is a federally-recognized sovereign Indian tribe, functioning under the constitution and by-laws approved by the Secretary of the U.S. Department of the Interior on August 13, 1997. Under Article VIII, Section 1 of the Peoria Constitution, the Peoria Tribal Business Committee is empowered to research and pursue economic and business development opportunities for the Tribe.
The increased pressure from white settlers in the 1840's and 1850's in Kansas brought cooperation among the Peoria, Kaskaskia, Piankashaw and Wea Tribes to protect their remaining land holdings. By the Treaty of May 30, 1854, 10 Stat. 1082, the United States recognized the cooperation and consented to their formal union as the Confederated Peoria. In addition to this recognition, the treaty also provided for the disposition of the lands of the constituent tribes set aside by the treaties of the 1830’s; ten sections were to be held in common by the new Confederation, each tribal member received an allotment of 160 acres; the remaining or “surplus” land was to be sold to settlers and the proceeds to be used by the tribes.
The Civil War caused considerable turmoil among all the people of Kansas, especially the Indians. After the war, most members of the Confederation agreed to remove to the Indian Territory under the provisions of the so-called Omnibus Treaty of February 23, 1867, 15 Stat. 513. Some of the members elected at this time to remain in Kansas, separate from the Confederated Tribes, and become citizens of the United States.
The lands of the Confederation members in the Indian Territory were subject to the provisions of the General Allotment Act of 1887. The allotment of all the tribal land was made by 1893, and by 1915, the tribe had no tribal lands or any lands in restricted status. Under the provisions of the Oklahoma Indian Welfare Act of 1936, 49 Stat. 1967, the tribes adopted a constitution and by-laws, which was ratified on October 10, 1939, and they became known as the Peoria Tribe of Indians of Oklahoma.
As a result of the “Termination Policy” of the Federal Government in the 1950’s, the Federal Trust relationship over the affairs of the Peoria Tribe of Indians of Oklahoma and its members, except for claims then pending before the Indian Claims Commission and Court of claims, was ended on August 2, 1959, pursuant to the provisions of the Act of August 2, 1956, 709 Stat. 937, and Federal services were no longer provided to the individual members of the tribe. More recently, however, the Peoria Tribe of Indians of Oklahoma was reinstated as a federally recognized tribe by the Act of May 15, 1978, 92 Stat. 246.
| 3.303698 |
Q. What's wrong with hot dogs?
A. Nitrite additives in hot dogs form carcinogens.
Three different studies have come out in the past year, finding that the consumption of hot dogs can be a risk factor for childhood cancer.
Peters et al. studied the relationship between the intake of certain foods and the risk of leukemia in children from birth to age 10 in Los Angeles County between 1980 and 1987. The study found that children eating more than 12 hot dogs per month have nine times the normal risk of developing childhood leukemia. A strong risk for childhood leukemia also existed for those children whose fathers' intake of hot dogs was 12 or more per month.
Researchers Sarasua and Savitz studied childhood cancer cases in Denver and found that children born to mothers who consumed hot dogs one or more times during pregnancy had approximately double the risk of developing brain tumors. Children who ate hot dogs one or more times per week were also at higher risk of brain cancer.
Bunin et al. also found that maternal consumption of hot dogs during pregnancy was associated with an excess risk of childhood brain tumors.
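As a quick illustration of what a figure like "nine times the normal risk" means: relative risk compares the rate of disease among the exposed with the rate among the unexposed. The counts in this sketch are invented for illustration and are not data from the studies above.

```python
# Relative risk = risk among the exposed / risk among the unexposed.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Invented counts: 9 cases per 1,000 exposed children vs. 1 per 1,000 unexposed.
print(relative_risk(9, 1000, 1, 1000))  # -> 9.0, i.e. "nine times the risk"
```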
Q. How could hot dogs cause cancer?
A. Hot dogs contain nitrites, which are used as preservatives, primarily to combat botulism. During the cooking process, nitrites combine with amines naturally present in meat to form carcinogenic N-nitroso compounds. It has also been found that nitrites can combine with amines in the human stomach to form N-nitroso compounds. These compounds are known carcinogens and have been associated with cancer of the oral cavity, urinary bladder, esophagus, stomach and brain.
Q. Some vegetables contain nitrites, do they cause cancer too?
A. It is true that nitrites are commonly found in many green vegetables, especially spinach, celery and green lettuce. However, the consumption of vegetables appears to be effective in reducing the risk of cancer. How is this possible? The explanation lies in the formation of N-nitroso compounds from nitrites and amines. Nitrite-containing vegetables also have Vitamin C and D, which serve to inhibit the formation of N-nitroso compounds. Consequently, vegetables are quite safe and serve to reduce your cancer risk.
Q. Do other food products contain nitrites?
A. Yes, all cured meats contain nitrites. These include bacon and fish.
Q. Are all hot dogs a risk for childhood cancer?
A. No. Not all hot dogs on the market contain nitrites. Because of modern
refrigeration methods, nitrites are now used more for the red color they
produce (which is
associated with freshness) than for preservation. Nitrite-free hot dogs,
while they taste the same as nitrite hot dogs, have a brownish color
that has limited
their popularity among consumers. When cooked, nitrite-free hot dogs
are perfectly safe and healthy.
HERE ARE FOUR THINGS THAT YOU CAN DO:
- Do not buy hot dogs containing nitrite. It is especially important that children and potential parents do not consume 12 or more of these hot dogs per month.
- Request that your supermarket have nitrite-free hot dogs.
- Contact your local school board and find out whether children are being served nitrite hot dogs in the cafeteria.
- Write the FDA and express your concern that nitrite hot dogs are not labeled for their cancer risk to children. You can refer to the petition to ban nitrite hot dogs, docket #: 95P 0112/CP1.
Cancer Prevention Coalition
School of Public Health, M/C 922
University of Illinois at Chicago
2121 West Taylor Street
Chicago, IL 60612
Tel: (312) 996-2297, Fax: (312) 413-9898
1. Peters J, et al. "Processed meats and risk of childhood leukemia (California, USA)," Cancer Causes & Control 5: 195-202, 1994.
2. Sarasua S, Savitz D. "Cured and broiled meat consumption in relation to childhood cancer: Denver, Colorado (United States)," Cancer Causes & Control 5: 141-148, 1994.
3. Bunin GR, et al. "Maternal diet and risk of astrocytic glioma in children: a report from the Children's Cancer Group (United States and Canada)," Cancer Causes & Control 5: 177-187, 1994.
4. Lijinsky W, Epstein S. "Nitrosamines as environmental carcinogens," Nature 225 (5227): 21-23, 1970.
| 3.068424 |
PPPL scientists propose a solution to a critical barrier to producing fusion
Posted April 23, 2012; 05:00 p.m.
Physicists from the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) have discovered a possible solution to a mystery that has long baffled researchers working to harness fusion. If confirmed by experiment, the finding could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power.
An in-depth analysis by PPPL scientists zeroed in on tiny, bubble-like islands that appear in the hot, charged gases — or plasmas — during experiments. These minute islands collect impurities that cool the plasma. And these islands, the scientists report in the April 20 issue of the journal Physical Review Letters, are at the root of a longstanding problem known as the "density limit" that can prevent fusion reactors from operating at maximum efficiency.
Fusion occurs when plasmas become hot and dense enough for the atomic nuclei contained within the hot gas to combine and release energy. But when the plasmas in experimental reactors called tokamaks reach the mysterious density limit, they can spiral apart into a flash of light.
"The big mystery is why adding more heating power to the plasma doesn't get you to higher density," said David Gates, a principal research physicist at PPPL and co-author of the proposed solution with Luis Delgado-Aparicio, a postdoctoral fellow at PPPL and a visiting scientist at the Massachusetts Institute of Technology's Plasma Science Fusion Center. "This is critical because density is the key parameter in reaching fusion and people have been puzzling about this for more than 30 years."
A discovery by Princeton Plasma Physics Laboratory physicists Luis Delgado-Aparicio (left) and David Gates could help scientists eliminate a major impediment to the development of fusion as a clean and abundant source of energy for producing electric power. (Photo by Elle Starkman)
The scientists hit upon their theory in what Gates called "a 10-minute 'Aha!' moment." Working out equations on a whiteboard in Gates' office, the physicists focused on the islands and the impurities that drive away energy. The impurities stem from particles that the plasma kicks up from the tokamak wall. "When you hit this magical density limit, the islands grow and coalesce and the plasma ends up in a disruption," said Delgado-Aparicio.
These islands actually inflict double damage, the scientists said. Besides cooling the plasma, the islands act as shields that block out added power. The balance tips when more power escapes from the islands than researchers can pump into the plasma through a process called ohmic heating — the same process that heats a toaster when electricity passes through it. When the islands grow large enough, the electric current that helps to heat and confine the plasma collapses, allowing the plasma to fly apart.
Gates and Delgado-Aparicio now hope to test their theory with experiments on a tokamak called Alcator C-Mod at MIT, and on the DIII-D tokamak at General Atomics in San Diego. Among other things, they intend to see if injecting power directly into the islands will lead to higher density. If so, that could help future tokamaks reach the extreme density and 100-million-degree temperatures that fusion requires.
The scientists' theory represents a fresh approach to the density limit, which also is known as the "Greenwald limit" after MIT physicist Martin Greenwald, who has derived an equation that describes it. Greenwald has another potential explanation for the source of the limit. He thinks it may occur when turbulence creates fluctuations that cool the edge of the plasma and squeeze too much current into too little space in the core of the plasma, causing the current to become unstable and crash. "There is a fair amount of evidence for this," Greenwald said. However, he added, "We don't have a nice story with a beginning and end and we should always be open to new ideas."
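The article does not reproduce the equation itself. For reference, the commonly quoted form of the Greenwald density limit is n_G = I_p / (π a²), where n_G is the line-averaged electron density in units of 10^20 per cubic meter, I_p is the plasma current in megaamperes, and a is the minor radius of the plasma in meters.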
Gates and Delgado-Aparicio pieced together their model from a variety of clues that have developed in recent decades. Gates first heard of the density limit while working as a postdoctoral fellow at the Culham Centre for Fusion Energy in Abingdon, England, in 1993. The limit had previously been named for Culham scientist Jan Hugill, who described it to Gates in detail.
Separately, papers on plasma islands were beginning to surface in scientific circles. French physicist Paul-Henri Rebut described radiation-driven islands in a mid-1980s conference paper, but not in a periodical. German physicist Wolfgang Suttrop speculated a decade later that the islands were associated with the density limit. "The paper he wrote was actually the trigger for our idea, but he didn't relate the islands directly to the Greenwald limit," said Gates, who had worked with Suttrop on a tokamak experiment at the Max Planck Institute for Plasma Physics in Garching, Germany, in 1996 before joining PPPL the following year.
In early 2011, the topic of plasma islands had mostly receded from Gates' mind. But a talk by Delgado-Aparicio about the possibility of such islands erupting in the plasmas contained within the Alcator C-Mod tokamak reignited his interest. Delgado-Aparicio spoke of corkscrew-shaped phenomena called snakes that had first been observed by PPPL scientists in the 1980s and initially reported by German physicist Arthur Weller.
Intrigued by the talk, Gates urged Delgado-Aparicio to read the papers on islands by Rebut and Suttrop. An email from Delgado-Aparicio landed in Gates' inbox some eight months later. In it was a paper that described the behavior of snakes in a way that fit nicely with the C-Mod data. "I said, 'Wow! He's made a lot of progress,'" Gates remembered. "I said, 'You should come down and talk about this.'"
What most excited Gates was an equation for the growth of islands that hinted at the density limit by modifying a formula that British physicist Paul Harding Rutherford had derived back in the 1980s. "I thought, 'If Wolfgang (Suttrop) was right about the islands, this equation should be telling us the Greenwald limit," Gates said. "So when Luis arrived I pulled him into my office."
Then a curious thing happened. "It turns out that we didn't even need the entire equation," Gates said. "It was much simpler than that." By focusing solely on the density of the electrons in a plasma and the heat radiating from the islands, the researchers devised a formula for when the heat loss would surpass the electron density. That in turn pinpointed a possible mechanism behind the Greenwald limit.
Delgado-Aparicio became so absorbed in the scientists' new ideas that he missed several turnoffs while driving back to Cambridge, Mass., that night. "It's intriguing to try to explain Mother Nature," he said. "When you understand a theory you can try to find a way to beat it. By that I mean find a way to work at densities higher than the limit."
Conquering the limit could provide essential improvements for future tokamaks that will need to produce self-sustaining fusion reactions, or "burning plasmas," to generate electric power. Such machines include proposed successors to ITER, a $20 billion experimental reactor that is being built in Cadarache, France, by the European Union, the United States and five other countries.
Why hadn't researchers pieced together a similar theory of the density-limit puzzle before? The answer, said Gates, lies in how ideas percolate through the scientific community. "The radiation-driven islands idea never got a lot of press," he said. "People thought of them as curiosities. The way we disseminate information is through publications, and this idea had a weak initial push."
PPPL, in Plainsboro, N.J., is devoted both to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. Through the process of fusion, which is constantly occurring in the sun and other stars, energy is created when the nuclei of two lightweight atoms, such as those of hydrogen, combine in plasma at very high temperatures. When this happens, a burst of energy is released, which can be used to generate electricity.
PPPL is managed by Princeton University for the U.S. Department of Energy's Office of Science.
| 3.336679 |
Trees and Shrubs that Tolerate Saline Soils and Salt Spray Drift (May 1, 2009; publication 430-031)
Concentrated sodium (Na), a component of salt, can damage plant tissue whether it contacts above- or below-ground parts. High salinity can reduce plant growth and may even cause plant death. Care should be taken to avoid excessive salt accumulation from any source on tree and shrub roots, leaves or stems. Sites with saline (salty) soils, and those that are exposed to coastal salt spray or paving de-icing materials, present challenges to landscapers and homeowners.
Urban Forestry Issues (May 1, 2009; publication 420-180)
Value, Benefits, and Costs of Urban Trees (May 1, 2009; publication 420-181)
| 3.157203 |
Below you will find several recent observations about the relationship between reading and science process skills.
Significant improvement in both science and reading scores occurred when the regular basal reading program was replaced with reading in science that correlated with the science curriculum (Romance and Vitale, 2001).
Teachers should help students recognize the important role that prior knowledge plays and teach them to use that knowledge when learning science through reading (Barton and Jordan, 2001).
Most students arrive at the science teacher's classroom knowing how to read, but few understand how to use reading for learning science content (Santa, Havens, and Harrison, 1996).
The same skills that make good scientists also make good readers: engaging prior knowledge, forming hypotheses, establishing plans, evaluating understanding, determining the relative importance of information, describing patterns, comparing and contrasting, making inferences, drawing conclusions, generalizing, evaluating sources, and so on (Armbruster, 1993).
The skills in science are remarkably similar to those used in other subjects, especially reading. When students are doing science, following scientific procedures, and thinking as scientists, they are developing skills that are necessary for effective reading and understanding (Padilla, Muth and Lund Padilla, 1991).
Students engaging in hands-on activities are forced to confront currently held cognitive frameworks with new ideas, and thus actively reconstruct meaning from experience (Shymansky, 1989).
Because hands-on activities encourage students to generate their own questions whose answers are found by subsequent reading of their science textbook or other science materials, such activities can provide students with both a meaningful purpose for reading (Ulerick, 1989) and context-valid cognitive frames of reference from which to construct meaning from text (Nelson-Herber, 1986).
Reading and activity-oriented sciences emphasize the same intellectual skills and are both concerned with thinking processes. When a teacher helps students develop science process skills, reading processes are simultaneously being developed (Mechling & Oliver, 1983 and Simon & Zimmerman, 1980).
Research indicates that a strong experienced-based science program, one in which students directly manipulate materials, can facilitate the development of language arts skills (Wellman, 1978).
Science process skills have reading counterparts. For example, when a teacher is working on "describing" in science, students are learning to isolate important characteristics, enumerate characteristics, use appropriate terminology, and use synonyms which are important reading skills (Carter & Simpson, 1978).
When students have used the process skills of observing, identifying, and classifying, they are better able to discriminate between vowels and consonants and to learn the sounds represented by letters, letter blends, and syllables (Murray & Pikulski, 1978).
Science instruction provides an alternative teaching strategy that motivates students who may have reading difficulties (Wellman, 1978).
Children's involvement with process skills enables them to recognize more easily the contextual and structural clues in attacking new words and better equips them to interpret data in a paragraph. Science process skills are essential to logical thinking, as well as to forming the basic skills for learning to read (Barufaldi & Swift, 1977).
Guszak defines reading readiness as a skill-complex. Of the three areas within the skill-complex, two can be directly enhanced by science process skills: (1) physical factors (health, auditory, visual, speech, and motor); and (2) understanding factors (concepts, processes). When students see, hear, and talk about science experiences, their understanding, perception, and comprehension of concepts and processes may improve (Barufaldi & Swift, 1977 and Bethel, 1974).
The hands-on manipulative experiences science provides are the key to the relationship between process skills in both science and reading (Lucas & Burlando, 1975).
Science activities provide opportunities for manipulating large quantities of multi-sensory materials which promote perceptual skills, i.e., tactile, kinesthetic, auditory, and visual (Neuman, 1969). These skills then contribute to the development of the concepts, vocabulary, and oral language skills (listening and speaking) necessary for learning to read (Wellman, 1978).
Studies viewed cumulatively suggest that science instruction at the intermediate and upper elementary grades does improve the attainment of reading skills. The findings reveal that students have derived benefits in the areas of vocabulary enrichment, increased verbal fluency, enhanced ability to think logically, and improved concept formation and communication skills (Campbell, 1972; Kraft, 1961; Olson, 1971; Quinn & Kessler, 1976).
| 3.39612 |
A canticle (from the Latin canticulum, a diminutive of canticum, song) is a hymn (strictly excluding the Psalms) taken from the Bible. The term is often expanded to include ancient non-biblical hymns such as the Te Deum and certain psalms used liturgically.
These three canticles (the Benedictus, the Magnificat, and the Nunc dimittis) are sometimes referred to as the "evangelical canticles", as they are taken from the Gospel of St Luke. They are sung every day (unlike those from the Old Testament which, as is shown above, are only of weekly occurrence). They are placed not amongst the psalms (as are the seven from the Old Testament), but separated from them by the Chapter, the Hymn, the Versicle and Response, and thus come immediately before the Prayer (or before the preces, if these are to be said). They are thus given an importance and distinction elevating them into great prominence, which is further heightened by the rubric which requires the singers and congregations to stand while they are being sung (in honour of the mystery of the Incarnation, to which they refer). Further, while the "Magnificat" is being sung at Solemn Vespers, the altar is incensed as at Solemn Mass. All three canticles are in use in the Greek and Anglican churches. In the Breviary the above-named ten canticles are provided with antiphons and are sung in the same eight psalm-tones and in the same alternating manner as the psalms. To make the seven taken from the Old Testament suitable for this manner of singing, nos. 2-7 sometimes divide a verse of the Bible into two verses, thus increasing the number of Breviary verses. No. 1, however, goes much farther than this. It uses only a portion of the long canticle in Daniel, and condenses, expands, omits, and interverts verses and portions of verses. In the Breviary the canticle begins with verse 57, and ends with verse 56 (Dan., iii); and the penultimate verse is clearly an interpolation, "Benedicamus Patrem, et Filium . . .". In addition to their Breviary use some of the canticles are used in other connections in the liturgy; e.g. the "Nunc dimittis" as a tract at the Mass of the Feast of the Purification (when 2 February comes after Septuagesima); the "Benedictus" in the burial of the dead and in various processions. The use of the "Benedictus" and the "Benedicite" at the old Gallican Mass is interestingly described by Duchesne (Christian Worship: Its Origin and Evolution, London, 1903, 191-196). In the Office of the Greek Church the canticles numbered 1, 3, 5, 6, 7, 8, 9 are used at Lauds, but are not assigned to the same days as in the Roman Breviary. Two others (Isaiah 26:9-20, and Jonah 2:2-9) are added for Friday and Saturday respectively.
The ten canticles so far mentioned do not exhaust the portions of Sacred Scripture which are styled "canticles". There are, for example, those of Deborah and Barac, Judith, the "canticle of Canticles"; and many psalms (e.g. xvii, 1, "this canticle"; xxxviii, 1, "canticle of David"; xliv, 1, "canticle for the beloved"; and the first verse of Pss. lxiv, lxv, lxvi, lxvii, etc.). In the first verse of some psalms the phrase psalmus cantici (the psalm of a canticle) is found, and in others the phrase canticum psalmi (a canticle of a psalm). Cardinal Bona thinks that psalmus cantici indicated that the voice was to precede the instrumental accompaniment, while canticum psalmi indicated an instrumental prelude to the voice. This distinction follows from his view of a canticle as an unaccompanied vocal song, and of a psalm as an accompanied vocal song. It is not easy to distinguish satisfactorily the meanings of psalm, hymn, canticle, as referred to by St. Paul in two places. Canticum appears to be generic - a song, whether sacred or secular; and there is reason to think that his admonition did not contemplate religious assemblies of the Christians, but their social gatherings. In these the Christians were to sing "spiritual songs", and not the profane or lascivious songs common amongst the pagans. These spiritual songs were not exactly psalms or hymns. The hymn may then be defined as a metrical or rhythmical praise of God; and the psalm, accompanied sacred song or canticle, either taken from the Psalms or from some less authoritative source (St. Augustine declaring that a canticle may be without a psalm but not a psalm without a canticle).
In addition to the ten canticles enumerated above the Roman Breviary places in its index, under the heading "Cantica", the "Te Deum" (at the end of Matins for Sundays and Festivals, but there styled "Hymnus SS. Ambrosii et Augustini") and the: "Quicumque vult salvus esse" (Sundays at Prime, but there styled "Symbolum S. Athanasii", the "Creed of St. Athanasius"). To these are sometimes added by writers the "Gloria in excelsis", the "Trisagion", and the "Gloria Patri" (the Lesser Doxology). In the "Psalter and Canticles Pointed for chanting" (Philadelphia, 1901), for the use of the Evangelical Lutheran Congregations, occurs a "Table of canticles" embracing Nos. 1, 3, 8, 9, 10, besides certain psalms, and the "Te Deum" and "Venite" (Ps. xicv, used at the beginning of Matins in the Roman Breviary). The word Canticles is thus seen to be somewhat elastic in its comprehension. On the one hand, while it is used in the common parlance in the Church of England to cover several of the enumerated canticles, the Prayer Book applies it only to the "Benedicite", while in its Calendar the word Canticles is applied to what is commonly known as the "Song of Solomon" (the Catholic "Canticle of Canticles", Vulgate, "Canticum canticorum").
The nine Canticles are as follows:
- The First Song of Moses (Exodus 15:1-19).
- The Second Song of Moses (Deuteronomy 32:1-43).
- The Prayer of Hannah (1 Samuel 2:1-10).
- The Prayer of Habakkuk (Habakkuk 3:2-19).
- The Prayer of Isaiah (Isaiah 26:9-20).
- The Prayer of Jonah (Jonah 2:2-9).
- The Prayer of the Three Holy Children (Daniel 3:26-56).
- The Song of the Three Holy Children (Daniel 3:57-88).
- The Magnificat and the Benedictus (Luke 1:46-55 and 68-79).
Originally, these Canticles were chanted in their entirety every day, with a short refrain inserted between each verse. Eventually, short verses (troparia) were composed to replace these refrains, a process traditionally inaugurated by Saint Andrew of Crete. Gradually over the centuries, the verses of the Biblical Canticles were omitted (except for the Magnificat) and only the composed troparia were read, linked to the original canticles by an Irmos. During Great Lent however, the original Biblical Canticles are still read.
Another Biblical Canticle, the Nunc Dimittis is either read or sung at Vespers.
| 3.173298 |
Water and sediment testing
EPA is currently collecting and analyzing water and sediment samples to help states and other federal agencies understand the immediate and long-term impacts of oil contamination along the Gulf coast. The results and the interpretation of all data collected by EPA will be posted to www.epa.gov/bpspill.
Water and sediment samples are being taken prior to oil reaching the area to determine water quality and sediment conditions that are typical of selected bays and beaches in Louisiana, Mississippi, Alabama, and the Florida panhandle. This data will be used to supplement existing data generated from previous water quality surveys conducted by states, EPA, and others.
Water sampling will continue once the oil reaches the shore; periodic samples will be collected to document water quality changes. EPA will make data publicly available as quickly as possible. Other state and federal agencies make beach closure and seafood harvesting and consumption determinations, but the data generated by EPA will assist in their evaluations.
Why is EPA sampling and monitoring the water?
EPA is tracking the prevalence of potentially harmful chemicals in the water as a result of this spill to determine the level of risk posed to fish and other wildlife. While these chemicals can impact ecosystems, drinking water supplies are not expected to be affected.
The oil itself can cause direct effects on fish and wildlife, for example when it coats the feathers of waterfowl and other types of birds. In addition, other chemical compounds can have detrimental effects. Monitoring information allows EPA to estimate the amount of these compounds that may reach ecological systems. When combined with available information on the toxicity of these compounds, EPA scientists can estimate the likely magnitude of effects on fish, wildlife, and human health.
| 3.669191 |
XML and the Second-Generation Web, by Jon Bosak and Tim Bray; Scientific American, May 1999.
Give people a few hints, and they can figure out the rest. They can look at this page, see some large type followed by blocks of small type and know that they are looking at the start of a magazine article. They can look at a list of groceries and see shopping instructions. They can look at some rows of numbers and understand the state of their bank account.
Computers, of course, are not that smart; they need to be told exactly what things are, how they are related and how to deal with them. Extensible Markup Language (XML for short) is a new language designed to do just that, to make information self-describing. This simple-sounding change in how computers communicate has the potential to extend the Internet beyond information delivery to many other kinds of human activity. Indeed, since XML was completed in early 1998 by the World Wide Web Consortium (usually called the W3C), the standard has spread like wildfire through science and into industries ranging from manufacturing to medicine.
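To make "self-describing" concrete, here is a minimal sketch using Python's standard library; the element names and data are invented for illustration, not taken from the article.

```python
# The tags say what each piece of data *is* -- the "self-describing" idea.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<groceries>"
    "<item qty='2' unit='loaf'>bread</item>"
    "<item qty='1' unit='dozen'>eggs</item>"
    "</groceries>"
)
for item in doc.findall("item"):
    print(item.get("qty"), item.get("unit"), "of", item.text)
```

A program that knows nothing about shopping can still tell quantities from units from item names, because the markup labels each value.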
| 3.167992 |
Absolute and Relational Theories of Space and Motion
Since antiquity, natural philosophers have struggled to comprehend the nature of three tightly interconnected concepts: space, time, and motion. A proper understanding of motion, in particular, has been seen to be crucial for deciding questions about the natures of space and time, and their interconnections. Since the time of Newton and Leibniz, philosophers’ struggles to comprehend these concepts have often appeared to take the form of a dispute between absolute conceptions of space, time and motion, and relational conceptions. This article guides the reader through some of the history of these philosophical struggles. Rather than taking sides in the (alleged) ongoing debates, or reproducing the standard dialectic recounted in most introductory texts, we have chosen to scrutinize carefully the history of the thinking of the canonical participants in these debates — principally Descartes, Newton, Leibniz, Mach and Einstein. Readers interested in following up either the historical questions or current debates about the natures of space, time and motion will find ample links and references scattered through the discussion and in the Other Internet Resources section below.
- 1. Introduction
- 2. Aristotle
- 3. Descartes
- 4. Newton
- 5. Absolute Space in the Twentieth Century
- 6. Leibniz
- 7. ‘Not-Newton’ versus ‘Be-Leibniz’
- 8. Mach and Later Machians
- 9. Relativity and Motion
- 10. Conclusion
- Other Internet Resources
- Related Entries
Things change. A platitude perhaps, but still a crucial feature of the world, and one which causes many philosophical perplexities — see for instance the entry on Zeno's Paradoxes. For Aristotle, motion (he would have called it ‘locomotion’) was just one kind of change, like generation, growth, decay, fabrication and so on. The atomists held on the contrary that all change was in reality the motion of atoms into new configurations, an idea that was not to begin to realize its full potential until the Seventeenth Century, particularly in the work of Descartes. (Of course, modern physics seems to show that the physical state of a system goes well beyond the geometrical configuration of bodies. Fields, while determined by the states of bodies, are not themselves configurations of bodies if interpreted literally, and in quantum mechanics bodies have ‘internal states' such as particle spin.)
Not all changes seem to be merely the (loco)motions of bodies in physical space. Yet since antiquity, in the western tradition, this kind of motion has been absolutely central to the understanding of change. And since motion is a crucial concept in physical theories, one is forced to address the question of what exactly it is. The question might seem trivial, for surely what is usually meant by saying that something is moving is to say that it is moving relative to something, often tacitly understood between speakers. For instance: the car is moving at 60mph (relative to the road and things along it), the plane is flying (relative) to London, the rocket is lifting off (the ground), or the passenger is moving (to the front of the speeding train). Typically the relative reference body is either the surroundings of the speakers, or the Earth, but this is not always the case. For instance, it seems to make sense to ask whether the Earth rotates about its axis West-East diurnally or whether it is instead the heavens that rotate East-West; but if all motions are to be reckoned relative to the Earth, then its rotation seems impossible. But if the Earth does not offer a unique frame of reference for the description of motion, then we may wonder whether any arbitrary object can be used for the definition of motions: are all such motions on a par, none privileged over any other? It is unclear whether anyone has really, consistently espoused this view: Aristotle, perhaps, in the Metaphysics; Descartes and Leibniz are often thought to have but, as we'll see, those claims are suspect; possibly Huygens, though his remarks remain cryptic; Mach at some moments perhaps. If this view were correct, then the question of whether the Earth or heavens rotate would be meaningless, merely different but equivalent expressions of the facts.
But suppose, like Aristotle, you take ordinary language accurately to reflect the structure of the world, then you could recognize systematic everyday uses of ‘up’ and ‘down’ that require some privileged standards — uses that treat things closer to a point at the center of the Earth as more ‘down’ and motions towards that point as ‘downwards'. Of course we would likely explain this usage in terms of the fact that we and our language evolved in a very noticeable gravitational field directed towards the center of the Earth, but for Aristotle, as we shall see, this usage helped identify an important structural feature of the universe, which itself was required for the explanation of weight. Now a further question arises: how should a structure, such as a preferred point in the universe, which privileges certain motions, be understood? What makes that point privileged? One might expect that Aristotle simply identified it with the center of the Earth, and so relative to that particular body; but in fact he did not adopt that tacit convention as fundamental, for he thought it possible for the Earth to move from the ‘down’ point. Thus the question arises (although Aristotle does not address it explicitly) of whether the preferred point is somewhere picked out in some other way by the bodies in the universe —the center of the heavens perhaps? Or is it picked out quite independently of the arrangements of matter?
The issues that arise in this simple theory help frame the debates between later physicists and philosophers concerning the nature of motion; in particular, we will focus on the theories of Descartes, Newton, Leibniz, Mach and Einstein, and their interpretations. But similar issues circulate through the different contexts: is there any kind of privileged sense of motion, a sense in which things can be said to move or not, not just relative to this or that reference body, but ‘truly’? If so, can this true motion be analyzed in terms of motions relative to other bodies — to some special body, or to the entire universe perhaps? (And in relativity, in which distances, times and measures of relative motion are frame-dependent, what relations are relevant?) If not, then how is the privileged kind of motion to be understood, as relative to space itself — something physical but non-material — perhaps? Or can some kinds of motion be best understood as not being spatial changes — changes of relative location or of place — at all?
To see that the problem of the interpretation of spatiotemporal quantities as absolute or relative is endemic to almost any kind of mechanics one can imagine, we can look to one of the simplest theories — Aristotle's account of natural motion (e.g., On the Heavens I.2). According to this theory it is because of their natures, and not because of ‘unnatural’ forces, that heavy bodies move down, and ‘light’ things (air and fire) move up; it is their natures, or ‘forms’, that constitute the gravity or weight of the former and the levity of the latter. This account only makes sense if ‘up’ and ‘down’ can be unequivocally determined for each body. According to Aristotle, up and down are fixed by the position of the body in question relative to the center of the universe, a point coincident with the center of the Earth. That is, the theory holds that heavy bodies naturally move towards the center, while light bodies naturally move away.
Does this theory involve absolute or merely relative quantities? It depends on how the center is conceived. If the center were identified with the center of the Earth, then the theory could be taken to eschew absolute quantities: it would simply hold that the natural motions of any body depend on its position relative to another, namely the Earth. But Aristotle is explicit that the center of the universe is not identical with, but merely coincident with the center of the Earth (e.g., On the Heavens II.14): since the Earth itself is heavy, if it were not at the center it would move there! So the center is not identified with any body, and so perhaps direction-to-center is an absolute quantity in the theory, not understood fundamentally as direction to some body (merely contingently as such if some body happens to occupy the center). But this conclusion is not clear either. In On the Heavens II.13, admittedly in response to a different issue, Aristotle suggests that the center itself is ‘determined’ by the outer spherical shell of the universe (the aetherial region of the fixed stars). If this is what he intends, then the natural law prescribes motion relative to another body after all — namely up or down with respect to the mathematical center of the stars.
It would be to push Aristotle's writings too hard to suggest that he was consciously wrestling with the issue of whether mechanics required absolute or relative quantities of motion, but what is clear is that these questions arise in his physics and his remarks impinge on them. His theory also gives a simple model of how these questions arise: a physical theory of motion will say that ‘under such-and-such circumstances, motion of so-and-so a kind will occur’ — and the question of whether that kind of motion makes sense in terms of the relations between bodies alone arises automatically. Aristotle may not have recognized the question explicitly, but we see it as one issue in the background of his discussion of the center.
The issues are, however, far more explicit in Descartes' physics; and since the form of his theory is different the ‘kinds of motion’ in question are quite different — as they change with all the different theories that we discuss. For Descartes argued in his 1644 Principles of Philosophy (see Book II) that the essence of matter was extension (i.e., size and shape) because any other attribute of bodies could be imagined away without imagining away matter itself. But he also held that extension constitutes the nature of space, hence he concluded that space and matter were one and the same thing. An immediate consequence of the identification is the impossibility of the vacuum; if every region of space is a region of matter, then there can be no space without matter. Thus Descartes' universe is ‘hydrodynamical’ — completely full of mobile matter in different-sized pieces in motion, rather like a bucket full of water and lumps of ice of different sizes, which has been stirred around. Since fundamentally the pieces of matter are nothing but extension, the universe is in fact nothing but a system of geometric bodies in motion without any gaps. (Descartes held that all other properties arise from the configurations and motions of such bodies — from geometric complexes. See Garber 1992 for a comprehensive study.)
The identification of space and matter poses a puzzle about motion: if the space that a body occupies literally is the matter of the body, then when the body — i.e., the matter — moves, so does the space that it occupies. Thus it doesn't change place, which is to say that it doesn't move after all! Descartes resolved this difficulty by taking all motion to be the motion of bodies relative to one another, not a literal change of space.
Now, a body has as many relative motions as there are bodies but it does not follow that all are equally significant. Indeed, Descartes uses several different concepts of relational motion. First there is ‘change of place’, which is nothing but motion relative to this or that arbitrary reference body (II.13). In this sense no motion of a body is privileged, since the speed, direction, and even curve of a trajectory depends on the reference body, and none is singled out. Next, he discusses motion in ‘the ordinary sense’ (II.24). This is often conflated with mere change of arbitrary place, but it in fact differs because according to the rules of ordinary speech one properly attributes motion only to bodies whose motion is caused by some action, not to any relative motion. (For instance, a person sitting on a speeding boat is ordinarily said to be at rest, since ‘he feels no action in himself’.) Finally, he defined motion ‘properly speaking’ (II.25) to be a body's motion relative to the matter contiguously surrounding it, which the impossibility of a vacuum guarantees to exist. (Descartes’ definition is complicated by the fact that he modifies this technical concept to make it conform more closely to the pre-theoretical sense of ‘motion’; however, in our discussion transference is all that matters, so we will ignore those complications.) Since a body can only be touching one set of surroundings, Descartes (dubiously) argued that this standard of motion was unique.
What we see here is that Descartes, despite holding motion to be the motion of bodies relative to one another, also held there to be a privileged sense of motion; in a terminology sometimes employed by writers of the period, he held there to be a sense of ‘true motion’, over and above the merely relative motions. Equivalently, we can say that Descartes took motion (‘properly speaking’) to be a complete predicate: that is, moves-properly-speaking is a one-place predicate. (In contrast, moves-relative-to is a two-place predicate.) And note that the predicate is complete despite the fact that it is analyzed in terms of relative motion. (Formally, let contiguous-surroundings be a function from bodies to their contiguous surroundings, then x moves-properly-speaking is analyzed as x moves-relative-to contiguous-surroundings(x).)
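The logical point lends itself to a small illustration. The following Python sketch is ours, not anything in Descartes or the literature; the bodies, the velocity table, and the SURROUNDINGS map are hypothetical stand-ins. It shows how a complete, one-place predicate can be defined from a two-place relation together with a function that supplies a unique second relatum.

```python
# Hypothetical data: each body's contiguous surroundings and (coarse) velocity.
SURROUNDINGS = {"boat": "river", "passenger": "boat"}
VELOCITIES = {"boat": 5.0, "river": 5.0, "passenger": 5.0}

def moves_relative_to(x, y):
    """Two-place (incomplete) predicate: x moves relative to y."""
    return VELOCITIES[x] != VELOCITIES[y]

def contiguous_surroundings(x):
    """Function from a body to the matter contiguously surrounding it."""
    return SURROUNDINGS[x]

def moves_properly_speaking(x):
    """One-place (complete) predicate, analyzed via the relation above."""
    return moves_relative_to(x, contiguous_surroundings(x))

# The boat drifts with the river, so 'properly speaking' it does not move.
print(moves_properly_speaking("boat"))  # False
```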
This example illustrates why it is crucial to keep two questions distinct: on the one hand, is motion to be understood in terms of relations between bodies or by invoking something additional, something absolute; on the other hand, are all relative motions equally significant, or is there some ‘true’, privileged notion of motion? Descartes' views show that eschewing absolute motion is logically compatible with accepting true motion; which is of course not to say that his definitions of motion are themselves tenable.
There is an interpretational tradition which holds that Descartes only took the first, ‘ordinary’ sense of motion seriously, and introduced the second notion to avoid conflict with the Catholic Church. Such conflict was a real concern, since the censure of Galileo's Copernicanism took place only 11 years before publication of the Principles, and had in fact dissuaded Descartes from publishing an earlier work, The World. Indeed, in the Principles (III.28) he is at pains to explain how ‘properly speaking’ the Earth does not move, because it is swept around the Sun in a giant vortex of matter — the Earth does not move relative to its surroundings in the vortex.
The difficulty with the reading, aside from the imputation of cowardice to the old soldier, is that it makes nonsense of Descartes' mechanics, a theory of collisions. For instance, according to his laws of collision if two equal bodies strike each other at equal and opposite velocities then they will bounce off at equal and opposite velocities (Rule I). On the other hand, if the very same bodies approach each other with the very same relative speed, but at different speeds then they will move off together in the direction of the faster one (Rule III). But if the operative meaning of motion in the Rules is the ordinary sense, then these two situations are just the same situation, differing only in the choice of reference frame, and so could not have different outcomes — bouncing apart versus moving off together. It seems inconceivable that Descartes could have been confused in such a trivial way. (Additionally, as Pooley 2002 points out, just after he claims that the Earth is at rest ‘properly speaking’, Descartes argues that the Earth is stationary in the ordinary sense, because common practice is to determine the positions of the stars relative to the Earth. Descartes simply didn't need motion properly speaking to avoid religious conflict, which again suggests that it has some other significance in his system of thought.)
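The clash can be made vivid with a little arithmetic. In the sketch below (our illustration, not Descartes' text), we read Rule III's outcome as the pair moving off together at their average velocity, which conserves Descartes' scalar quantity of motion; that quantitative reading is an assumption made only for the example.

```python
# Boosting Rule I's setup by -v yields Rule III's setup, yet the two rules
# assign outcomes that are not boosts of one another.

v = 1.0

def boost(vels, u):
    """Describe the same motions from a frame itself moving at velocity u."""
    return [w - u for w in vels]

# Rule I: equal bodies at +v and -v rebound with their velocities reversed.
before_rule1 = [+v, -v]
after_rule1 = [-v, +v]

# The very same collision described from a frame moving at -v:
before_boosted = boost(before_rule1, -v)   # [2v, 0]: Rule III's setup
after_boosted = boost(after_rule1, -v)     # [0, 2v]: what Rule I implies

# Rule III instead has the bodies move off together in the direction of
# the faster one, e.g. both at +v, conserving the scalar quantity 2v.
after_rule3 = [+v, +v]

print(before_boosted, after_boosted, after_rule3)  # incompatible outcomes
```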
Thus Garber (1992, Chapter 6-8) proposes that Descartes actually took the unequivocal notion of motion properly speaking to be the correct sense of motion in mechanics. Then Rule I covers the case in which the two bodies have equal and opposite motions relative to their contiguous surroundings, while Rule VI covers the case in which the bodies have different motions relative to those surroundings — one is perhaps at rest in its surroundings. That is, exactly what is needed to make the rules consistent is the kind of privileged, true, sense of motion provided by Descartes' second definition. Insurmountable problems with the rules remain, but rejecting the traditional interpretation and taking motion properly speaking seriously in Descartes' philosophy clearly gives a more charitable reading.
In an unpublished essay — De Gravitatione (Newton, 2004) — and in a Scholium to the definitions given in his 1687 Mathematical Principles of Natural Philosophy (see Newton, 1999 for an up-to-date translation), Newton attacked both of Descartes' notions of motion as candidates for the operative notion in mechanics. (See Stein 1967 and Rynasiewicz 1995 for important, and differing, views on the issue.) (This critique is studied in more detail in the entry Newton's views on space, time, and motion.)
The most famous argument invokes the so-called ‘Newton's bucket’ experiment. Stripped to its basic elements, the experiment compares:
- (i) a bucket of water hanging from a cord, as the bucket is set spinning about the cord's axis, with
- (ii) the same bucket and water when they are rotating together at the same rate about the cord's axis.
As is familiar from any rotating system, there will be a tendency for the water to recede from the axis of rotation in the latter case: in (i) the surface of the water will be flat (because of the Earth's gravitational field) while in (ii) it will be concave. The analysis of such ‘inertial effects' due to rotation was a major topic of enquiry of ‘natural philosophers' of the time, including Descartes and his followers, and they would certainly have agreed with Newton that the concave surface of the water in the second case demonstrated that the water was moving in a mechanically significant sense. There is thus an immediate problem for the claim that proper motion is the correct mechanical sense of motion: in (i) and (ii) proper motion is anti-correlated with the mechanically significant motion revealed by the surface of the water. That is, the water is flat in (i) when it is in motion relative to its immediate surroundings — the inner sides of the bucket — but curved in (ii) when it is at rest relative to its immediate surroundings. Thus the mechanically relevant meaning of rotation is not that of proper motion. (You may have noticed a small lacuna in Newton's argument: in (i) the water is at rest and in (ii) in motion relative to that part of its surroundings constituted by the air above it. It's not hard to imagine small modifications to the example to fill this gap.)
Newton also points out that the height that the water climbs up the inside of the bucket provides a measure of the rate of rotation of bucket and water: the higher the water rises up the sides, the greater the tendency to recede must be, and so the faster the water must be rotating in the mechanically significant sense. Suppose, very plausibly, that the measure is unique: that any particular height indicates a particular rate of rotation. Then the unique height that the water reaches at any moment implies a unique rate of rotation in a mechanically significant sense. And thus motion in the sense of motion relative to an arbitrary reference body is not the mechanical sense, since that kind of rotation is not unique at all, but depends on the motion of the reference body. And so Descartes' change of place (and for similar reasons, motion in the ordinary sense) is not the mechanically significant sense of motion.
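The Newtonian analysis behind the uniqueness of this measure is standard, though not spelled out in the text: in equilibrium, the surface of water rotating at angular speed ω is a paraboloid z(r) = ω²r²/2g, so the rise at the bucket wall determines ω uniquely. A quick sketch, with an arbitrarily chosen bucket radius for illustration:

```python
import math

g = 9.81  # m/s^2

def rise_at_wall(omega, R):
    """Height of the paraboloid z(r) = omega^2 r^2 / (2g) at the wall r = R."""
    return omega**2 * R**2 / (2 * g)

def omega_from_rise(h, R):
    """Invert the relation: any particular rise indicates one rotation rate."""
    return math.sqrt(2 * g * h) / R

R = 0.15  # a 15 cm bucket, chosen arbitrarily
for omega in (2.0, 4.0, 8.0):
    h = rise_at_wall(omega, R)
    print(f"omega = {omega:.1f} rad/s, rise = {100 * h:.2f} cm, "
          f"recovered omega = {omega_from_rise(h, R):.1f}")
```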
In our discussion of Descartes we called the sense of motion operative in the science of mechanics ‘true motion’, and the phrase is used in this way by Newton in the Scholium. Thus Newton's bucket shows that true (rotational) motion is anti-correlated with, and so not identical with, proper motion (as Descartes proposed according to the Garber reading); and Newton further argues that the rate of true (rotational) motion is unique, and so not identical with change of place, which is multiple. Newton proposed instead that true motion is motion relative to a temporally enduring, rigid, 3-dimensional Euclidean space, which he dubbed ‘absolute space’. Of course, Descartes also defined motion as relative to an enduring 3-dimensional Euclidean space; the difference is that Descartes' space was divided into parts in motion (his space was identical with a plenum of corpuscles), not a rigid structure in which (mobile) material bodies are embedded. So according to Newton, the rate of true rotation of the bucket (and water) is the rate at which it rotates relative to absolute space. Or put another way, Newton effectively defines the complete predicate x moves-absolutely as x moves-relative-to absolute space; both Newton and Descartes offer their competing complete predicates as analyses of x moves-truly.
Newton's proposal for understanding motion solves the problems that he posed for Descartes, and provides an interpretation of the concepts of constant motion and acceleration that appear in his laws of motion. However, it suffers from two notable interpretational problems, both of which were pressed forcefully by Leibniz (in the Leibniz-Clarke Correspondence, 1715–1716) — which is not to say that Leibniz himself offered a superior account of motion (see below). (Of course, there are other features of Newton's proposal that turned out to be empirically inadequate, and are rejected by relativity: Newton's account violates the relativity of simultaneity and postulates a non-dynamical spacetime structure.) First, according to this account, absolute velocity is a well-defined quantity: more simply, the absolute speed of a body is the rate of change of its position relative to an arbitrary point of absolute space. But the Galilean relativity of Newton's laws means that the evolution of a closed system is unaffected by boosting the whole system by a constant velocity; Galileo's experimenter cannot determine from observations inside his cabin whether the boat is at rest in harbor or sailing smoothly. Put another way, according to Newtonian mechanics, in principle Newton's absolute velocity cannot be experimentally determined. So in this regard absolute velocity is quite unlike acceleration (including rotation); Newtonian acceleration is understood in absolute space as the rate of change of absolute velocity, and is, according to Newtonian mechanics, in general measurable, for instance by measuring the height that the water ascends the sides of the bucket. (It is worth noting that Newton was well aware of these facts; the Galilean relativity of his theory is demonstrated in Corollary V of the laws of the Principia, while Corollary VI shows that acceleration is unobservable if all parts of the system accelerate in parallel at the same rate, as they do in a homogeneous gravitational field.) Leibniz argued (rather inconsistently, as we shall see) that since differences in absolute velocity were unobservable, they could not be genuine differences at all; and hence that Newton's absolute space, whose existence would entail the reality of such differences, must also be a fiction. Few contemporary philosophers would immediately reject a quantity as meaningless simply because it was not experimentally determinable, but this fact does justify genuine doubts about the reality of absolute velocity, and hence of absolute space.
The second problem concerns the nature of absolute space. Newton quite clearly distinguished his account from Descartes' — in particular with regards to absolute space's rigidity versus Descartes' ‘hydrodynamical’ space, and the possibility of the vacuum in absolute space. Thus absolute space is definitely not material. On the other hand, presumably it is supposed to be part of the physical, not mental, realm. In De Gravitatione, Newton rejected both the standard philosophical categories of substance and attribute as suitable characterizations. Absolute space is not a substance for it lacks causal powers and does not have a fully independent existence, and yet not an attribute since it would exist even in a vacuum, which by definition is a place where there are no bodies in which it might inhere. Newton proposes that space is what we might call a ‘pseudo-substance’, more like a substance than property, yet not quite a substance. (Note that Samuel Clarke, in his Correspondence with Leibniz, which Newton had some role in composing, advocates the property view, and note further that when Leibniz objects because of the vacuum problem, Clarke suggests that there might be non-material beings in the vacuum in which space might inhere.) In fact, Newton accepted the principle that everything that exists, exists somewhere — i.e., in absolute space. Thus he viewed absolute space as a necessary consequence of the existence of anything, and of God's existence in particular; hence space's ontological dependence: it exists only given the existence of something else, ultimately God. Leibniz was presumably unaware of the unpublished De Gravitatione in which these particular ideas were developed, but as we shall see, his later works are characterized by a robust rejection of any notion of space as a real thing rather than an ideal, purely mental entity. This is a view that attracts even fewer contemporary adherents, but there is something deeply peculiar about a non-material but physical entity, a worry that has influenced many philosophical opponents of absolute space.
After the development of relativity (which we will take up below), and its interpretation as a spacetime theory, it was realized that the notion of spacetime had applicability to a range of theories of mechanics, classical as well as relativistic. In particular, there is a spacetime geometry — ‘Galilean’ or ‘neo-Newtonian’ spacetime — for Newtonian mechanics that solves the problem of absolute velocity; an idea exploited by a number of philosophers from the late 1960s (e.g., Earman 1970, Friedman 1983, Sklar 1974 and Stein 1968). For details the reader is referred to the entry on spacetime: inertial frames, but the general idea is that although a spatial distance is well-defined between any two simultaneous points of this spacetime, only the temporal interval is well-defined between non-simultaneous points. Thus things are rather unlike Newton's absolute space, whose points persist through time and maintain their distances; in absolute space the distance between p-now and q-then (where p and q are points) is just the distance between p-now and q-now. However, Galilean spacetime has an ‘affine connection’ which effectively specifies, for every point of every continuous curve, the rate at which the curve is changing from straightness at that point; for instance, the straight lines are picked out as those curves whose rate of change from straightness is zero at every point. (Another way of thinking about this space is as possessing — in addition to a distance between any two simultaneous points and a temporal interval between any points — a three-place relation of colinearity, satisfied by three points just in case they lie on a straight line.)
Since the trajectories of bodies are curves in spacetime, the affine connection determines the rate of change from straightness at every point of every possible trajectory. The straight trajectories thus defined can be interpreted as the trajectories of bodies moving inertially, and the rate of change from straightness of any trajectory can be interpreted as the acceleration of a body following that trajectory. That is, Newton's Second Law can be given a geometric formulation as ‘the rate of change from straightness of a body's trajectory is equal to the forces acting on the body divided by its mass’. The significance of this geometry is that while acceleration is well-defined, velocity is not — in accord with the empirical determinability of acceleration but not velocity according to Newtonian mechanics. (A simple analogy helps show how such a thing is possible: betweenness, but not ‘up’, is a well-defined concept in Euclidean space.) Thus Galilean spacetime gives a very nice interpretation of the choice that nature makes when it decides that the laws of mechanics should be formulated in terms of accelerations rather than velocities (which is what Aristotle and Descartes had proposed).
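A short numerical illustration (ours, not the article's) of the moral: a constant Galilean boost changes a trajectory's velocity everywhere, but leaves its second time derivative, its ‘rate of change from straightness’, untouched.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
x = 0.5 * 3.0 * t**2        # a uniformly accelerated trajectory, a = 3
x_boosted = x - 7.0 * t     # the same trajectory, viewed from a frame at v = 7

def second_derivative(f, t):
    return np.gradient(np.gradient(f, t), t)

i = 500  # an interior sample point
print(np.gradient(x, t)[i], np.gradient(x_boosted, t)[i])  # velocities differ
print(second_derivative(x, t)[i])          # ~3.0
print(second_derivative(x_boosted, t)[i])  # ~3.0: acceleration is invariant
```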
Put another way, we can define the complete predicate x accelerates as trajectory(x) has-non-zero-rate-of-change-from-straightness, where trajectory maps bodies onto their trajectories in Galilean spacetime. And this predicate, defined this way, applies to the water in the bucket if and only if it is rotating, according to Newtonian mechanics formulated in terms of the geometry of Galilean spacetime; it is the mechanically relevant sense of the word in this theory. But all of this formulation and definition has been given in terms of the geometry of spacetime, not relations between bodies; acceleration is ‘absolute’ in the sense that there is a preferred (true) sense of acceleration in mechanics which is not defined in terms of the motions of bodies relative to one another. (Note that this sense of ‘absolute’ is broader than that of motion relative to absolute space, which we defined earlier. In the remainder of this article we will use it in the broader sense. The reader should be aware that the term is used in many ways in the literature, and such equivocation often leads to massive misunderstandings.) Thus if any of this analysis of motion is taken literally then one arrives at a position regarding the ontology of spacetime rather like that of Newton's regarding space: it is some kind of ‘substantial’ (or maybe pseudo-substantial) thing with the geometry of Galilean spacetime, just as absolute space possessed Euclidean geometry. This view regarding the ontology of spacetime is usually called ‘substantivalism’ (Sklar, 1974). The Galilean substantivalist usually sees himself as adopting a more sophisticated geometry than Newton but sharing his substantivalism (though there is room for debate on Newton's exact ontological views, see DiSalle, 2002). The advantage of the more sophisticated geometry is that although it allows the absolute sense of acceleration apparently required by Newtonian mechanics to be defined, it does not allow one to define a similar absolute speed or velocity — x accelerates can be defined as a complete predicate in terms of the geometry of Galilean spacetime but not x moves in general — and so the first of Leibniz's problems is resolved. Of course we see that the solution depends on a crucial shift from speed and velocity to acceleration as the relevant senses of ‘motion’: from the rate of change of position to the rate of change of the rate of change.
While this proposal solves the first kind of problem posed by Leibniz, it seems just as vulnerable to the second. While it is true that it involves the rejection of absolute space as Newton conceived it, and with it the need to explicate the nature of an enduring space, the postulation of Galilean spacetime poses the parallel question of the nature of spacetime. Again, it is a physical but non-material something, the points of which may be coincident with material bodies. What kind of thing is it? Could we do without it? As we shall see below, some contemporary philosophers believe so.
There is a ‘folk-reading’ of Leibniz that one finds either explicitly or implicitly in the philosophy of physics literature, which takes account of only some of his remarks on space and motion. The reading underlies vast swathes of the literature: for instance, the quantities captured by Earman's (1989) ‘Leibnizian spacetime’ do not do justice to Leibniz's view of motion (as Earman acknowledges). But it is perhaps most obvious in introductory texts (e.g., Ray 1991 and Huggett 2000, to mention a couple). According to this view, the only quantities of motion are relative quantities (relative velocity, acceleration, and so on), and all relative motions are equal, so there is no true sense of motion. However, Leibniz is explicit that other quantities are also ‘real’, and his mechanics implicitly — but obviously — depends on yet others. The length of this section is a measure not so much of the importance of Leibniz's actual views as of the importance of showing what the prevalent folk view leaves out regarding Leibniz's views on the metaphysics of motion and the interpretation of mechanics.
That said, we shall also see that no one has yet discovered a fully satisfactory way of reconciling the numerous conflicting things that Leibniz says about motion. Some of these tensions can be put down simply to his changing his mind (see Cover and Hartz 1988 for an explication of how Leibniz's views on space developed). However, we will concentrate on the fairly short period in the mid-1680s to mid-1690s during which Leibniz developed his theory of mechanics and was most concerned with its interpretation. We will supplement this discussion with the important remarks that he made in his Correspondence with Samuel Clarke around 30 years later (1715–1716); this discussion is broadly in line with the earlier period, and the intervening period is one in which he turned to other matters, rather than one in which his views on space were dramatically evolving.
Arguably, Leibniz's views concerning space and motion do not have a completely linear logic, starting from some logically sufficient basic premises, but instead form a collection of mutually supporting doctrines. If one starts questioning why Leibniz held certain views — concerning the ideality of space, for instance — one is apt to be led in a circle. Still, exposition requires starting somewhere, and Leibniz's argument for the ideality of space in the Correspondence with Clarke is a good place to begin. But bear in mind the caveats made here — this argument was made later than a number of other relevant writings, and its logical relation to Leibniz's views on motion is complex.
Leibniz (LV.47 — this notation means Leibniz's Fifth letter, section 47, and so on) says that (i) a body comes to have the ‘same place’ as another once did, when it comes to stand in the same relations to bodies we ‘suppose’ to be unchanged (more on this later). (ii) That we can define ‘a place’ to be that which any such two bodies have in common (here he claims an analogy with the Euclidean/Eudoxan definition of a rational number in terms of an identity relation between ratios). And finally that (iii) space is all such places taken together. However, he also holds that properties are particular, incapable of being instantiated by more than one individual, even at different times; hence it is impossible for the two bodies to be in literally the same relations to the unchanged bodies. Thus the thing that we take to be the same for the two bodies — the place — is something added by our minds to the situation, and only ideal. As a result, space, which is after all constructed from these ideal places, is itself ideal: ‘a certain order, wherein the mind conceives the application of relations’.
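Leibniz's three-step construction can be rendered as a toy data structure. The sketch below is ours, with made-up bodies and distances; note that the identity of dictionary keys on which it relies is exactly the idealization Leibniz flags, since strictly no two body-stages ever stand in literally the same relations.

```python
from collections import defaultdict

# (i) Record each body-stage's relations (here, distances) to the bodies
# we 'suppose' to be unchanged.
REFERENCE = ("A", "B", "C")
relations = {
    ("x", 0): (1.0, 2.0, 2.0),
    ("y", 1): (1.0, 2.0, 2.0),   # y at t=1 stands in x's old relations
    ("y", 0): (3.0, 1.0, 2.0),
}
assert all(len(rel) == len(REFERENCE) for rel in relations.values())

# (ii) A 'place' is what any two such body-stages have in common.
places = defaultdict(list)
for stage, rel in relations.items():
    places[rel].append(stage)

# (iii) Space is all such places taken together.
space = list(places)
print(places[(1.0, 2.0, 2.0)])  # [('x', 0), ('y', 1)]: the 'same place', ideally
```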
It's worth pausing briefly to contrast this view of space with those of Descartes and of Newton. Both Descartes and Newton claim that space is a real, mind-independent entity; for Descartes it is matter, and for Newton a ‘pseudo-substance’, distinct from matter. And of course for both, these views are intimately tied up with their accounts of motion. Leibniz simply denies the mind-independent reality of space, and this too is bound up with his views concerning motion. (Note that fundamentally, in the metaphysics of monads that Leibniz was developing contemporaneously with his mechanics, everything is in the mind of the monads; but the point that Leibniz is making here is that even within the world that is logically constructed from the contents of the minds of monads, space is ideal.)
So far (apart from that remark about ‘unchanged’ bodies) we have not seen Leibniz introduce anything more than relations of distance between bodies, which is certainly consistent with the folk view of his philosophy. However, Leibniz sought to provide a foundation for the Cartesian/mechanical philosophy in terms of the Aristotelian/scholastic metaphysics of substantial forms (here we discuss the views laid out in Sections 17-22 of the 1686 Discourse on Metaphysics and the 1695 Specimen of Dynamics, both in Garber and Ariew 1989). In particular, he identifies primary matter with what he calls its ‘primitive passive force’ of resistance to changes in motion and to penetration, and the substantial form of a body with its ‘primitive active force’. It is important to realize that these forces are not mere properties of matter, but actually constitute it in some sense, and further that they are not themselves quantifiable. However because of the collisions of bodies with one another, these forces ‘suffer limitation’, and ‘derivative’ passive and active forces result. (There's a real puzzle here. Collision presupposes space, but primitive forces constitute matter prior to any spatial concepts — the primitive active and passive forces ground motion and extension respectively. See Garber and Rauzy, 2004.) Derivative passive force shows up in the different degrees of resistance to change of different kinds of matter (of ‘secondary matter’ in scholastic terms), and apparently is measurable. Derivative active force however, is considerably more problematic for Leibniz. On the one hand, it is fundamental to his account of motion and theory of mechanics — motion fundamentally is possession of force. But on the other hand, Leibniz endorses the mechanical philosophy, which precisely sought to abolish Aristotelian substantial form, which is what force represents. Leibniz's goal was to reconcile the two philosophies, by providing an Aristotelian metaphysical foundation for modern mechanical science; as we shall see, it is ultimately an open question exactly how Leibniz intended to deal with the inherent tensions in such a view.
The texts are sufficiently ambiguous to permit dissent, but arguably Leibniz intends that one manifestation of derivative active force is what he calls vis viva — ‘living force’. Leibniz had a famous argument with the Cartesians over the correct definition of this quantity. Descartes defined it as size times speed — effectively as the magnitude of the momentum of a body. Leibniz gave a brilliant argument (repeated in a number of places, for instance Section 17 of the Discourse on Metaphysics) that it was size times speed² — so (proportional to) kinetic energy. If the proposed identification is correct then kinetic energy quantifies derivative active force according to Leibniz; or looked at the other way, the quantity of virtus (another term used by Leibniz for active force) associated with a body determines its kinetic energy and hence its speed. As far as the authors know, Leibniz never explicitly says anything conclusive about the relativity of virtus, but it is certainly consistent to read him (as Roberts 2003 does) to claim that there is a unique quantity of virtus and hence ‘true’ (as we have been using the term) speed associated with each body. At the very least, Leibniz does say that there is a real difference between possession and non-possession of vis viva (e.g., in Section 18 of the Discourse) and it is a small step from there to true, privileged speed. Indeed, for Leibniz, mere change of relative position is not ‘entirely real’ (as we saw for instance in the Correspondence) and only when it has vis viva as its immediate cause is there some reality to it. (However, just to muddy the waters, Leibniz also claims that as a matter of fact, no body ever has zero force, which on the reading proposed means no body is ever at rest, which would be surprising given all the collisions bodies undergo.) An alternative interpretation to the one suggested here might say that Leibniz intends that while there is a difference between motion/virtus and no motion/virtus, there is somehow no difference between any strictly positive values of those quantities.
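A worked example (our construction) of the disagreement: in a one-dimensional elastic collision, Leibniz's size times speed² is conserved, while Descartes' scalar size times speed is not.

```python
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1 = 1.0, 2.0   # a light body moving at speed 2
m2, v2 = 3.0, 0.0   # a heavy body at rest
u1, u2 = elastic_collision(m1, v1, m2, v2)  # -> -1.0, 1.0

descartes = lambda a, b: m1 * abs(a) + m2 * abs(b)   # size times speed
leibniz = lambda a, b: m1 * a**2 + m2 * b**2         # size times speed^2

print("Descartes:", descartes(v1, v2), "->", descartes(u1, u2))  # 2.0 -> 4.0
print("Leibniz:  ", leibniz(v1, v2), "->", leibniz(u1, u2))      # 4.0 -> 4.0
```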
It is important to emphasize two points about the preceding account of motion in Leibniz's philosophy. First, motion in the everyday sense — motion relative to something else — is not really real. Fundamentally motion is possession of virtus, something that is ultimately non-spatial (modulo its interpretation as primitive force limited by collision). If this reading is right — and something along these lines seems necessary if we aren't simply to ignore important statements by Leibniz on motion — then Leibniz is offering an interpretation of motion that is radically different from the obvious understanding. One might even say that for Leibniz motion is not movement at all! (We will leave to one side the question of whether his account is ultimately coherent.) The second point is that however we should understand Leibniz, the folk reading simply does not and cannot take account of his clearly and repeatedly stated view that what is real in motion is force not relative motion, for the folk reading allows Leibniz only relative motion (and of course additionally, motion in the sense of force is a variety of true motion, again contrary to the folk reading).
However, from what has been said so far it is still possible that the folk reading is accurate when it comes to Leibniz's views on the phenomena of motion, the subject of his theory of mechanics. The case for the folk reading is in fact supported by Leibniz's resolution of the tension that we mentioned earlier, between the fundamental role of force/virtus (which we will now take to mean mass times speed²) and its identification with Aristotelian form. Leibniz's way out (e.g., Specimen of Dynamics) is to require that while considerations of force must somehow determine the form of the laws of motion, the laws themselves should be such as not to allow one to determine the value of the force (and hence true speed). One might conclude that in this case Leibniz held that the only quantities which can be determined are those of relative position and motion, as the folk reading says. But even in this circumscribed context, it is at best questionable whether the interpretation is correct.
Consider first Leibniz's mechanics. Since his laws are what is now (ironically) often called ‘Newtonian’ elastic collision theory, it seems that they satisfy both of his requirements. The laws include conservation of kinetic energy (which we identify with virtus), but they hold in all inertial frames, so the kinetic energy of any arbitrary body can be set to any initial value. But they do not permit the kinetic energy of a body to take on arbitrary values throughout a process. The laws are only Galilean relativistic, and so are not true in every frame. Furthermore, according to the laws of collision, in an inertial frame, if a body does not collide then its Leibnizian force is conserved, while if (except in special cases) it does collide then its force changes. According to Leibniz's laws one cannot determine initial kinetic energies, but one certainly can tell when they change. At the very least, there are quantities of motion implicit in Leibniz's mechanics — change in force and true speed — that are not merely relative; the folk reading is committed to Leibniz simply missing this obvious fact.
That said, when Leibniz discusses the relativity of motion — which he calls the ‘equivalence of hypotheses’ about the states of motion of bodies — some of his statements do suggest that he was confused in this way. For another way of stating the problem for the folk reading is that the claim that relative motions alone suffice for mechanics and that all relative motions are equal is a principle of general relativity, and could Leibniz — a mathematical genius — really have failed to notice that his laws hold only in special frames? Well, just maybe. On the one hand, when he explicitly articulates the principle of the equivalence of hypotheses (for instance in Specimen of Dynamics) he tends to say only that one cannot assign initial velocities on the basis of the outcome of a collision, which requires only Galilean relativity. However, he confusingly also claimed (On Copernicanism and the Relativity of Motion, also in Garber and Ariew 1989) that the Tychonic and Copernican hypotheses were equivalent. But if the Earth orbits the Sun in an inertial frame (Copernicus), then there is no inertial frame according to which the Sun orbits the Earth (Tycho Brahe), and vice versa: these hypotheses are simply not Galilean equivalent (something else Leibniz could hardly have failed to notice). So there is some textual support for Leibniz endorsing general relativity, as the folk reading maintains. A number of commentators have suggested solutions to the puzzle of the conflicting pronouncements that Leibniz makes on the subject, but arguably none is completely successful in reconciling all of them (Stein 1977 argues for general relativity, while Roberts 2003 argues the opposite; see also Lodge 2003).
So the folk reading simply ignores Leibniz's metaphysics of motion, commits Leibniz to a mathematical howler regarding his laws, and is at best a questionable rendering of his pronouncements concerning relativity; it certainly cannot be accepted unquestioningly. However, it is not hard to understand the temptation of the folk reading. In his Correspondence with Clarke, Leibniz says that he believes space to be "something merely relative, as time is, … an order of coexistences, as time is an order of successions" (LIII.4), which is naturally taken to mean that space is at base nothing but the distance and temporal relations between bodies. (Though even this passage has its subtleties, because of the ideality of space discussed above, and because in Leibniz's conception space determines what sets of relations are possible.) And if relative distances and times exhaust the spatiotemporal in this way, then shouldn't all quantities of motion be defined in terms of those relations? We have seen two ways in which this would be the wrong conclusion to draw: force seems to involve a notion of speed that is not identified with any relative speed, and (unless the equivalence of hypotheses is after all a principle of general relativity) the laws pick out a standard of constant motion that need not be any constant relative motion. Of course, it is hard to reconcile these quantities with the view of space and time that Leibniz proposes — what is speed in size times speed² or constant speed if not speed relative to some body or to absolute space? Given Leibniz's view that space is literally ideal (and indeed that even relative motion is not ‘entirely real’) perhaps the best answer is that he took force and hence motion in its real sense not to be determined by motion in a relative sense at all, but to be primitive monadic quantities. That is, he took x moves to be a complete predicate, but he believed that it could be fully analyzed in terms of strictly monadic predicates: x moves iff x possesses-non-zero-derivative-active-force. And this reading explains just what Leibniz took us to be supposing when we ‘supposed certain bodies to be unchanged’ in the construction of the idea of space: that they had no force, nothing causing, or making real, any motion.
It's again helpful to compare Leibniz with Descartes and Newton, this time regarding motion. Commentators often express frustration at Leibniz's response to Newton's arguments for absolute space: “I find nothing … in the Scholium that proves or can prove the reality of space in itself. However, I grant that there is a difference between an absolute true motion of a body and a mere relative change …” (LV.53). Not only does Leibniz apparently fail to take the argument seriously, he then goes on to concede the step in the argument that seems to require absolute space! But with our understanding of Newton and Leibniz, we can see that what he says makes perfect sense (or at least that it is not as disingenuous as it is often taken to be). Newton argues in the Scholium that true motion cannot be identified with the kinds of motion that Descartes considers; but both of these are purely relative motions, and Leibniz is in complete agreement that merely relative motions are not true (i.e., ‘entirely real’). Leibniz's ‘concession’ merely registers his agreement with Newton against Descartes on the difference between true and relative motion; he surely understood who and what Newton was refuting, and it was a position that he had himself, in different terms, publicly argued against at length. But as we have seen, Leibniz had a very different analysis of the difference to Newton's; true motion was not, for him, a matter of motion relative to absolute space, but the possession of quantity of force, ontologically prior to any spatiotemporal quantities at all. There is indeed nothing in the Scholium explicitly directed against that view, and since it does potentially offer an alternative way of understanding true motion, it is not unreasonable for Leibniz to claim that there is no deductive inference from true motion to absolute space.
The folk reading misrepresents Leibniz: it has him seeking a theory of mechanics formulated in terms only of the relations between bodies. As we'll see presently, in the nineteenth century Ernst Mach indeed proposed such an approach, but Leibniz clearly did not; though certain similarities between Leibniz and Mach — especially the rejection of absolute space — surely help explain the confusion between the two. But not only is Leibniz often misunderstood, there are influential misreadings of Newton's arguments in the Scholium, influenced by the idea that he is addressing Leibniz in some way. Of course the Principia was written 30 years before the Correspondence, and the arguments of the Scholium were not written with Leibniz in mind, but Clarke himself suggests (CIV.13) that those arguments — specifically those concerning the bucket — are telling against Leibniz. That argument is indeed devastating to a general principle of relativity — the parity of all relative motions — but we have seen that it is highly questionable whether Leibniz's equivalence of hypotheses amounts to such a view. That said, his statements in the first four letters of the Correspondence could understandably mislead Clarke on this point — it is in reply to Clarke's challenge that Leibniz explicitly denies the parity of relative motions. But interestingly, Clarke does not present a faithful version of Newton's argument — despite some involvement of Newton in writing the replies. Instead of the argument from the uniqueness of the rate of rotation, he argues that systems with different velocities must be different because the effects observed if they were brought to rest would be different. This argument is of course utterly question-begging against a view that holds that there is no privileged standard of rest!
As we discuss in Section 8, Mach attributed to Newton the fallacious argument that because the surface of the water curved even when it was not in motion relative to the bucket, it must be rotating relative to absolute space. Our discussion of Newton showed how misleading such a reading is. In the first place, Newton argues that there must be some privileged sense of rotation, and hence that not all relative motions are equal. Second, the argument is ad hominem against Descartes, in which context a disjunctive syllogism — motion is either proper or ordinary or relative to absolute space — is argumentatively legitimate. On the other hand, Mach is quite correct that Newton's argument in the Scholium leaves open the logical possibility that the privileged, true sense of rotation (and acceleration more generally) is some species of relative motion; if not motion properly speaking, then relative to the fixed stars perhaps. (In fact Newton rejects this possibility in De Gravitatione (1962) on the grounds that it would involve an odious action at a distance; an ironic position given his theory of universal gravity.)
However the kind of folk-reading of Newton that underlies much of the contemporary literature replaces Mach's interpretation with a more charitable one. According to this reading, Newton's point is that his mechanics — unlike Descartes' — could explain why the surface of the rotating water is curved, that his explanation involves a privileged sense of rotation, and that absent an alternative hypothesis about its relative nature, we should accept absolute space. But our discussion of Newton's argument showed that it simply does not have an ‘abductive’, ‘best explanation’ form, but shows deductively, from Cartesian premises, that rotation is neither proper nor ordinary motion.
That is not to say that Newton had no understanding of how such effects would be explained in his mechanics. For instance, in Corollaries 5 and 6 to the laws of the Principia he states in general terms the conditions under which different states of motion are not — and so by implication are — discernible according to his laws of mechanics. Nor is it to say that Newton's contemporaries weren't seriously concerned with explaining inertial effects. Leibniz, for instance, analyzed a rotating body (in the Specimen). In short, parts of a rotating system collide with the surrounding matter and are continuously deflected, into a series of linear motions that form a curved path. But the system as Leibniz envisions it — comprised of a plenum of elastic particles of matter — is far too complex for him to offer any quantitative model based on this qualitative picture. (In the context of the proposed ‘abductive’ reading of Newton, note that this point is telling against a rejection of intrinsic rigidity or forces acting at a distance, not narrow relationism; it is the complexity of collisions in a plenum that stymies analysis. And since Leibniz's collision theory requires a standard of inertial motion, even if he had explained inertial effects, he would not have thereby shown that all motions are relative, much less that all are equal.)
Although the argument is then not Newton's, it is still an important response to the kind of relationism proposed by the folk-Leibniz, especially when it is extended by bringing in a further example from Newton's Scholium. Newton considered a pair of identical spheres, connected by a cord, too far from any bodies to observe any relative motions; he pointed out that their rate and direction of rotation could still be experimentally determined by measuring the tension in the cord, and by pushing on opposite faces of the two globes to see whether the tension increased or decreased. He intended this simple example to demonstrate that the project he undertook in the Principia, of determining the absolute accelerations and hence gravitational forces on the planets from their relative motions, was possible. However, if we further specify that the spheres and cord are rigid and that they are the only things in their universe, then the example can be used to point out that there are infinitely many different rates of rotation all of which agree on the relations between bodies. Since there are no differences in the relations between bodies in the different situations, it follows that the observable differences between the states of rotation cannot be explained in terms of the relations between bodies. Therefore, a theory of the kind attributed to the folk's Leibniz cannot explain all the phenomena of Newtonian mechanics, and again we can argue abductively for absolute space. (Of course, the argument works by showing that, granted the different states of rotation, there are states of rotation that cannot merely be relative rotations of any kind; for the differences cannot be traced to any relational differences. That is, granted the assumptions of the argument, rotation is not true relative motion of any kind.)
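A back-of-the-envelope version of the globes (ours, in ordinary Newtonian terms): for two globes of mass m joined by a cord of length d, each circles the common center at radius d/2, so the tension is T = mω²d/2 and the rotation rate can be recovered from it, with no outside bodies in view.

```python
import math

def tension(m, d, omega):
    """Cord tension: each globe needs centripetal force m * omega^2 * (d/2)."""
    return m * omega**2 * (d / 2)

def rotation_rate(m, d, T):
    """Invert: the measured tension fixes the rate of rotation."""
    return math.sqrt(2 * T / (m * d))

m, d = 1.0, 2.0  # arbitrary illustrative values
for omega in (0.0, 1.0, 3.0):
    T = tension(m, d, omega)
    print(f"omega = {omega:.1f}, tension = {T:.2f}, "
          f"recovered omega = {rotation_rate(m, d, T):.1f}")
```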
This argument (neither its premises nor its conclusion) is not Newton's, and must not be taken as a historically accurate reading. However, that is not to say that the argument is fallacious, and indeed many have found it attractive, particularly as a defense not of Newton's absolute space, but of Galilean spacetime. That is, Newtonian mechanics with Galilean spacetime can explain the phenomena associated with rotation, while theories of the kind proposed by Mach cannot explain the differences between situations allowed by Newtonian mechanics; but these explanations rely on the geometric structure of Galilean spacetime — particularly its connection — to interpret acceleration. And thus — the argument goes — those explanations commit us to the reality of spacetime — a manifold of points — whose properties include the appropriate geometric ones. This final doctrine, of the reality of spacetime with its component points or regions, distinct from matter, with geometric properties, is what we earlier identified as ‘substantivalism’.
There are two points to make about this line of argument. First, the relationist could reply that he need not explain all situations which are possible according to Newtonian mechanics, because that theory is to be rejected in favor of one which invokes only distance and time relations between bodies, but which approximates Newton's theory when matter is distributed suitably. Such a relationist would be following Mach's proposal, which we will discuss next. Such a position would be satisfactory only to the extent that a suitable concrete replacement for Newton's theory is developed; Mach never offered such a theory, but recently more progress has been made.
Second, one must be careful in understanding just how the argument works, for it is tempting to gloss it by saying that in Newtonian mechanics the connection is a crucial part of the explanation of the surface of the water in the bucket, and if the spacetime which carries the connection is denied, then the explanation fails too. But this gloss tacitly assumes that Newtonian mechanics can only be understood in a substantival Galilean spacetime; if an interpretation of Newtonian mechanics that does not assume substantivalism can be constructed, then all Newtonian explanations can be given without a literal connection. Both Sklar (1974) and van Fraassen (1985) have made proposals along these lines. Sklar proposes interpreting ‘true’ acceleration as a primitive quantity not defined in terms of motion relative to anything, be it absolute space, a connection or other bodies. (Notice the family resemblance between this proposal and Leibniz's view of force and speed.) Van Fraassen proposes formulating mechanics as ‘Newton's Laws hold in some frame’, so that the form of the laws and the ways bodies move pick out a standard of inertial motion, not absolute space or a connection, or any instantaneous relations. These proposals aim to keep the full explanatory resources of Newtonian mechanics, and hence admit ‘true acceleration’, but deny any relations between bodies and spacetime itself. Like the actual Leibniz, they allow absolute quantities of motion, but claim that space and time themselves are nothing but the relations between bodies. Of course, such views raise the question of how a motion can be not relative to anything at all, and how we are to understand the privileging of frames; Huggett (2006) contains a proposal for addressing these problems. (Note that Sklar and van Fraassen are committed to the idea that in some sense Newton's laws are capable of explaining all the phenomena without recourse to spacetime geometry; that the connection and the metrical properties are explanatorily redundant. A similar view is defended in the context of relativity in Brown 2005.)
Between the time of Newton and Leibniz and the 20th century, Newton's mechanics and gravitation theory reigned essentially unchallenged, and with that long period of dominance, absolute space came to be widely accepted. At least, no natural philosopher or physicist offered a serious challenge to Newton's absolute space, in the sense of offering a rival theory that dispenses with it. But like the action at a distance in Newtonian gravity, absolute space continued to provoke metaphysical unease. Seeking a replacement for the unobservable Newtonian space, Neumann (1870) and Lange (1885) developed more concrete definitions of the reference frames in which Newton's laws hold. In these and a few other works, the concept of the set of inertial frames was first clearly expressed, though it was implicit in both remarks and procedures to be found in the Principia. (See the entries on space and time: inertial frames and Newton's views on space, time, and motion) The most sustained, comprehensive, and influential attack on absolute space was made by Ernst Mach in his Science of Mechanics (1883).
In a lengthy discussion of Newton's Scholium on absolute space, Mach accuses Newton of violating his own methodological precepts by going well beyond what the observational facts teach us concerning motion and acceleration. Mach at least partly misinterpreted Newton's aims in the Scholium, and inaugurated a reading of the bucket argument (and by extension the globes argument) that has largely persisted in the literature since. Mach viewed the argument as directed against a ‘strict’ or ‘general-relativity’ form of relationism, and as an attempt to establish the existence of absolute space. Mach points out the obvious gap in the argument when so construed: the experiment only establishes that acceleration (rotation) of the water with respect to the Earth, or the frame of the fixed stars, produces the tendency to recede from the center; it does not prove that a strict relationist theory cannot account for the bucket phenomena, much less the existence of absolute space. (The reader will recall that Newton's actual aim was simply to show that Descartes' two kinds of motion are not adequate to accounting for rotational phenomena.) Although Mach does not mention the globes thought experiment specifically, it is easy to read an implicit response to it in the things he does say: nobody is competent to say what would happen, or what would be possible, in a universe devoid of matter other than two globes. So neither the bucket nor the globes can establish the existence of absolute space.
Both in Mach's interpretations of Newton's arguments and in his replies, one can already see two anti-absolute space viewpoints emerge, though Mach himself never fully kept them apart. The first strain, which we may call ‘Mach-lite’, criticizes Newton's postulation of absolute space as a metaphysical leap that is neither justified by actual experiments, nor methodologically sound. The remedy offered by Mach-lite is simple: we should retain Newton's mechanics and use it just as we already do, but eliminate the unnecessary posit of absolute space. In its place we need only substitute the frame of the fixed stars, as is the practice in astronomy in any case. If we find the incorporation of a reference to contingent circumstances (the existence of a single reference frame in which the stars are more or less stationary) in the fundamental laws of nature problematic (which Mach need not, given his official positivist account of scientific laws), then Mach suggests that we replace the 1st law with an empirically equivalent mathematical rival:
Mach's equation (1960, 287):
\[
\frac{d^{2}}{dt^{2}}\!\left(\frac{\sum_{i} m_{i} r_{i}}{\sum_{i} m_{i}}\right) = 0
\]
where the \(m_i\) are the masses of the bodies in the universe and \(r_i\) is the distance of the body in question from the \(i\)-th body.
The sums in this equation are to be taken over all massive bodies in the universe. Since the top sum is weighted by distance, distant masses count much more than near ones. In a world with a (reasonably) static distribution of heavy distant bodies, such as we appear to live in, the equation entails local conservation of linear momentum in ‘inertial’ frames. The upshot of this equation is that the frame of the fixed stars plays exactly the role of absolute space in the statement of the 1st law. (Notice that this equation, unlike Newton's first law, is not vectorial.) This proposal does not, by itself, offer an alternative to Newtonian mechanics, and as Mach himself pointed out, the law is not well-behaved in an infinite universe filled with stars; but the same can perhaps be said of Newton's law of gravitation (see Malament 1995, and Norton 1993). But Mach did not offer this equation as a proposed law valid in any circumstances; he avers, “it is impossible to say whether the new expression would still represent the true condition of things if the stars were to perform rapid movements among one another.” (p. 289)
It is not clear whether Mach offered this revised first law as a first step toward a theory that would replace Newton's mechanics, deriving inertial effects from only relative motions, as Leibniz desired. But many other remarks made by Mach in his chapter criticizing absolute space point in this direction, and they have given birth to the Mach-heavy view, later to be christened "Mach's Principle" by Albert Einstein. The Mach-heavy viewpoint calls for a new mechanics that invokes only relative distances and (perhaps) their 1st and 2nd time derivatives, and that is thus ‘generally relativistic’ in the sense sometimes read into Leibniz's remarks about motion. Mach wished to eliminate absolute time from physics too, so he would have wanted a proper relationist reduction of these derivatives also. The Barbour-Bertotti theories, discussed below, provide this.
Mach-heavy apparently involves the prediction of novel effects due to ‘merely’ relative accelerations. Mach hints at such effects in his criticism of Newton's bucket:
Newton's experiment with the rotating vessel of water simply informs us that the relative rotation of the water with respect to the sides of the vessel produces no noticeable centrifugal forces, but that such forces are produced by its relative rotation with respect to the mass of the earth and the other celestial bodies. No one is competent to say how the experiment would turn out if the sides of the vessel [were] increased until they were ultimately several leagues thick. (1883, 284.)
The suggestion here seems to be that the relative rotation in stage (i) of the experiment might immediately generate an outward force (before any rotation is communicated to the water), if the sides of the bucket were massive enough.
More generally, Mach-heavy involves the view that all inertial effects should be derived from the motions of the body in question relative to all other massive bodies in the universe. The water in Newton's bucket feels an outward pull due (mainly) to the relative rotation of all the fixed stars around it. Mach-heavy is a speculation that an effect something like electromagnetic induction should be built into gravity theory. (Such an effect does exist according to the General Theory of Relativity, and is called ‘gravitomagnetic induction’. The recently finished Gravity Probe B mission was designed to measure the gravitomagnetic induction effect due to the Earth's rotation.) Its specific form must fall off with distance much more slowly than 1/r², if it is to be empirically similar to Newtonian physics; but it will certainly predict experimentally testable novel behaviors. A theory that satisfies all the goals of Mach-heavy would appear to be ideal for the vindication of strict relationism and the elimination of absolute quantities of motion from mechanics.
Direct assault on the problem of satisfying Mach-heavy in a classical framework proved unsuccessful, despite the efforts of others besides Mach (e.g., Friedländer 1896, Föppl 1904, Reissner 1914, 1915), until the work of Barbour and Bertotti in the 1970s and 80s. (Between the late 19th century and the 1970s, there was of course one extremely important attempt to satisfy Mach-heavy: the work of Einstein that led to the General Theory of Relativity. Since Einstein's efforts took place in a non-classical (Lorentz/Einstein/Minkowski) spacetime setting, we discuss them in the next section.) Rather than formulating a revised law of gravity/inertia using relative quantities, Barbour and Bertotti attacked the problem using the framework of Lagrangian mechanics, replacing the elements of the action that involve absolute quantities of motion with new terms invoking only relative distances, velocities etc. Their first (1977) theory uses a very simple and elegant action, and satisfies everything one could wish for from a Mach-heavy theory: it is relationally pure (even with respect to time: while simultaneity is absolute, the temporal metric is derived from the field equations); it is nearly empirically equivalent to Newton's theory in a world such as ours (with a large-scale uniform, near-stationary matter distribution); yet it does predict novel effects such as the ones Mach posited with his thick bucket. Among these is an ‘anisotropy of inertia’ effect — accelerating a body away from the galactic center requires more force than accelerating it perpendicular to the galactic plane — large enough to be ruled out empirically.
Barbour and Bertotti's second attempt (1982) at a relational Lagrangian mechanics was arguably less Machian, but more empirically adequate. In it, solutions are sought beginning with two temporally-nearby, instantaneous relational configurations of the bodies in the universe. Barbour and Bertotti define an ‘intrinsic difference’ parameter that measures how different the two configurations are. In the solutions of the theory, this intrinsic difference quantity gets minimized, as well as the ordinary action, and in this way full solutions are derived despite not starting from a privileged inertial-frame description. The theory they end up with turns out to be, in effect, a fragment of Newtonian theory: the set of models of Newtonian mechanics and gravitation in which there is zero net angular momentum. This result makes perfect sense in terms of strict relationist aims. In a Newtonian world in which there is a nonzero net angular momentum (e.g., a lone rotating island galaxy), this fact reveals itself in the classic “tendency to recede from the center”. Since a strict relationist demands that bodies obey the same mechanical laws even in ‘rotating’ coordinate systems, there cannot be any such tendency to recede from the center (other than in a local subsystem), in any of the relational theory's models. Since cosmological observations, even today, reveal no net angular momentum in our world, the second Barbour & Bertotti theory can lay claim to exactly the same empirical successes (and problems) that Newtonian physics had. The second theory does not predict the (empirically falsified) anisotropy of inertia derivable from the first; but neither does it allow a derivation of the precession of the orbit of Mercury, which the first theory does (for appropriately chosen cosmic parameters).
Mach-lite, like the relational interpretations of Newtonian physics reviewed in section 5, offers us a way of understanding Newtonian physics without accepting absolute position, velocity or acceleration. But it does so in a way that lacks theoretical clarity and elegance, since it does not delimit a clear set of cosmological models. We know that Mach-lite makes the same predictions as Newton for worlds in which there is a static frame associated with the stars and galaxies; but if asked about how things will behave in a world with no frame of fixed stars, or in which the stars are far from ‘fixed’, it shrugs and refuses to answer. (Recall that Mach-lite simply says: “Newton's laws hold in the frame of reference of the fixed stars.”) This is perfectly acceptable according to Mach's philosophy of science, since the job of mechanics is simply to summarize observable facts in an economical way. But it is unsatisfying to those with stronger realist intuitions about laws of nature.
If there is, in fact, a distinguishable privileged frame of reference in which the laws of mechanics take on a specially simple form, without that frame being determined in any way by relation to the matter distribution, a realist will find it hard to resist the temptation to view motions described in that frame as the ‘true’ or ‘absolute’ motions. If there is a family of such frames, disagreeing about velocity but all agreeing about acceleration, she will feel a temptation to think of at least acceleration as ‘true’ or ‘absolute’. If such a realist believes motion to be by nature a relation rather than a property (and as we saw in the introduction, not all philosophers accept this) then she will feel obliged to accord some sort of existence or reality to the structure — e.g., the structure of Galilean spacetime — in relation to which these motions are defined. For philosophers with such realist inclinations, the ideal relational account of motion would therefore be some version of Mach-heavy.
The Special Theory of Relativity (STR) is notionally based on a principle of relativity of motion; but that principle is ‘special’ — meaning, restricted. The relativity principle built into STR is in fact nothing other than the Galilean principle of relativity, which is built into Newtonian physics. In other words, while there is no privileged standard of velocity, there is nevertheless a determinate fact of the matter about whether a body has accelerated or non-accelerated (i.e., inertial) motion. In this regard, the spacetime of STR is exactly like Galilean spacetime (defined in section 5 above). In terms of the question of whether all motion can be considered purely relative, one could argue that there is nothing new brought to the table by the introduction of Einstein's STR — at least, as far as mechanics is concerned.
As Dorling (1978) first pointed out, however, there is a sense in which the standard absolutist arguments against ‘strict’ relationism using rotating objects (buckets or globes) fail in the context of STR. Maudlin (1993) used the same considerations to show that there is a way of recasting relationism in STR that appears to be very successful.
STR incorporates certain novelties concerning the nature of time and space, and how they mesh together; perhaps the best-known examples are the phenomena of ‘length contraction’, ‘time dilation’, and the ‘relativity of simultaneity.’ Since in STR both spatial distances and time intervals — when measured in the standard ways — are observer-relative (observers in different states of motion ‘disagreeing’ about their sizes), it is arguably most natural to restrict oneself to the invariant spacetime separation given by the interval between two points: [dx² + dy² + dz² − dt²] — the four-dimensional analog of the Pythagorean theorem, for spacetime distances. If one regards the spacetime interval relations between masses-at-times as one's basis on which space-time is built up as an ideal entity, then with only mild caveats relationism works: the ‘relationally pure’ facts suffice to uniquely fix how the material systems are embeddable (up to isomorphism) in the ‘Minkowski’ spacetime of STR. The modern variants of Newton's bucket and globes arguments no longer stymie the relationist because (for example) the spacetime interval relations among bits of matter in Newton's bucket at rest are quite different from the spacetime interval relations found among those same bits of matter after the bucket is rotating. For example, the spacetime interval relation between a bit of water near the side of the bucket, at one time, and itself (say) a second later is smaller than the interval relation between a center-bucket bit of water and itself one second later (times referred to inertial-frame clocks). The upshot is that, unlike the situation in classical physics, a body at rest cannot have all the same spatial relations among its parts as a similar body in rotation. We cannot put a body or system into a state of rotation (or other acceleration) without thereby changing the spacetime interval relations between the various bits of matter at different moments of time. Rotation and acceleration supervene on spacetime interval relations.
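A toy calculation makes the bucket point vivid. Working in units where c = 1, and with a wildly exaggerated rim speed (a real bucket's rim moves at roughly 10⁻⁸ c; the numbers here are invented purely so the effect survives rounding), the self-interval of a rim bit over one unit of lab time shrinks once the bucket rotates:

```python
import numpy as np

c = 1.0        # work in units where c = 1
T = 1.0        # one unit of inertial-frame (lab) time
v_rim = 0.5    # rim speed as a fraction of c -- deliberately exaggerated;
               # a real bucket's rim moves at about 1e-8 c

def self_interval(speed, lab_time):
    """Interval between a bit of matter and itself lab_time later:
    sqrt(dt^2 - dx^2 - dy^2 - dz^2), with c = 1."""
    return lab_time * np.sqrt(1.0 - (speed / c) ** 2)

print(f"center bit, self-interval over T: {self_interval(0.0, T):.4f}")
print(f"rim bit,    self-interval over T: {self_interval(v_rim, T):.4f}")
# 1.0000 vs 0.8660: once the bucket rotates, the interval relations
# among its parts differ from those of the bucket at rest.
```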
It is worth pausing to consider to what extent this victory for (some form of) relationism satisfies the classical ‘strict’ relationism traditionally ascribed to Mach and Leibniz. The spatiotemporal relations that save the day against the bucket and globes are, so to speak, mixed spatial and temporal distances. They are thus quite different from the spatial-distances-at-a-time presupposed by classical relationists; moreover they do not correspond to relative velocities (-at-a-time) either. Their oddity is forcefully captured by noticing that if we choose appropriate bits of matter at ‘times’ eight minutes apart, I-now am at zero distance from the surface of the sun (of eight minutes ‘past’, since it took 8 minutes for light from the sun to reach me-now). So we are by no means dealing here with an innocuous, ‘natural’ translation of classical relationist quantities into the STR setting. On the other hand, in light of the relativity of simultaneity, it can be argued that the absolute simultaneity presupposed by classical relationists and absolutists alike was, in fact, something that relationists should always have regarded with misgivings. From this perspective, instantaneous relational configurations — precisely what one starts with in the theories of Barbour and Bertotti — would be the things that should be treated with suspicion.
If we now return to our questions about motions — about the nature of velocities and accelerations — we find, as noted above, that matters in the interval-relational interpretation of STR are much the same as in Newtonian mechanics in Galilean spacetime. There are no well-defined absolute velocities, but there are indeed well-defined absolute accelerations and rotations. In fact, the difference between an accelerating body (e.g., a rocket) and an inertially moving body is codified directly in the cross-temporal interval relations of the body with itself. So we are very far from being able to conclude that all motion is relative motion of a body with respect to other bodies. It is true that the absolute motions are in 1-1 correlation with patterns of spacetime interval relations, but it is not at all correct to say that they are, for that reason, eliminable in favor of merely relative motions. Rather we should simply say that no absolute acceleration can fail to have an effect on the material body or bodies accelerated. But this was already true in classical physics if matter is modeled realistically: the cord connecting the globes does not merely tense, but also stretches; and so does the bucket, even if imperceptibly, i.e., the spatial relations change.
Maudlin does not claim this version of relationism to be victorious over an absolutist or substantivalist conception of Minkowski spacetime, when it comes time to make judgments about the theory's ontology. There may be more to vindicating relationism than merely establishing a 1-1 correlation between absolute motions and patterns of spatiotemporal relations.
The simple comparison made above between STR and Newtonian physics in Galilean spacetime is somewhat deceptive. For one thing, Galilean spacetime is a mathematical innovation posterior to Einstein's 1905 theory; before then, Galilean spacetime had not been conceived, and full acceptance of Newtonian mechanics implied accepting absolute velocities and, arguably, absolute positions, just as laid down in the Scholium. So Einstein's elimination of absolute velocity was a genuine conceptual advance. Moreover, the Scholium was not the only reason for supposing that there existed a privileged reference frame of ‘rest’: the working assumption of almost all physicists in the latter half of the 19th century was that, in order to understand the wave theory of light, one had to postulate an aetherial medium filling all space, wave-like disturbances in which constituted electromagnetic radiation. It was assumed that the aether rest frame would be an inertial reference frame; and physicists felt some temptation to equate its frame with the absolute rest frame, though this was not necessary. Regardless of this equation of the aether with absolute space, it was assumed by all 19th century physicists that the equations of electrodynamic theory would have to look different in a reference frame moving with respect to the aether than they did in the aether's rest frame (where they presumably take their canonical form, i.e., Maxwell's equations and the Lorentz force law.) So while theoreticians labored to find plausible transformation rules for the electrodynamics of moving bodies, experimentalists tried to detect the Earth's motion in the aether. Experiment and theory played collaborative roles, with experimental results ruling out certain theoretical moves and suggesting new ones, while theoretical advances called for new experimental tests for their confirmation or — as it happened — disconfirmation.
As is well known, attempts to detect the Earth's velocity in the aether were unsuccessful. On the theory side, attempts to formulate the transformation laws for electrodynamics in moving frames — in such a way as to be compatible with experimental results — were complicated and inelegant. A simplified way of seeing how Einstein swept away a host of problems at a stroke is this: he proposed that the Galilean principle of relativity holds for Maxwell's theory, not just for mechanics. The canonical (‘rest-frame’) form of Maxwell's equations should be their form in any inertial reference frame. Since the Maxwell equations dictate the velocity c of electromagnetic radiation (light), this entails that any inertial observer, no matter how fast she is moving, will measure the velocity of a light ray as c — no matter what the relative velocity of its emitter. Einstein worked out logically the consequences of this application of the special relativity principle, and discovered that space and time must be rather different from how Newton described them. STR undermined Newton's absolute time just as decisively as it undermined his absolute space.
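One consequence Einstein derived is the relativistic velocity-addition law, which shows directly how every inertial observer can measure the same speed for light:

$$
w \;=\; \frac{u+v}{1+uv/c^{2}}, \qquad u=c \;\Longrightarrow\; w=\frac{c+v}{1+v/c}=c\,,
$$

whatever the emitter's velocity v. The constancy of c is thus not an ad hoc stipulation layered on top of the kinematics; it falls out of the transformation rules themselves.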
Einstein's STR was the first clear and empirically successful physical theory to overtly eliminate the concepts of absolute rest and absolute velocity while recovering most of the successes of classical mechanics and 19th century electrodynamics. It therefore deserves to be considered the first highly successful theory to explicitly relativize motion, albeit only partially. But STR only recovered most of the successes of classical physics: crucially, it left out gravity. And there was certainly reason to be concerned that Newtonian gravity and STR would prove incompatible: classical gravity acted instantaneously at a distance, while STR eliminated the privileged absolute simultaneity that this instantaneous action presupposes.
Several ways of modifying Newtonian gravity to make it compatible with the spacetime structure of STR suggested themselves to physicists in the years 1905-1912, and a number of interesting Lorentz-covariant theories were proposed (set in the Minkowski spacetime of STR). Einstein rejected these efforts one and all, for violating either empirical facts or theoretical desiderata. But Einstein's chief reason for not pursuing the reconciliation of gravitation with STR's spacetime appears to have been his desire, beginning in 1907, to replace STR with a theory in which not only velocity could be considered merely relative, but also acceleration. That is to say, Einstein wanted if possible to completely eliminate all absolute quantities of motion from physics, thus realizing a theory that satisfies at least one kind of ‘strict’ relationism. (Regarding Einstein's rejection of Lorentz-covariant gravity theories, see Norton 1992; regarding Einstein's quest to fully relativize motion, see Hoefer 1994.)
Einstein began to see this complete relativization as possible in 1907, thanks to his discovery of the Equivalence Principle. Imagine we are far out in space, in a rocket ship accelerating at a constant rate g = 9.8 m/s². Things will feel just like they do on the surface of the Earth; we will feel a clear up-down direction, bodies will fall to the floor when released, etc. Indeed, due to the well-known empirical fact that gravity affects all bodies by imparting a force proportional to their matter (and energy) content, independent of their internal constitution, we know that any experiment performed on this rocket will give the same results that the same experiment would give if performed on the Earth. Now, Newtonian theory teaches us to consider the apparent downward, gravity-like forces in the rocket ship as ‘pseudo-forces’ or ‘inertial forces’, and insists that they are to be explained by the fact that the ship is accelerating in absolute space. But Einstein asked: “Is there any way for the person in the rocket to regard him/herself as being ‘at rest’ rather than in absolute (accelerated) motion?” And the answer he gave is: Yes. The rocket traveler may regard him/herself as being ‘at rest’ in a homogeneous and uniform gravitational field. This will explain all the observational facts just as well as the supposition that he/she is accelerating relative to absolute space (or, absolutely accelerating in Minkowski spacetime). But is it not clear that the latter is the truth, while the former is a fiction? By no means; if there were a uniform gravitational field filling all space, then it would affect all the other bodies in the world — the Earth, the stars, etc, imparting to them a downward acceleration away from the rocket; and that is exactly what the traveler observes.
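A minimal kinematic sketch of this equivalence (Newtonian approximation, invented numbers): track a released ball from the inertial frame, transform into the rocket's coordinates, and compare with a ball dropped in a uniform field.

```python
import numpy as np

g = 9.8                        # rocket's constant acceleration, m/s^2
h = 1.0                        # ball is released 1 m above the floor
t = np.linspace(0.0, 0.45, 10) # seconds after release (toy numbers)

# Inertial-frame story: the released ball floats freely at height h while
# the rocket floor accelerates upward to meet it.
ball_inertial = np.full_like(t, h)
floor_inertial = 0.5 * g * t**2

# Rocket-frame story: the ball's height above the floor.
height_in_rocket = ball_inertial - floor_inertial

# Earth story: a ball dropped from height h in a uniform field g.
height_on_earth = h - 0.5 * g * t**2

print(np.allclose(height_in_rocket, height_on_earth))  # True
# Inside the cabin, no measurement of the ball distinguishes the two stories.
```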
In 1907, Einstein published his first gravitation theory (Einstein 1907), treating the gravitational field as a scalar field that also represented the (now variable and frame-dependent) speed of light. Einstein viewed the theory as only a first step on the road to eliminating absolute motion. In the 1907 theory, the theory's equations take the same form in any inertial or uniformly accelerating frame of reference. One might say that this theory reduces the class of absolute motions, leaving only rotation and other non-uniform accelerations as absolute. But, Einstein reasoned, if uniform acceleration can be regarded as equivalent to being at rest in a constant gravitational field, why should it not be possible also to regard inertial effects from these other, non-uniform motions as similarly equivalent to “being at rest in a (variable) gravitational field”? Thus Einstein set himself the goal of expanding the principle of equivalence to embrace all forms of ‘accelerated’ motion.
Einstein thought that the key to achieving this aim lay in further expanding the range of reference frames in which the laws of physics take their canonical form, to include frames adapted to any arbitrary motions. More specifically, since the class of all continuous and differentiable coordinate systems includes as a subclass the coordinate systems adapted to any such frame of reference, if he could achieve a theory of gravitation, electromagnetism and mechanics that was generally covariant — its equations taking the same form in any coordinate system from this general class — then the complete relativity of motion would be achieved. If there are no special frames of reference in which the laws take on a simpler canonical form, there is no physical reason to consider any particular state or states of motion as privileged, nor deviations from those as representing ‘absolute motion’. (Here we are just laying out Einstein's train of thought; later we will see reasons to question the last step.) And in 1915, Einstein achieved his aim in the General Theory of Relativity (GTR).
There is one key element left out of this success story, however, and it is crucial to understanding why most physicists reject Einstein's claim to have eliminated absolute states of motion in GTR. Going back to our accelerating rocket, we accepted Einstein's claim that we could regard the ship as hovering at rest in a universe-filling gravitational field. But a gravitational field, we usually suppose, is generated by matter. What matter, then, generates this universe-filling field? The answer may be supplied by Mach-heavy. Regarding the ‘accelerating’ rocket which we decide to regard as ‘at rest’ in a gravitational field, the Machian says: all those stars and galaxies, etc., jointly accelerating downward (relative to the rocket), ‘produce’ that gravitational field. The mathematical specifics of how this field is generated will have to be different from Newton's law of gravity, of course; but it should give essentially the same results when applied to low-mass, slow-moving problems such as the orbits of the planets, so as to capture the empirical successes of Newtonian gravity. Einstein thought, in 1916 at least, that the field equations of GTR are precisely this mathematical replacement for Newton's law of gravity, and that they fully satisfied the desiderata of Mach-heavy relationism. But it was not so. (See the entry on early philosophical interpretations of general relativity.)
In GTR, spacetime is locally very much like flat Minkowski spacetime. There is no absolute velocity locally, but there are clear local standards of accelerated vs non-accelerated motion, i.e., local inertial frames. In these ‘freely falling’ frames bodies obey the usual rules for non-gravitational physics familiar from STR, albeit only approximately. But overall spacetime is curved, and local inertial frames may tip, bend and twist as we move from one region to another. The structure of curved spacetime is encoded in the metric field tensor g_ab, with the curvature encoding gravity at the same time: gravitational forces are so to speak ‘built into’ the metric field, geometrized away. Since the spacetime structure encodes gravity and inertia, and in a Mach-heavy theory these phenomena should be completely determined by the relational distribution of matter (and relative motions), Einstein wished to see the metric as entirely determined by the distribution of matter and energy. But what the GTR field equations entail is, in general, only a partial-determination relation.
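The field equations in question are Einstein's, relating the curvature encoded in g_ab to the stress-energy of matter:

$$
G_{ab} \;=\; R_{ab}-\tfrac{1}{2}R\,g_{ab}\;=\;\frac{8\pi G}{c^{4}}\,T_{ab}\,.
$$

These are local differential equations: fixing T_ab constrains g_ab but does not determine it outright, since boundary conditions must also be supplied; and it is through such conditions that Minkowski-like structure can re-enter.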
We cannot go into the mathematical details necessary for a full discussion of the successes and failures of Mach-heavy in the GTR context. But one can see why the Machian interpretation Einstein hoped he could give to the curved spacetimes of his theory fails to be plausible, by considering a few simple ‘worlds’ permitted by GTR. In the first place, for our hovering rocket ship, if we are to attribute the gravity field it feels to matter, there has got to be all this other matter in the universe. But if we regard the rocket as a mere ‘test body’ (not itself substantially affecting the gravity present or absent in the universe), then we can note that according to GTR, if we remove all the stars, galaxies, planets etc. from the world, the gravitational field does not disappear. On the contrary, it stays basically the same locally, and globally it takes the form of empty Minkowski spacetime, precisely the quasi-absolute structure Einstein was hoping to eliminate. Solutions of the GTR field equations for arbitrary realistic configurations of matter (e.g., a rocket ship ejecting a stream of particles to push itself forward) are hard to come by, and in fact a realistic two-body exact solution has yet to be discovered. But numerical methods can be applied for many purposes, and physicists do not doubt that something like our accelerating rocket — in otherwise empty space — is possible according to the theory. We see clearly, then, that GTR fails to satisfy Einstein's own understanding of Mach's Principle, according to which, in the absence of matter, space itself should not be able to exist. A second example: GTR allows us to model a single rotating object in an otherwise empty universe (e.g., a neutron star). Relationism of the Machian variety says that such rotation is impossible, since it can only be understood as rotation relative to some sort of absolute space. In the case of GTR, this is basically right: the rotation is best understood as rotation relative to a ‘background’ spacetime that is identical to the Minkowski spacetime of STR, only ‘curved’ by the presence of matter in the region of the star.
On the other hand, there is one charge of failure-to-relativize-motion sometimes leveled at GTR that is unfair. It is sometimes asserted that the simple fact that the metric field (or the connection it determines) distinguishes, at every location, motions that are ‘absolutely’ accelerated and/or ‘absolutely rotating’ from those that are not, by itself entails that GTR fails to embody a folk-Leibniz style general relativity of motion (e.g. Earman (1989), ch. 5). We think this is incorrect, and leads to unfairly harsh judgments about confusion on Einstein's part. The local inertial structure encoded in the metric would not be ‘absolute’ in any meaningful sense, if that structure were in some clear sense fully determined by the relationally specified matter-energy distribution. Einstein was not simply confused when he named his gravity theory. (Just what is to be understood by “the relationally specified matter-energy distribution” is a further, thorny issue, which we cannot enter into here.)
GTR does not fulfill all the goals of Mach-heavy, at least as understood by Einstein, and he recognized this fact by 1918 (Einstein 1918). And yet … GTR comes tantalizingly close to achieving those goals, in certain striking ways. For one thing, GTR does predict Mach-heavy effects, known as ‘frame-dragging’: if we could model Mach's thick-walled bucket in GTR, it seems clear that it would pull the water slightly outward, and give it a slight tendency to begin rotating in the same sense as the bucket (even if the big bucket's walls were not actually touching the water). While GTR does permit us to model a lone rotating object, if we model the object as a shell of mass (instead of a solid sphere) and let the size of the shell increase (to model the ‘sphere of the fixed stars’ we see around us), then as Brill & Cohen (1966) showed, the frame-dragging becomes complete inside the shell. In other words: our original Minkowski background structure effectively disappears, and inertia becomes wholly determined by the shell of matter, just as Mach posited was the case. This complete determination of inertia by the global matter distribution appears to be a feature of other models, including the Friedmann-Lemaître-Robertson-Walker Big Bang models that best match observations of our universe.
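For a sense of scale, the weak-field dragging rate of local inertial frames on the axis of a slowly rotating body of angular momentum J is, to leading order, roughly

$$
\Omega_{\mathrm{LT}} \;\sim\; \frac{2GJ}{c^{2}r^{3}}\,,
$$

which is minute for any laboratory bucket, but which, in the Brill-Cohen construction, grows as the shell is scaled up until the dragging inside it is complete.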
Finally, it is important to recognize that GTR is generally covariant in a very special sense: unlike all other prior theories (and unlike many subsequent quantum theories), it postulates no fixed ‘prior’ or ‘background’ spacetime structure. As mathematicians and physicists realized early on, other theories, e.g., Newtonian mechanics and STR, can be put into a generally covariant form. But when this is done, there are inevitably mathematical objects postulated as part of the formalism, whose role is to represent absolute elements of spacetime structure. What is unique about GTR is that it was the first, and is still the only ‘core’ physical theory, to have no such absolute elements in its covariant equations. The spacetime structure in GTR, represented by the metric field (which determines the connection), is at least partly ‘shaped’ by the distribution of matter and energy. And in certain models of the theory, such as the Big Bang cosmological models, some authors have claimed that the local standards of inertial motion — the local ‘gravitational field’ of Einstein's equivalence principle — are entirely fixed by the matter distribution throughout space and time, just as Mach-heavy requires (see, for example, Wheeler and Ciufolini 1995).
Absolutists and relationists are thus left in a frustrating and perplexing quandary by GTR. Considering its anti-Machian models, we are inclined to say that motions such as rotation and acceleration remain absolute, or nearly-totally-absolute, according to the theory. On the other hand, considering its most Mach-friendly models, which include all the models taken to be good candidates for representing the actual universe, we may be inclined to say: motion in our world is entirely relative; the inertial effects normally used to argue for absolute motion are all understandable as effects of rotations and accelerations relative to the cosmic matter, just as Mach hoped. But even if we agree that motions in our world are in fact all relative in this sense, this does not automatically settle the traditional relationist/absolutist debate, much less the relationist/substantivalist debate. Many philosophers would be happy to acknowledge the Mach-friendly status of our spacetime, and argue nevertheless that we should understand that spacetime as a real thing, more like a substance than a mere ideal construct of the mind as Leibniz insisted. (Nerlich (1994) and Earman (1989), we suspect, would take this stance.) Some, though not all, attempts to convert GTR into a quantum theory would accord spacetime this same sort of substantiality that other quantum fields possess.
This article has been concerned with tracing the history and philosophy of ‘absolute’ and ‘relative’ theories of space and motion. Along the way we have been at pains to introduce some clear terminology for various different concepts (e.g., ‘true’ motion, ‘substantivalism’, ‘absolute space’), but what we have not really done is say what the difference between absolute and relative space and motion is: just what is at stake? Recently Rynasiewicz (2000) has argued that there simply are no constant issues running through the history that we have discussed here; that there is no stable meaning for either ‘absolute motion’ or ‘relative motion’ (or ‘substantival space’ vs ‘relational space’). While we agree to a certain extent, we think that nevertheless there are a series of issues that have motivated thinkers again and again; indeed, those that we identified in the introduction. (One quick remark: Rynasiewicz is probably right that the issues cannot be expressed in formally precise terms, but that does not mean that there are no looser philosophical affinities that shed useful light on the history.)
Our discussion has revealed several different issues, of which we will highlight three as components of the ‘absolute-relative debate’. (i) There is the question of whether all motions and all possible descriptions of motions are equal, or whether some are ‘real’ — what we have called, in Seventeenth Century parlance, ‘true’. There is a natural temptation for those who hold that there is ‘nothing but the relative positions and motions between bodies’ (and more so for their readers) to add ‘and all such motions are equal’, thus denying the existence of true motion. However, arguably — perhaps surprisingly — no one we have discussed has unreservedly held this view (at least not consistently): Descartes considered motion ‘properly speaking’ to be privileged, Leibniz introduced ‘active force’ to ground motion (arguably in his mechanics as well as metaphysically), and Mach's view seems to be that the distribution of matter in the universe determines a preferred standard of inertial motion. (Again, in general relativity, there is a distinction between inertial and accelerated motion.)
That is, relationists can allow true motions if they offer an analysis of them in terms of the relations between bodies. Given this logical point, and given the historical ways thinkers have understood themselves, it seems unhelpful to characterize the issues in (i) as constituting an absolute-relative debate, hence our use of the term ‘true’ instead of ‘absolute’. So we are led to the second question: (ii) is true motion definable in terms of relations or not? (Of course the answer depends on what kind of definitions will count, and absent an explicit definition — Descartes' proper motion for example — the issue is often taken to be that of whether true motions supervene on relations, as Newton's globes are often supposed to refute.) It seems reasonable to call this issue that of whether motion is absolute or relative. Descartes and Mach are relationists about motion in this sense, while Newton is an absolutist. Leibniz is also an absolutist about motion in his metaphysics, and if our reading is correct, also about the interpretation of motion in the laws of collision. This classification of Leibniz's views runs contrary to his customary identification as relationist-in-chief, but we will clarify his relationist credentials below. Finally, we have discussed (ii) in the context of relativity, first examining Maudlin's proposal that the embedding of a relationally-specified system in Minkowski spacetime is in general unique once all the spacetime interval-distance relations are given. This proposal may or may not be held to satisfy the relational-definability question of (ii), but in any case it cannot be carried over to the context of general relativity theory. In the case of GTR we linked relational motion to the satisfaction of Mach's Principle, just as Einstein did in the early years of the theory. Despite some promising features displayed by GTR, and certain of its models, we saw that Mach's Principle is not fully satisfied in GTR as a whole. We also noted that in the absence of absolute simultaneity, it becomes an open question what relations are to be permitted in the definition (or supervenience base) — spacetime interval relations? Instantaneous spatial distances and velocities on a 3-d hypersurface? (In recent works, Barbour has argued that GTR is fully Machian, using a 3-d relational-configuration approach. See Barbour, Foster and Murchadha 2002.)
The final issue is that of (iii) whether absolute motion is motion with respect to substantival space or not. Of course this is how Newton understood acceleration — as acceleration relative to absolute space. More recent Newtonians share this view, although motion for them is with respect to substantival Galilean spacetime (or rather, since they know Newtonian mechanics is false, they hold that this is the best interpretation of that theory). Leibniz denied that motion was relative to space itself, since he denied the reality of space; for him true motion was the possession of active force. So despite his ‘absolutism’ (our adjective not his) about motion he was simultaneously a relationist about space: ‘space is merely relative’. Following Leibniz's lead we can call this debate the question of whether space is absolute or relative. The drawback of this name is that it suggests a separation between motion and space, which exists in Leibniz's views, but which is otherwise problematic; still, no better description presents itself.
Others who are absolutists about motion but relationists about space include Sklar (1974) and van Fraassen (1985); Sklar introduced a primitive quantity of acceleration, not supervenient on motions relative to anything at all, while van Fraassen let the laws themselves pick out the inertial frames. It is of course arguable whether any of these three proposals is successful; (even) stripped of Leibniz's Aristotelian packaging, can absolute quantities of motion ‘stand on their own feet’? And under what understanding of laws can they ground a standard of inertial motion? Huggett (2006) defends a similar position of absolutism about motion, but relationism about space; he argues — in the case of Newtonian physics — that fundamentally there is nothing to space but relations between bodies, but that absolute motions supervene — not on the relations at any one time — but on the entire history of relations.
Works cited in text
- Aristotle, 1984, The Complete Works of Aristotle: The Revised Oxford Translation, J. Barnes (ed.), Princeton: Princeton University Press.
- Barbour, J. and Bertotti, B., 1982, “Mach's Principle and the Structure of Dynamical Theories,” Proceedings of the Royal Society (London), 382: 295-306.
- –––, 1977, “Gravity and Inertia in a Machian Framework,” Nuovo Cimento, 38B: 1-27.
- Brill, D. R. and Cohen, J., 1966, “Rotating Masses and their effects on inertial frames,” Physical Review 143: 1011-1015.
- Brown, H. R., 2005, Physical Relativity: Space-Time Structure from a Dynamical Perspective, Oxford: Oxford University Press.
- Descartes, R., 1983, Principles of Philosophy, R. P. Miller and V. R. Miller (trans.), Dordrecht, London: Reidel.
- Dorling, J., 1978, “Did Einstein need General Relativity to solve the Problem of Space? Or had the Problem already been solved by Special Relativity?,” British Journal for the Philosophy of Science, 29: 311-323.
- Earman, J., 1989, World Enough and Spacetime: Absolute and Relational Theories of Motion. Boston: M.I.T. Press.
- –––, 1970, “Who's Afraid of Absolute Space?,” Australasian Journal of Philosophy, 48: 287-319.
- Einstein, A., 1918, “Prinzipielles zur allgemeinen Relativitätstheorie,” Annalen der Physik, 51: 639-642.
- –––, 1907, “Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen,” Jahrbuch der Radioaktivität und Electronik 4: 411-462.
- Einstein, A., Lorentz, H. A., Minkowski, H. and Weyl, H., 1952, The Principle of Relativity. W. Perrett and G.B. Jeffery, trs. New York: Dover Books.
- Föppl, A., 1904, “Über absolute und relative Bewegung,” Sitzungsberichte der Münchener Akad., 35: 383.
- Friedländer, B. and J., 1896, Absolute und relative Bewegung, Berlin: Leonhard Simion.
- Friedman, M., 1983, Foundations of Space-Time Theories: Relativistic Physics and Philosophy of Science, Princeton: Princeton University Press.
- Garber, D., 1992, Descartes' Metaphysical Physics, Chicago: University of Chicago Press.
- Garber, D. and J. B. Rauzy, 2004, “Leibniz on Body, Matter and Extension,” The Aristotelian Society: Supplementary Volume, 78: 23-40.
- Hartz, G. A. and J. A. Cover, 1988, “Space and Time in the Leibnizian Metaphysic,” Nous, 22: 493-519.
- Hoefer, C., 1994, “Einstein's Struggle for a Machian Gravitation Theory,” Studies in History and Philosophy of Science, 25: 287-336.
- Huggett, N., 2006, “The Regularity Account of Relational Spacetime,” Mind, 115: 41-74.
- –––, 2000, “Space from Zeno to Einstein: Classic Readings with a Contemporary Commentary,” International Studies in the Philosophy of Science, 14: 327-329.
- Lange, L., 1885, “Ueber das Beharrungsgesetz,” Berichte der Königlichen Sachsischen Gesellschaft der Wissenschaften zu Leipzig, Mathematisch-physische Classe 37 (1885): 333-51.
- Leibniz, G. W., 1989, Philosophical Essays, R. Ariew and D. Garber (trans.), Indianapolis: Hackett Pub. Co.
- Leibniz, G. W., and Samuel Clarke, 1715–1716, “Correspondence”, in The Leibniz-Clarke Correspondence, Together with Extracts from Newton's “Principia” and “Opticks”, H. G. Alexander (ed.), Manchester: Manchester University Press, 1956.
- Lodge, P., 2003, “Leibniz on Relativity and the Motion of Bodies,” Philosophical Topics, 31: 277-308.
- Mach, E., 1883, Die Mechanik in ihrer Entwickelung, historisch-kritisch dargestellt. 2nd edition. Leipzig: Brockhaus. English translation (6th edition, 1960): The Science of Mechanics, La Salle, Illinois: Open Court Press.
- Malament, D., 1995, “Is Newtonian Cosmology Really Inconsistent?,” Philosophy of Science 62, no. 4.
- Maudlin, T., 1993, “Buckets of Water and Waves of Space: Why Space-Time is Probably a Substance,” Philosophy of Science, 60:183-203.
- Minkowski, H., 1908, “Space and Time,” in Einstein, et al. (1952), pp. 75-91.
- Nerlich, Graham, 1994, The Shape of Space (2nd edition), Cambridge: Cambridge University Press.
- Neumann, C., 1870, Ueber die Principien der Galilei-Newton'schen Theorie, Leipzig: B. G. Teubner.
- Newton, I., 2004, Newton: Philosophical Writings, A. Janiak (ed.), Cambridge: Cambridge University Press.
- Newton, I. and I. B. Cohen, 1999, The Principia: Mathematical Principles of Natural Philosophy, I. B. Cohen and A. M. Whitman (trans.), Berkeley ; London: University of California Press.
- Norton, J., 1995, “Mach's Principle before Einstein,” in J. Barbour and H. Pfister (eds.) Mach's Principle: From Newton's Bucket to Quantum Gravity: Einstein Studies, Vol. 6. Boston: Birkhäuser, pp.9-57.
- Norton, J., 1993, “A Paradox in Newtonian Cosmology,” in M. Forbes , D. Hull and K. Okruhlik (eds.) PSA 1992: Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association. Vol. 2. East Lansing, MI: Philosophy of Science Association, pp. 412-20.
- –––, 1992, “Einstein, Nordström and the Early Demise of Scalar, Lorentz-Covariant Theories of Gravitation,” Archive for History of Exact Sciences, 45: 17-94.
- Pooley, O., 2002, The Reality of Spacetime, D.Phil. thesis, Oxford University.
- Ray, C., 1991, Time, Space and Philosophy, New York: Routledge.
- Roberts, J. T., 2003, “Leibniz on Force and Absolute Motion,” Philosophy of Science, 70: 553-573.
- Rynasiewicz, R., 1995, “By their Properties, Causes, and Effects: Newton's Scholium on Time, Space, Place, and Motion — I. The Text,” Studies in History and Philosophy of Science, 26: 133-153.
- Sklar, L., 1974, Space, Time and Spacetime, Berkeley: University of California Press.
- Stein, H., 1977, “Some Philosophical Prehistory of General Relativity,” in Minnesota Studies in the Philosophy of Science 8: Foundations of Space-Time Theories, J. Earman, C. Glymour and J. Stachel (eds.), Minneapolis: University of Minnesota Press.
- –––, 1967, “Newtonian Space-Time,” Texas Quarterly, 10: 174-200.
- Wheeler, J.A. and Ciufolini, I., 1995, Gravitation and Inertia, Princeton, N.J.: Princeton U. Press.
Notable Philosophical Discussions of the Absolute-Relative Debates
- Barbour, J. B., 1982, “Relational Concepts of Space and Time,” British Journal for the Philosophy of Science, 33: 251-274.
- Belot, G., 2000, “Geometry and Motion,” British Journal for the Philosophy of Science, 51: 561-595.
- Butterfield, J., 1984, “Relationism and Possible Worlds,” British Journal for the Philosophy of Science, 35: 101-112.
- Callender, C., 2002, “Philosophy of Space-Time Physics,” in The Blackwell Guide to the Philosophy of Science, P. Machamer (ed.), Cambridge: Blackwell. 173-198.
- Carrier, M., 1992, “Kant's Relational Theory of Absolute Space,” Kant Studien, 83: 399-416.
- Dieks, D., 2001, “Space-Time Relationism in Newtonian and Relativistic Physics,” International Studies in the Philosophy of Science, 15: 5-17.
- Disalle, R., 1995, “Spacetime Theory as Physical Geometry,” Erkenntnis, 42: 317-337.
- Earman, J., 1986, “Why Space is Not a Substance (at Least Not to First Degree),” Pacific Philosophical Quarterly, 67: 225-244.
- –––, 1970, “Who's Afraid of Absolute Space?,” Australasian Journal of Philosophy, 48: 287-319.
- Earman, J. and J. Norton, 1987, “What Price Spacetime Substantivalism: The Hole Story,” British Journal for the Philosophy of Science, 38: 515-525.
- Hoefer, C., 2000, “Kant's Hands and Earman's Pions: Chirality Arguments for Substantival Space,” International Studies in the Philosophy of Science, 14: 237-256.
- –––, 1998, “Absolute Versus Relational Spacetime: For Better Or Worse, the Debate Goes on,” British Journal for the Philosophy of Science, 49: 451-467.
- –––, 1996, “The Metaphysics of Space-Time Substantialism,” Journal of Philosophy, 93: 5-27.
- Huggett, N., 2000, “Reflections on Parity Nonconservation,” Philosophy of Science, 67: 219-241.
- Le Poidevin, R., 2004, “Space, Supervenience and Substantivalism,” Analysis, 64: 191-198.
- Malament, D., 1985, “Discussion: A Modest Remark about Reichenbach, Rotation, and General Relativity,” Philosophy of Science, 52: 615-620.
- Maudlin, T., 1993, “Buckets of Water and Waves of Space: Why Space-Time is Probably a Substance,” Philosophy of Science, 60: 183-203.
- –––, 1990, “Substances and Space-Time: What Aristotle would have Said to Einstein,” Studies in History and Philosophy of Science, 531-561.
- Mundy, B., 1992, “Space-Time and Isomorphism,” Proceedings of the Biennial Meetings of the Philosophy of Science Association, 1: 515-527.
- –––, 1983, “Relational Theories of Euclidean Space and Minkowski Space-Time,” Philosophy of Science, 50: 205-226.
- Nerlich, G., 2003, “Space-Time Substantivalism,” in The Oxford Handbook of Metaphysics, M. J. Loux (ed.), Oxford: Oxford Univ Pr. 281-314.
- –––, 1996, “What Spacetime Explains,” Philosophical Quarterly, 46: 127-131.
- –––, 1994, What Spacetime Explains: Metaphysical Essays on Space and Time, New York: Cambridge Univ Pr.
- –––, 1973, “Hands, Knees, and Absolute Space,” Journal of Philosophy, 70: 337-351.
- Rynasiewicz, R., 2000, “On the Distinction between Absolute and Relative Motion,” Philosophy of Science, 67: 70-93.
- –––, 1996, “Absolute Versus Relational Space-Time: An Outmoded Debate?,” Journal of Philosophy, 93: 279-306.
- Teller, P., 1991, “Substance, Relations, and Arguments about the Nature of Space-Time,” Philosophical Review, 363-397.
- Torretti, R., 2000, “Spacetime Models for the World,” Studies in History and Philosophy of Modern Physics, 31B: 171-186.
Other Internet Resources
- St. Andrews School of Mathematics and Statistics Index of Biographies
- The Pittsburgh Phil-Sci Archive of pre-publication articles in philosophy of science
- Ned Wright's Special Relativity tutorial
- Andrew Hamilton's Special Relativity pages
Related Entries
Descartes, René: physics | general relativity: early philosophical interpretations of | Newton, Isaac: views on space, time, and motion | space and time: inertial frames | space and time: the hole argument | Zeno of Elea: Zeno's paradoxes
SAN FRANCISCO, Dec. 29, 2008 -- Facial expressions of emotion are hardwired into our genes, according to a study published today in the Journal of Personality and Social Psychology. The research suggests that facial expressions of emotion are innate rather than a product of cultural learning. The study is the first of its kind to demonstrate that sighted and blind individuals use the same facial expressions, producing the same facial muscle movements in response to specific emotional stimuli.
The study also provides new insight into how humans manage emotional displays according to social context, suggesting that the ability to regulate emotional expressions is not learned through observation.
San Francisco State University Psychology Professor David Matsumoto compared the facial expressions of sighted and blind judo athletes at the 2004 Summer Olympics and Paralympic Games. More than 4,800 photographs were captured and analyzed, including images of athletes from 23 countries.
"The statistical correlation between the facial expressions of sighted and blind individuals was almost perfect," Matsumoto said. "This suggests something genetically resident within us is the source of facial expressions of emotion."
Matsumoto found that sighted and blind individuals manage their expressions of emotion in the same way according to social context. For example, because of the social nature of the Olympic medal ceremonies, 85 percent of silver medalists who lost their medal matches produced "social smiles" during the ceremony. Social smiles use only the mouth muscles whereas true smiles, known as Duchenne smiles, cause the eyes to twinkle and narrow and the cheeks to rise.
"Losers pushed their lower lip up as if to control the emotion on their face and many produced social smiles," Matsumoto said. "Individuals blind from birth could not have learned to control their emotions in this way through visual learning so there must be another mechanism. It could be that our emotions, and the systems to regulate them, are vestiges of our evolutionary ancestry. It's possible that in response to negative emotions, humans have developed a system that closes the mouth so that they are prevented from yelling, biting or throwing insults."
Jan. 30, 2009 -- A new way of making LEDs could see household lighting bills reduced by up to 75% within five years. Gallium Nitride (GaN), a man-made semiconductor used to make LEDs (light emitting diodes), emits brilliant light but uses very little electricity. Until now high production costs have made GaN lighting too expensive for widespread use in homes and offices.
However the Cambridge University based Centre for Gallium Nitride has developed a new way of making GaN which could produce LEDs for a tenth of current prices.
GaN, grown in labs on expensive sapphire wafers since the 1990s, can now be grown on silicon wafers. This lower cost method could mean cheap mass produced LEDs become widely available for lighting homes and offices in the next five years.
Based on current results, GaN LED lights in every home and office could cut the proportion of UK electricity used for lights from 20% to 5%. That means we could close or not need to replace eight power stations.
A GaN LED can burn for 100,000 hours so, on average, it only needs replacing after 60 years. And, unlike currently available energy-saving bulbs GaN LEDs do not contain mercury so disposal is less damaging to the environment. GaN LEDs also have the advantage of turning on instantly and being dimmable.
Professor Colin Humphreys, lead scientist on the project said: “This could well be the holy grail in terms of providing our lighting needs for the future. We are very close to achieving highly efficient, low cost white LEDs that can take the place of both traditional and currently available low energy light bulbs. That won’t just be good news for the environment. It will also benefit consumers by cutting their electricity bills.”
GaN LEDs, used to illuminate landmarks like Buckingham Palace and the Severn Bridge, are also appearing in camera flashes, mobile phones, torches, bicycle lights and interior bus, train and plane lighting.
Parallel research is also being carried out into how GaN lights could mimic sunlight to help 3m people in the UK with Seasonal Affective Disorder (SAD).
Ultraviolet rays made from GaN lighting could also aid water purification and disease control in developing countries, identify the spread of cancer tumours and help fight hospital ‘super bugs’.
Funding was provided by the Engineering and Physical Sciences Research Council (EPSRC).
About GaN LEDs
A light-emitting diode (LED) is a semiconductor diode that emits light when charged with electricity. LEDs are used for display and lighting in a whole range of electrical and electronic products. Although GaN was first produced over 30 years ago, it is only in the last ten years that GaN lighting has started to enter real-world applications. Currently, the brilliant light produced by GaN LEDs is blue or green in colour. A phosphor coating is applied to the LED to transform this into a more practical white light.
GaN LEDs are currently grown on 2-inch sapphire. Manufacturers can get 9 times as many LEDs on a 6-inch silicon wafer than on a 2-inch sapphire wafer. In addition, edge effects are less, so the number of good LEDs is about 10 times higher. The processing costs for a 2-inch wafer are essentially the same as for a 6-inch wafer. A 6-inch silicon wafer is much cheaper to produce than a 2-inch sapphire wafer. Together these factors result in a cost reduction of about a factor of 10.
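The claimed factor of ten works out roughly as follows (a back-of-envelope sketch; the normalized processing cost is a placeholder, not a figure from the article):

```python
# Back-of-envelope cost-per-LED comparison (illustrative numbers only).
area_ratio = (6 / 2) ** 2          # 6-inch vs 2-inch wafer area -> 9x the dies
yield_bonus = 10 / 9               # fewer edge losses: ~10x good LEDs overall
good_led_ratio = area_ratio * yield_bonus

processing_cost = 1.0              # roughly the same per wafer either way
cost_per_led_2in = processing_cost / 1.0
cost_per_led_6in = processing_cost / good_led_ratio

print(f"good LEDs per wafer, relative: {good_led_ratio:.1f}x")
print(f"cost per LED, relative:        {cost_per_led_6in / cost_per_led_2in:.2f}")
# ~0.10 -> the 'factor of 10' cost reduction quoted above, before even
# counting the cheaper silicon substrate.
```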
Possible Future Applications
- Cancer surgery. Currently, it is very difficult to detect exactly where a tumour ends. As a result, patients undergoing cancer surgery have to be kept under anaesthetic while cells are taken away for laboratory tests to see whether or not they are healthy. This may need to happen several times during an operation, prolonging the procedure extensively. But in the future, patients could be given harmless drugs that attach themselves to cancer cells, which can be distinguished when a blue GaN LED is shone on them. The tumour’s edge will be revealed, quickly and unmistakably, to the surgeon.
- Water purification. GaN may revolutionise drinking water provision in developing countries. If aluminium is added to GaN then deep ultra-violet light can be produced and this kills all viruses and bacteria, so fitting such a GaN LED to the inside of a water pipe will instantly eradicate diseases, as well as killing mosquito larvae and other harmful organisms.
- Hospital-acquired infections. Shining a ultra-violet GaN torch beam could kill viruses and bacteria, boosting the fight against MRSA and C Difficile. Simply shining a GaN torch at a hospital wall or trolley, for example, could kill any ‘superbugs’ lurking there.
The above story is reprinted from materials provided by Engineering and Physical Sciences Research Council (EPSRC).
Mar. 6, 2013 -- Boys are right-handed, girls are left ... Well at least this is true for sugar gliders (Petaurus breviceps) and grey short-tailed opossums (Monodelphis domestica), according to an article in BioMed Central's open access journal BMC Evolutionary Biology that shows that handedness in marsupials is dependent on gender. This preference of one hand over another has developed despite the absence of a corpus callosum, the part of the brain which in placental mammals allows one half of the brain to communicate with the other.
Many animals show a distinct preference for using one hand/paw/hoof over another. This is often related to posture (an animal is more likely to show manual laterality if it is upright), to the difficulty of the task (more complex tasks show a handed preference), or even to age. As an example of all three: crawling human babies show less hand preference than toddlers.
Some species also show a distinct sex effect in handedness, but among non-marsupial mammals this tendency is for left-handed males and right-handed females. In contrast, researchers from St Petersburg State University show that male quadrupedal marsupials, such as the sugar glider and grey short-tailed opossum, which walk on all fours, tend to be right-handed while the females are left-handed, especially as tasks became more difficult.
Dr Yegor Malashichev from Saint Petersburg State University who led this study explained why they think this has evolved, “Marsupials do not have a corpus callosum – which connects the two halves of the mammalian brain together. Reversed sex related handedness is an indication of how the marsupial brain has developed different ways of the two halves of the brain communicating in the absence of the corpus callosum.”
- Andrey Giljov, Karina Karenina, Yegor Malashichev. Forelimb preferences in quadrupedal marsupials and their implications for laterality evolution in mammals. BMC Evolutionary Biology, 2013; 13 (1): 61 DOI: 10.1186/1471-2148-13-61
Plants can pull carbon dioxide, the planet-warming greenhouse gas, out of Earth’s atmosphere. But these aren’t the only living organisms that affect carbon dioxide levels, and thus global warming. Nope, I’m not talking about humans. Humble sea otters can also reduce greenhouse gases, by indirectly helping kelp plants. That finding is in the journal Frontiers in Ecology and the Environment. [Christopher C. Wilmers et al., Do trophic cascades affect the storage and flux of atmospheric carbon? An analysis of sea otters and kelp forests]
Researchers used 40 years of data to look at the effect of sea otter populations on kelp. Depending on the plant density, one square meter of kelp forest can absorb anywhere from tens to hundreds of grams of carbon per year. But when sea otters are around, kelp density is high and the plants can suck up more than 12 times as much carbon. That’s because otters nosh on kelp-eating sea urchins. In the mammals’ presence, the urchins hide away and feed on kelp detritus rather than living, carbon-absorbing plants.
So climate researchers need to note that the herbivores that eat plants, and the predators that eat them, also have roles to play in the carbon cycle.
[The above text is a transcript of this 60-Second Science podcast.]
In 1866 the Russian government offered to sell the territory of Alaska to the United States. Secretary of State William H. Seward, enthusiastic about the prospects of American Expansion, negotiated the deal for the Americans. Edouard de Stoeckl, Russian minister to the United States, negotiated for the Russians. On March 30, 1867, the two parties agreed that the United States would pay Russia $7.2 million for the territory of Alaska.
For less than 2 cents an acre, the United States acquired nearly 600,000 square miles. Opponents of the Alaska Purchase persisted in calling it “Seward’s Folly” or “Seward’s Icebox” until 1896, when the great Klondike Gold Strike convinced even the harshest critics that Alaska was a valuable addition to American territory.
The check for $7.2 million was made payable to the Russian Minister to the United States Edouard de Stoeckl, who negotiated the deal for the Russians. Also shown here is the Treaty of Cession, signed by Tzar Alexander II, which formally concluded the agreement for the purchase of Alaska from Russia.
Filed under: Foundational Hand
After studying the proportions of the Foundational Hand letters, the next step is to start writing the letters.
Each letter is constructed rather than written. The letters are made up of a combination of pen strokes, which are only made in a top-down or left-right direction. The pen is never pushed up.
When we studied the proportions of the Foundational Hand we could group the letters according to their widths. Now, we can group them according to the order and direction of the pen strokes.
You may find it useful to look at the construction grid whilst studying the order and direction of the letters.
The first group consists of the letters c, e, and o.
These letters are based on the circle shape. This shape is produced with two pen strokes. Visualise a clock face and start the first stroke at approximately the 11, and finish it in an anti-clockwise direction at 5. The second stroke starts again at the 11 and finishes in a clockwise direction on the 5 to complete the letter o.
The first pen-stroke for the letters c and e are the same as the first of the letter o. The second pen-stroke on the c and e are shorter and finish around the 1 position on the imaginary clock face.
Finally, the letter e has a third stroke, starting at the end of the second stroke and finishes when it touches the first stroke.
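For readers who find the clock-face description easier to see than to visualise, here is a small illustrative plot of the two strokes of the letter o (Python/matplotlib; the geometry follows the description above, though it says nothing about pen angle or stroke weight):

```python
import numpy as np
import matplotlib.pyplot as plt

def clock_angle(hour):
    """Math angle (degrees) of an hour mark on the clock face, 12 at the top."""
    return 90.0 - 30.0 * hour

# Stroke 1: from 11, anti-clockwise down the left side, finishing at 5.
t1 = np.radians(np.linspace(clock_angle(11), clock_angle(5), 60))          # -240 -> -60
# Stroke 2: from 11 again, clockwise over the top and down the right to 5.
t2 = np.radians(np.linspace(clock_angle(11) + 360.0, clock_angle(5), 60))  # 120 -> -60

for t, label in [(t1, "stroke 1 (anti-clockwise)"), (t2, "stroke 2 (clockwise)")]:
    plt.plot(np.cos(t), np.sin(t), label=label)

plt.gca().set_aspect("equal")
plt.legend()
plt.title("Foundational Hand letter o: two strokes from 11 to 5")
plt.show()
```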
The next group of letters are d, q, b and p. All these letters combine curved and straight pen strokes. When writing these letters it can be useful to think of the underlying circle shape, which your pen will leave or join at certain points depending upon which letter is being written.
The first stroke of the b starts at the ascender height of the letter, which can be eyed in at just under half the x-height (the body height of letters with no ascender or descender) above the x-height line. Continue the ascender stroke of the b until it ‘picks up’ the circle shape, and follow round the circle until the pen reaches the 5 on the imaginary clock face. The second stroke starts on the first stroke, following the circle round until it touches the end of the first stroke.
The letter d is similar to the c except it has a third stroke for the ascender, which will touch the ends of the first and second strokes before finishing on the write-line.
Letter p starts with a vertical stroke from the x-height down to the imaginary descender line, which is just under half the x-height below the write-line. The second and third strokes are curved, starting on the descender stroke and following round the imaginary circle.
The letter q is almost the same as the d, except it has a descender stroke rather than an ascender stroke.
Letters a, h, m, n, r
All these letters combine curved and straight pen strokes. Once again, think of the underlying circle shape, which your pen will leave or join at certain points depending upon the letter being written.
The letter h consists of two pen strokes. The first is a vertical ascender stroke. The second stroke starts curved, follows the circle round, then leaves it and becomes straight.
The letter n is produced exactly the same way as the letter h, except the first stroke is not so tall as it starts on the x-height line. The first two pen strokes of the letter m are the same as the letter n. Then a third stroke is added which is identical to the second stroke.
The letter r is also written the same way as the letter n except the second stroke finishes at the point where the circle would have been left and the straight is picked up.
The first stroke of letter a is the same as the second stroke of the letters h, m and n. The second stroke follows the circle. Finally, the third stroke starts at the same point as the second stroke, but is a straight line at a 30° angle and touches the first stroke.
The next group of letters are l, u and t. These letters are straightforward. The letter l is the same as the first stroke of letter b.
The letter u is also similar to the first stroke of letter b except it starts lower down. The second stroke starts on the x-height line and finishes on the write-line.
Letter t has the same first stroke as letter u. It is completed by a second horizontal stroke.
The following letters k, v, w, x, y and z are made of at least one diagonal pen stroke.
The letter k starts with a vertical ascender stroke, then a second, diagonal stroke which joins the vertical stroke. The final stroke is also diagonal; it starts where the first and second strokes meet and stops when it touches the write-line. If you look closely you will see it goes further out than the second stroke. This makes the letter look more balanced. If the ends of these two pen-strokes lined up, the letter would look like it is about to fall over.
Letter v is simply two diagonal strokes and these are repeated to produce the letter w.
The letter y is the same as the v except the second stroke is extended to create a descender stroke.
Letter x is a little different: you need to create it in such a way that the two strokes cross slightly above the half-way mark of the x-height. This means the top part will be slightly smaller than the bottom, which gives the letter a better balance.
Finally, in this group is letter z. The easiest way to produce this is with two horizontal pen strokes, then join these two strokes with a diagonal pen-stroke to complete the letter.
Now for the hardest letters: f, g and s. Out of these three letters, f is the simplest. It starts with a vertical ascender stroke – except this is not as tall as the other ascender strokes we have produced so far, because we have to allow for the second, curved stroke. The overall height of these two strokes should be the same as other letters that have an ascender. Finally, we need a horizontal stroke to complete the letter.
Which will you find the harder letter, g or s? These are trickier because, unlike all the other letters we have written, they do not relate so well to the grid.
The letter g is made of a circle shape, with an oval/bowl shape under the write-line. You can see the main shape of the letter g is made of three pen-strokes. The first stroke is just like the first stroke of the letter o, except it is smaller. The second stroke starts like the second stroke of the letter o, but when it joins the first stroke it continues and changes direction in the gap between the bottom of the shape and the write-line. The third stroke completes the oval shape. Finally, we have a little fourth stroke to complete the letter.
The letter s is made up of three strokes. The first stroke is sort of an s shape! The second and third strokes complete the letter s. These are easier to get right than the first stroke because they basically follow the circle shape on our construction grid. The secret to this letter is to make both ‘ends’ of the first stroke not too curved. Because the other two strokes are curved they will compensate and give the overall correct shape.
Finally, we are left with the letters i and j, which are made from one pen-stroke. You just need to remember to curve the end of the stroke when writing the letter j.
| 4.128311 |
When it comes to Spanish-style colonial charm, few cities in the Western Hemisphere can rival Old San Juan. But that doesn’t mean that Puerto Rico’s historical significance is exclusively within the capital city’s walls. Roughly 100 miles southwest of San Juan, the lovely town of San Germán holds the venerable distinction of being Puerto Rico’s second oldest city.
Founded in 1573 and named after King Ferdinand the Catholic’s second wife Germaine of Foix, San Germán became the island’s first settlement outside of San Juan. Its significance was such that the island was first divided into the San Juan Party and the San Germán Party. The town also became the focal point from which other settlements were established, thus earning the nickname ‘Ciudad Fundadora de Pueblos’ (roughly, Town-Founding City).
But while San Juan went on to grow exponentially beyond the old city walls and other cities like Ponce, Mayagüez, Arecibo or Caguas grew in population and importance, San Germán remained a sleepy colonial town and one of the best-kept secrets within the island.
From a historical perspective, San Germán’s most famous landmark is Porta Coeli Church. One of the earliest examples of Gothic architecture in the Americas, the chapel was originally built as a convent in 1609 by the Dominican Order. It was reconstructed during the 18th century and expanded with a single nave church of rubble masonry. Listed in 1976 in the U.S. National Register of Historic Places, Porta Coeli was restored by the Institute of Puerto Rican Culture and now houses the Museo de Arte Religioso, which showcases religious paintings and wooden carvings dating back from the 18th and 19th centuries.
Porta Coeli overlooks quaint Plazuela Santo Domingo, an elongated, cobblestoned square enclosed by pastel-colored, colonial-style houses. A block away sits the town’s main square, Plaza Francisco Mariano Quiñones, where the operational church of San Germán de Auxerre is located. Both Porta Coeli and San Germán de Auxerre are part of the San Germán Historic District, which was also listed in the U.S. National Register of Historic Places in 1994 and includes about 100 significant buildings.
Though San Germán has long since lost its 16th-century designation as Puerto Rico’s most important city after San Juan, the town is nonetheless a regional powerhouse in southwestern Puerto Rico, housing important institutions such as the main campus of Universidad Interamericana (Interamerican University). Sports enthusiasts will also appreciate that the city is considered “The Cradle of Puerto Rican Basketball” as it is home to one of the island’s oldest and most successful basketball franchises, Atléticos de San Germán (San Germán Athletics).
| 3.168055 |
The basic element in solar modules
The wafers are further processed to solar cells in the third production step. They form the basic element of the resulting solar modules. The cells already possess all of the technical attributes necessary to generate electricity from sunlight. Positive and negative charge carriers are released in the cells through light radiation causing electrical current (direct current) to flow.
The "Cell" business division is part of SolarWorld subsidiary Deutsche Cell GmbH and SolarWorld Industries America LP. Here, solar cells are produced from the preliminary product, the solar silicon wafer. The group manufactures both monocrystalline as well as polycrystalline solar cells.
The monocrystalline as well as polycrystalline solar cells are produced around the clock in one of the most advanced solar cell production facilities. The wafers are processed in the clean rooms of Deutsche Cell GmbH using the most cutting-edge process facilities with the highest level of automation.
Through the fully integrated production concept, it is possible to flexibly control the use of all auxiliary materials necessary for production and to continuously optimize material utilization during operation. This concept allows us to assure the unique quality standard of our solar cells and simultaneously reduce the loss rate compared to conventional processes. This not only lowers production costs, it adds to the expertise in the solar cell production for the SolarWorld group.
The wafer is first cleaned of all damage caused by cutting and then textured. A p/n junction is created by means of phosphorus diffusion, which makes the silicon conductive. In the next step, the phosphorus glass layer produced by diffusion is removed.
An anti-reflection layer, which reduces optical losses and ensures electrical passivation of the surface, is added. Then the contacts are attached to the front, along with a rear contact on the back.
Finally, every individual solar cell is tested for its optical qualities, and its electrical efficiency is measured.
| 3.14062 |
by Staff Writers
Chicago IL (SPX) Jan 11, 2013
Technologically valuable ultrastable glasses can be produced in days or hours with properties corresponding to those that have been aged for thousands of years, computational and laboratory studies have confirmed.
Aging makes for higher quality glassy materials because they have slowly evolved toward a more stable molecular condition. This evolution can take thousands or millions of years, but manufacturers must work faster. Armed with a better understanding of how glasses age and evolve, researchers at the universities of Chicago and Wisconsin-Madison raise the possibility of designing a new class of materials at the molecular level via a vapor-deposition process.
"In attempts to work with aged glasses, for example, people have examined amber," said Juan de Pablo, UChicago's Liew Family Professor in Molecular Theory and Simulations. "Amber is a glass that has been aged millions of years, but you cannot engineer that material. You get what you get." de Pablo and Wisconsin co-authors Sadanand Singh and Mark Ediger report their findings in the latest issue of Nature Materials.
Ultrastable glasses could find potential applications in the production of stronger metals and in faster-acting pharmaceuticals. The latter may sound surprising, but drugs with the amorphous molecular structure of ultrastable glass could avoid crystallization during storage and be delivered more rapidly in the bloodstream than pharmaceuticals with a semi-crystalline structure. Amorphous metals, likewise, are better for high-impact applications than crystalline metals because of their greater strength.
The Nature Materials paper describes computer simulations that Singh, a doctoral student in chemical engineering at UW-Madison, carried out with de Pablo to follow-up some intriguing results from Ediger's laboratory.
Growing stable glasses
Several years ago, Ediger discovered that glasses grown by vapor deposition onto a specially prepared surface that is kept within a certain temperature range exhibit far more stability than ordinary glasses. Previous researchers must have grown this material under the same temperature conditions, but failed to recognize the significance of what they had done, Ediger said.
Ediger speculated that growing glasses under these conditions, which he compares to the Tetris video game, gives molecules extra room to arrange themselves into a more stable configuration. But he needed Singh and de Pablo's computer simulations to confirm his suspicions that he had actually produced a highly evolved, ordinary glass rather than an entirely new material.
"There's interest in making these materials on the computer because you have direct access to the structure, and you can therefore determine the relationship between the arrangement of the molecules and the physical properties that you measure," said de Pablo, a former UW-Madison faculty member who joined UChicago's new Institute for Molecular Engineering earlier this year.
There are challenges, though, to simulating the evolution of glasses on a computer. Scientists can cool a glassy material at the rate of one degree per second in the laboratory, but the slowest computational studies can only simulate cooling at a rate of 100 million degrees per second. "We cannot cool it any slower because the calculations would take forever," de Pablo said.
"It had been believed until now that there is no correlation between the mechanical properties of a glass and the molecular structure; that somehow the properties of a glass are "hidden" somewhere and that there are no obvious structural signatures," de Pablo said.
Creating better materials
Ultrastable glasses achieve their stability in a manner analogous to the most efficiently packed, multishaped objects in Tetris, each consisting of four squares in various configurations that rain from the top of the screen.
"This is a little bit like the molecules in my deposition apparatus raining down onto this surface, and the goal is to perfectly pack a film, not to have any voids left," Ediger said.
The object of Tetris is to manipulate the objects so that they pack into a perfectly tight pattern at the bottom of the screen. "The difference is, when you play the game, you have to actively manipulate the pieces in order to build a well-packed solid," Ediger said. "In the vapor deposition, nature does it for us."
But in Tetris and experiments alike, when the objects or molecules descend too quickly, the result is a poorly packed, void-riddled pattern.
"In the experiment, if you either rain the molecules too fast or choose a low temperature at which there's no mobility at the surface, then this trick doesn't work," Ediger said. Then it would be like taking a bucket of odd-shaped pieces and just dumping them on the floor. There are all sorts of voids and gaps because the molecules didn't have any opportunity to find a good way of packing."
"Ultrastable glasses from in silico vapor deposition," by Sadamand Singh, M.D. Ediger and Juan J. de Pablo," Nature Materials. National Science Foundation and the U.S. Department of Energy.
University of Chicago
| 3.175966 |
In January 1992, a container ship near the International Date Line, headed to Tacoma, Washington from Hong Kong, lost 12 containers during severe storm conditions. One of these containers held a shipment of 29,000 bathtub toys. Ten months later, the first of these plastic toys began to wash up onto the coast of Alaska. Driven by the wind and ocean currents, these toys continued to wash ashore over the next several years, and some even drifted into the Atlantic Ocean.
The ultimate reason for the world's surface ocean currents is the sun. The heating of the earth by the sun has produced semi-permanent pressure centers near the surface. When wind blows over the ocean around these pressure centers, surface waves are generated by transferring some of the wind's energy, in the form of momentum, from the air to the water. This constant push on the surface of the ocean is the force that forms the surface currents.
Learning Lesson: How it is Currently Done
Around the world, there are some similarities in the currents. For example, along the west coasts of the continents, the currents flow toward the equator in both hemispheres. These are called cold currents as they bring cool water from the polar regions into the tropical regions. The cold current off the west coast of the United States is called the California Current.
Likewise, the opposite is true as well. Along the east coasts of the continents, the currents flow from the equator toward the poles. These are called warm currents, as they bring warm tropical water north. The Gulf Stream, off the southeast United States coast, is one of the strongest currents known anywhere in the world, with water speeds up to 3 mph (5 kph).
These currents have a huge impact on the long-term weather a location experiences. The overall climate of Norway and the British Isles is about 18°F (10°C) warmer in the winter than other cities located at the same latitude, due to the Gulf Stream.
Take it to the MAX! Keeping Current
While ocean currents are shallow, surface-level circulations, there is a global circulation which extends to the depths of the sea called the Great Ocean Conveyor. Also called the thermohaline circulation, it is driven by differences in the density of the sea water, which is controlled by temperature (thermo) and salinity (haline).
In the northern Atlantic Ocean, as water flows north it cools considerably increasing its density. As it cools to the freezing point, sea ice forms with the "salts" extracted from the frozen water making the water below more dense. The very salty water sinks to the ocean floor.
Learning Lesson: That Sinking Feeling
It is not static, but a slowly southward flowing current. The route of the deep water flow is through the Atlantic Basin around South Africa and into the Indian Ocean and on past Australia into the Pacific Ocean Basin.
If the water is sinking in the North Atlantic Ocean then it must rise somewhere else. This upwelling is relatively widespread. However, water samples taken around the world indicate that most of the upwelling takes place in the North Pacific Ocean.
It is estimated that once the water sinks in the North Atlantic Ocean that it takes 1,000-1,200 years before that deep, salty bottom water rises to the upper levels of the ocean.
| 3.970609 |
Michele Johnson, Ames Research Center
Astronomers have discovered a pair of neighboring planets with dissimilar densities orbiting very close to each other. The planets are too close to their star to be in the so-called "habitable zone," the region in a system where liquid water might exist on the surface, but they have the closest-spaced orbits ever confirmed. The findings are published today in the journal Science.
The research team, led by Josh Carter, a Hubble fellow at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., and Eric Agol, a professor of astronomy at the University of Washington in Seattle, used data from NASA's Kepler space telescope, which measures dips in the brightness of more than 150,000 stars, to search for transiting planets.
The inner planet, Kepler-36b, orbits its host star every 13.8 days and the outer planet, Kepler-36c, every 16.2 days. On their closest approach, the neighboring duo comes within about 1.2 million miles of each other. This is only five times the Earth-moon distance and about 20 times closer to one another than any two planets in our solar system.
Kepler-36b is a rocky world measuring 1.5 times the radius and 4.5 times the mass of Earth. Kepler-36c is a gaseous giant measuring 3.7 times the radius and eight times the mass of Earth. The planetary odd couple orbits a star slightly hotter and a couple billion years older than our sun, located 1,200 light-years from Earth.
To read more about the discovery, visit the Harvard-Smithsonian Center for Astrophysics and University of Washington press releases.
Ames Research Center in Moffett Field, Calif., manages Kepler's ground system development, mission operations and science data analysis. NASA’s Jet Propulsion Laboratory, Pasadena, Calif., managed the Kepler mission's development.
Ball Aerospace and Technologies Corp. in Boulder, Colo., developed the Kepler flight system and supports mission operations with the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder.
The Space Telescope Science Institute in Baltimore archives, hosts and distributes Kepler science data. Kepler is NASA's 10th Discovery Mission and is funded by NASA's Science Mission Directorate at the agency's headquarters in Washington.
| 3.607584 |
History of the Red Mass
The “Red Mass” is an historical tradition within the Catholic Church dating back to the Thirteenth Century when it officially opened the term of the court for most European countries. The first recorded Red Mass was celebrated in the Cathedral of Paris in 1245. From there, it spread to most European countries.
Around 1310, during the reign of Edward I, the tradition began in England with the Mass offered at Westminster Abbey at the opening of the Michaelmas term. It received its name from the fact that the celebrant was vested in red and the Lord High justices were robed in brilliant scarlet. They were joined by the university professors with doctors among them displaying red in their academic gowns.
The Red Mass also has been traditionally identified with the opening of the Sacred Roman Rota, the supreme judicial body of the Catholic Church.
In the United States, the first Red Mass occurred in New York City on October 6, 1928. This Mass was celebrated at Old St. Andrew’s Church with Cardinal Patrick Hayes presiding.
Today, well over 25 cities in the United States celebrate the Red Mass each year, with not only Catholic but also Protestant and Jewish members of the judiciary and legal profession attending the Mass. One of the better-known Red Masses is the one celebrated each fall at the Cathedral of St. Matthew the Apostle in Washington, D.C. It is attended by Justices of the Supreme Court, members of Congress, the diplomatic corps, the Cabinet, and other government departments and, sometimes, the President of the United States. All officials attend in their capacity as private individuals, rather than as government representatives, in order to prevent any issues over separation of church and state.
For the most part the Red Mass is like any other Roman Catholic Mass. A sermon is given, usually with a message which has an overlapping political and religious theme. The Mass is also an opportunity for the Catholic church to express its goals for the coming year. One significant difference between the Red Mass and a traditional Mass is that the prayers and blessings are focused on the leadership roles of those present and invoke divine guidance and strength during the coming term of Court. It is celebrated in honor of the Holy Spirit as the source of wisdom, understanding, counsel and fortitude, gifts which shine forth preeminently in the dispensing of justice in the courtroom as well as in the individual lawyer’s office.
| 3.661506 |
Reading Classic Literatures
Classic literature, even though it was written fifty or a hundred years ago, still has the power to affect readers. The gift of literature to educate and inspire people transcends time. Unfortunately, not all people like to read classic literature. Sometimes, to understand classic literature, you have to be mature enough to enjoy and comprehend these writings.
Although we often read classic literature because we have to do a report in school, we can also read it for enjoyment. You may have heard of famous authors of classical novels on television and the internet; you can check out their writings and their books.
If you really want to get into the habit of reading classical literature, you can start by reading 30 minutes every day. You should have a dictionary near you when reading classical novels, since the vocabulary is often challenging and word meanings have changed over time.
To have a better understanding of the setting and the plot of the story, you can do a little background research on the era or its time period. You can also research the background of the author.
You really have to follow the structure of the story. Most classical literature has complex storylines and plots, which sometimes makes it hard to follow the story. The character development is also very extensive. Seeing the overall theme of the story is very important, as is following the basic development of the characters and their story.
There are literature companions that you can buy to help you get started with the classical literature. An example of a literature companion is the "Oxford Companion to Classical Literature."
Another key to understanding classic literature is understanding the use of footnotes. These classic works are full of footnotes that reference the social and cultural elements of their time.
| 3.199567 |
Many of us act as though we all see the same reality, yet the truth is we don't. Human Beings have cognitive biases or blind spots.
Blind spots are ways that our mind becomes blocked from seeing reality as it is - blinding us from seeing the real truth about ourselves in relation to others. Once we form a conclusion, we become blind to alternatives, even if they are right in front of our eyes.
Emily Pronin, a social psychologist, along with colleagues Daniel Lin and Lee Ross, at Princeton University's Department of Psychology, created the term "blind spots." The bias blind spot is named after the visual blind spot.
Passing the Ball - Watch this Video
There is a classic experiment that demonstrates one level of blind spots that can be attributed to awareness and focused-attention. When people are instructed to count how many passes the people in white shirts make on the basketball court, they often get the number of passes correct, but fail to see the person in the black bear suit walking right in front of their eyes. Hard to believe but true!
Blind Spots & Denial
However, the story of blind spots gets more interesting when we factor in our cognitive biases that come from our social needs to look good in the eyes of others.
When people operate with blind spots, coupled with a strong ego, they often refuse to adjust their course even in the face of opposition from trusted advisors, or incontrovertible evidence to the contrary.
Two well-known examples of blind spots are Henry Ford and A&P.
| 3.647422 |
Friday is Earth Day. It’s a good time to consider how to preserve our environment.
Have you ever wondered how long it takes a plastic grocery bag to disintegrate? The decomposition rate chart presented on the Commonwealth of Virginia’s website (http://www.deq.virginia.gov/recycle) shows the relative speed of organic and inorganic materials. A banana peel takes 2-5 weeks (put up to three around a rose bush for a healthy fertilizer), a newspaper 3-6 months (shred and add to your compost pile), while a plastic bag will last a decade, a plastic beverage container or tin can a century, an aluminum can 2-5 centuries. To save space for future generations, recycling is the responsible thing to do.
Where can you recycle your disposables?
The state’s recycling website provides very helpful information for hard-to-dispose of items, such as computers and automobile products. Learn how and where to properly dispose of a variety of electronics, including cellphones, used oil, oil filters, antifreeze, and old medications (do not flush them down the toilet).
For more ideas about where to recycle what, visit Earth911 at http://earth911.com/.
Another resource with much potential is Freecycle at http://www.freecycle.org/. This is a network for linking those who have something to dispose of and those who are looking for something, organized by zip code. All items offered must be free.
Read High Tech Trash: digital devices, hidden toxics, and human health by Elizabeth Grossman for an eye-opening explanation of the science, politics, and crimes in the collection of masses of e-waste. She follows the trail of toxins, including lead, mercury, chlorine and flame retardants, from mining and processing through disposal and dumping in India, China and Nigeria, where unprotected workers boil the refuse to retrieve useful fragments.
Humanizing the impact of waste is Paolo Bacigalupi’s award winning novel for teens, Ship Breaker. In a futuristic world, teenaged Nailer scavenges copper wiring from grounded oil tankers for a living, but when he finds a beached clipper ship with a girl in the wreckage, he has to decide if he should strip the ship for its wealth or rescue the girl. This is action-packed and very well-written.
Eminent Harvard biologist E. O. Wilson, Pulitzer Prize-winning author of more than twenty works of nonfiction, has written his first novel, Anthill, about the interdependence of life in our biosphere. Raphael Semmes Cody, a lonely child of contentious parents (gentry v. redneck) in south Alabama, relishes summer freedom in a tract of old-growth longleaf pine forest and savanna on Lake Nokobee. He wanders off to observe salamanders and snakes and becomes enthralled by bugs ("every kid has a bug period" says Wilson. "Mine was especially intense and I never grew out of it."). His fascination becomes a lifelong focus, which guides his direction and purpose in mediating competing interests of environmentalists and business. The novel includes "The Anthill Chronicles", a story within the story, which is a riveting account of three colonies of ants, their wars, destruction, and survival, told from their point of view. The simplicity of this satisfying coming of age tale belies an admirable complexity in its portrayal of the interrelatedness of all life. Anthill bears comparison to Huck Finn and Homer’s Iliad in the recounting of epic journeys and the clash of civilizations. It is also very funny and full of sly observations about the "gray wool of the Confederacy" and "zircons in the rough". Anthill is destined to become a classic.
For more about caring for the Earth, visit www.tcplweb.org or call 988-2541.
| 3.379011 |
New Zealand grasshoppers belong to the subfamily Catantopinae. A number of species are present including the common small Phaulacridium of the more coastal areas, the larger species of Sigaus of the tussock lands, and the alpine genera Paprides and Brachaspis, which include some quite large species. These inhabit the alpine areas of the South Island, some preferring scree and others tussock areas. They apparently survive the rigorous alpine winter conditions both as nymphs and as adults, and it is possible that they can withstand complete freezing. All species are plant feeders and lay batches of eggs or pods in short holes in the ground which they excavate with their abdomen. After hatching, the young nymphs moult four or five times before becoming adult.
by Graeme William Ramsay, M.SC., PH.D., Entomology Division, Department of Scientific and Industrial Research, Nelson.
| 3.514025 |
I’m still following the Assembly Primer for Hackers from Vivek Ramachandran of SecurityTube in preparation for Penetration Testing with BackTrack. In this review I’ll cover data types and how to move bytes, numbers, pointers and strings between labels and registers.
Variables (data/labels) are defined in the .data segment of your assembly program. Here are some of the available data types you’ll commonly use.
Data types in assembly; photo credit to Vivek Ramachandran
# Demo program to show how to use Data types and MOVx instructions

.data
HelloWorld:    .ascii "Hello World!"
ByteLocation:  .byte 10
Int32:         .int 2
Int16:         .short 3
Float:         .float 10.23
IntegerArray:  .int 10,20,30,40,50

.bss
.comm LargeBuffer, 10000

.text
.globl _start
_start:
    nop

    # Exit syscall to exit the program
    movl $1, %eax
    movl $0, %ebx
    int $0x80
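A practical note (an assumption about your setup, not part of the original lesson): these examples are 32-bit code, so on a 64-bit Linux system with the GNU toolchain you would assemble and link the demo with something like as --32 -o demo.o demo.s followed by ld -m elf_i386 -o demo demo.o. The file names here are placeholders, and your toolchain's flags may differ.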
Moving numbers in assembly
Introduction to mov
This is the mov family of operations. By appending b, w or l you can choose to move 8 bits, 16 bits or 32 bits of data. To demonstrate these operations, we’ll be using the example above.
Moving a byte into a register
movb $0, %al
This will move the integer 0 into the lower 8 bits of the EAX register.
Moving a word into a register
movw $10, %ax
This will move the integer 10 into the lower 16 bits of the EAX register.
Moving a long into a register
movl $20, %eax
This will move the integer 20 into the 32-bit EAX register.
Moving a word into a label
movw $50, Int16
This will move the integer 50 into the 16-bit label Int16.
Moving a label into a register
movl Int32, %eax
This will move the contents of the Int32 label into the 32-bit EAX register.
Moving a register into a label
movb %al, ByteLocation
This will move the contents of the 8-bit AL register into the 8-bit ByteLocation label.
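To see all of these in context, here is a minimal runnable sketch that collects the moves above into one 32-bit Linux program. It reuses the labels from the demo program, and the exit syscall is unchanged:

# Sketch: the mov examples above, collected into one program
.data
ByteLocation:  .byte 10
Int32:         .int 2
Int16:         .short 3

.text
.globl _start
_start:
    movb $0, %al            # 8-bit move into the low byte of EAX
    movw $10, %ax           # 16-bit move into the low word of EAX
    movl $20, %eax          # 32-bit move into EAX
    movw $50, Int16         # store a 16-bit value at a label
    movl Int32, %eax        # load the contents of Int32 into EAX
    movb %al, ByteLocation  # store the low byte of EAX at a label

    # Exit syscall to exit the program
    movl $1, %eax
    movl $0, %ebx
    int $0x80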
Accessing memory locations (using pointers)
In C we have the concept of pointers. A pointer is simply a variable that points to a location in memory. Typically that memory location holds some data that is important to us and that’s why we’re keeping a pointer to it so we can access the data later. This same concept can be achieved in assembly.
Moving a label’s memory address into a register (creating a pointer)
movl $Int32, %eax
This will move the memory location of the Int32 label into the EAX register. In effect the EAX register is now a pointer to the data held by the Int32 label. Notice that we use movl because memory locations are 4 bytes. Also notice that to access the memory location of a label you prepend the $ character.
Dereferencing a pointer (accessing the contents of a memory address)
Moving a word into a dereferenced location
movl $9, (%eax)
This will move the integer 9 into the memory location held in EAX. In other words, if this were C, %eax would be considered a pointer and (%eax) would be the way we dereference that pointer to change the contents of the location it points to. The equivalent in C would look something like this:
int Int32 = 2;
int *eax;
eax = &Int32;
*eax = 9;
The only difference in the C example is that we had to define eax as an int pointer before we could copy the address of Int32. In assembly we can just copy the address of Int32 directly into the EAX register, circumventing the need for an additional variable. But line 4 of this C example is the equivalent of the assembly example shown above.
So to clarify one more time, EAX does not change at all in this example; EAX still points to the same location! However, the data at that location has changed. So if EAX contains the location of the Int32 label, then Int32 now contains 9. So it’s Int32 that has changed, not EAX.
Notice that we use the parentheses to access the memory location stored in the register (dereference the pointer).
Moving a dereferenced value into a register
movl (%eax), %ebx
This will copy the 4 bytes stored at the memory address held in EAX into the EBX register. In other words, EBX now holds a copy of the value that EAX points to — the pointer itself is unchanged. Notice that to access the memory location held in the register we're again enclosing the register name in parentheses.
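Here is a minimal sketch tying the pointer examples together. The only addition is that EBX is left holding the dereferenced value when the exit syscall runs, so the value becomes the program's exit status (running echo $? afterwards should print 9 — a hypothetical check, not part of the original lesson):

# Sketch: create a pointer, write through it, read it back
.data
Int32: .int 2

.text
.globl _start
_start:
    movl $Int32, %eax   # EAX holds the address of Int32 (a pointer)
    movl $9, (%eax)     # write 9 through the pointer; Int32 now contains 9
    movl (%eax), %ebx   # read it back; EBX now holds 9

    movl $1, %eax       # exit syscall; the exit status comes from EBX
    int $0x80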
Moving strings in assembly
I can imagine that reading this you might be thinking, “hey, strings are just bytes of data so why can’t I just move them using the same instructions I just learned?” And the answers to that questions is you can! The problem is that strings are oftentimes much larger. A string might be 1 byte, 5 bytes, or 100 bytes. And none of mov instructions discussed above cover anything larger than 4 bytes. So let’s discuss the string operations that are available to alleviate the pains of copying large strings of data.
A key difference between the standard mov operations and the string series of movs, stos and lods operations is the number of operands. With mov, you specify the source and destination via 2 operands. However, with the movs instructions, the source and destination addresses are placed into the ESI and EDI registers respectively. And with stos and lods, the operations interact directly with the EAX register. This will become more clear with some examples.
The DF flag
DF stands for direction flag. This is a flag stored in the CPU that determines whether to increment or decrement a string’s memory address when string operations are called. When DF is 0 (cleared) the addresses are incremented. When DF is 1 (set) the addresses are decremented. In our examples the DF flag will always be cleared.
The usefulness of the DF flag will make more sense in the examples.
Clearing the DF flag
cld

DF is set to 0. Addresses are incremented where applicable.
Setting the DF flag
std

DF is set to 1. Addresses are decremented where applicable.
In the example below, the following variables have been defined:
.data
HelloWorldString:
    .asciz "Hello World of Assembly!"

.bss
.lcomm Destination, 100
movs: Moving a string from one memory location to another memory location
source: %esi; should contain a memory address where the data to be copied resides; the data at this address is not modified, but the address stored in the %esi register is incremented or decremented according to the DF flag

destination: %edi; should contain a memory address where the data will be copied to; after copying, the address stored in the %edi register is incremented or decremented according to the DF flag
movsb: move a single byte
movsw: move 2 bytes
movsl: move 4 bytes
movl $HelloWorldString, %esi
movl $Destination, %edi
movsb
movsw
movsl
In this example, we first move the address of HelloWorldString into the ESI register (the source string). Then we move the address of Destination into EDI (the destination buffer).
When movsb is called, it tells the CPU to move 1 byte from the source to the destination, so the ‘H’ is copied to the first byte in the Destination label. However, that is not the only thing that happens during this operation. You may have noticed that I pointed out how the addresses stored in the %esi and %edi registers are both incremented or decremented according to the DF flag. Since the DF flag is cleared, both %esi and %edi are incremented by 1 byte.
But why is this useful? Well, what it means is that the next string operation to be called will start copying from the 2nd byte of the source string instead of the first byte. In other words, rather than copying the ‘H’ a second time, we’ll start by copying the ‘e’ in the HelloWorldString instead. This is what makes the movs series of operations far more useful than the mov operations when dealing with strings.
So, as you might imagine, when calling movsw the next 2 bytes are copied and Destination now holds “Hel”. And finally the movsl operation copies 4 bytes into Destination, which makes it “Hello W”.
Of course, the memory locations held in both %esi and %edi have now been incremented by 7 bytes each. So the final values are..
%esi: $HelloWorldString+7
%edi: $Destination+7

HelloWorldString: "Hello World of Assembly!"
Destination: "Hello W"
lods: Moving a string from a memory location into the EAX register
source: %esi; should contain a memory address where the data to be copied resides; the data at this address is not modified, but the address stored in the %esi register is incremented or decremented according to the DF flag

destination: %eax; the contents of this register are discarded because the data is copied directly into the register, NOT to any memory address residing in the register; no incrementing or decrementing occurs because the destination is a register and not a memory location
lodsb: move a single byte
lodsw: move 2 bytes
lodsl: move 4 bytes
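As a brief sketch (reusing the HelloWorldString label declared above), this is what lods looks like in practice:

cld                            # addresses will be incremented
movl $HelloWorldString, %esi
lodsb                          # AL now holds 'H'; ESI advances to the 'e'
lodsw                          # AX now holds the next 2 bytes, "el"; ESI advances by 2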
stos: Moving a string from the EAX register to a memory location
source: %eax; the contents of this register are copied, NOT the contents of any memory address residing in the register; no incrementing or decrementing occurs because the source is a register and not a memory location

destination: %edi; should contain a memory address where the data will be copied to; after copying, the address stored in the %edi register is incremented or decremented according to the DF flag
stosb: move a single byte
stosw: move 2 bytes
stosl: move 4 bytes
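And a matching sketch for stos, reusing the Destination buffer: the value to be stored is placed in AL before the instruction is issued:

cld                      # addresses will be incremented
movl $Destination, %edi
movb $65, %al            # 65 is ASCII 'A'
stosb                    # writes 'A' into Destination; EDI advances by 1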
rep: Repeating an operation so you can move strings more easily
rep movsb

This will continue executing the movsb operation and decrementing the ECX register until it equals 0. So if you wanted to copy a string in its entirety, you could follow this pseudo-code:
* set ESI to the memory address of the source string
* set EDI to the memory address of the destination string
* set ECX to the length of the source string
* clear the DF flag so ESI and EDI will be incremented for each call to movsb
* call rep movsb
movl $HelloWorldString, %esi
movl $DestinationUsingRep, %edi
movl $25, %ecx    # because HelloWorldString contains 24 characters + a null terminator
cld
rep movsb
Here we have movsb being called 25 times (the value of ECX). Because movsb increments both the ESI and EDI register you don’t have to concern yourself with the memory handling at all. So at the end of the example, the values are..
%esi: $HelloWorldString+25
%edi: $DestinationUsingRep+25
%ecx: 0
DF: 0

HelloWorldString: "Hello World of Assembly!"
DestinationUsingRep: "Hello World of Assembly!"
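The rep prefix combines with the stos family in the same way. As a sketch, this fills the entire 100-byte Destination buffer with zeros — a common way to clear memory:

cld
movl $Destination, %edi
movl $0, %eax            # the byte to store (AL = 0)
movl $100, %ecx          # Destination was declared with 100 bytes
rep stosb                # stores AL 100 times, incrementing EDI each time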
More to Come
I hope you enjoyed reviewing data types and mov operations. Stay tuned for more assembly tips!
| 3.232009 |
It is not uncommon for patients to have increasing viral loads while on treatment. However, patients can have a disconnect: they may have detectable viral load and yet still be deriving benefit from their failing regimen. Their CD4 T-cell counts are not plummeting and are often still increasing. Overall, their general health remains well. These individuals continue with their daily life and routine, without any clinical consequences. The discordance between T-cells and viral load is often referred to as the Disconnect Syndrome. It is not a new disease, just an observation of the difference between viral and immunologic lab measures (viral load test vs. T-cell count).
Usually, patients on antiretroviral treatment demonstrate a drop in viral load (often to undetectable levels) while improving their immune system with an accompanying T-cell rise. The disconnect syndrome of rising viral load along with stable or improving immune markers such as T-cells is more common among patients who have a longer history of being on several antiviral regimens. Viral drug resistance, which is associated with decreased efficacy of treatment, is not uncommon for these patients. They have fewer options than patients on their very first antiviral regimen.
Usually, patients with an overtly failing regimen need to undergo changes in their antiviral treatment. This is a basic tenet of care for the chronically HIV-infected individual. This is done to halt progression of HIV disease, to preserve immune system function and to avoid further resistance development.
However, in the unique situation of the disconnect syndrome, a question may be posed: Does every discordant patient merit a change in antiretroviral therapy? Sometimes a clinician may consider the fact that the viral load (HIV RNA) has not reached levels high enough to merit exposing their patient to further antiretroviral drugs. Many patients in this disconnect situation have already been exposed to multiple antiviral agents. Since undetectability does not mean one is cured, one must weigh the risks and benefits of modifying the regimen in order to lower the viral load.
There exists a dilemma when considering altering a regimen in this unique situation. New antivirals to reduce viral load may forestall the emergence of more resistance mutations. On the other hand, one must consider that changing to yet another new regimen will reduce options for the future. This is critical in situations where new options for specific and heavily treated patients are not plentiful. Realistically, formulating a regimen for a heavily treated patient is often challenging because of the presence of multiple resistance mutations. Therefore the likelihood or durability of fully suppressing viral load with a new regimen is in question.
Thus management of patients who are highly treatment experienced and who have a discordant response is a real quandary. It is believed that continuing the failing regimen further selects for resistance mutations, therefore further limiting future therapeutic options. But when there is stability in the elevated viral load together with increasing CD4 counts going yet higher, patients are obviously still deriving clinical benefit. No large prospective clinical trial has been performed to help provide insight for this situation.
Patients who manifest a disconnect do not have undetectable viral loads, so by definition they generally have mutations or resistance. These mutations occur in the virus itself, usually in response to drugs used against it. The mutations in turn allow HIV to develop drug resistance. This means what it says: HIV can resist the drug or drugs, therefore making the medications less effective in fighting the virus.
Individuals with a discordant response usually exhibit high numbers of mutations against the nucleoside drug class, which often includes the M184V mutation. This specific M184V mutation (the name refers to an amino acid substitution at position 184 of HIV's reverse transcriptase) is best known for being the tell-tale sign of 3TC (Epivir) resistance. But having the 184V resistance mutation has also been associated with sustained responses to antivirals, confirmed in several studies. Generally, cross-resistance would be a concern. Mutations that resist one drug may also resist another, especially one in the same drug class. This may lower the efficacy of new drugs which a patient has never taken before. However, if one has the 184V, without other nucleoside mutations, it does not confer resistance to other nucleosides such as ddI, d4T, ddC or abacavir (Videx, Zerit, Hivid or Ziagen). Also, the M184V seems to result in re-sensitization of the virus to AZT (Retrovir) in patients who previously developed resistance to AZT. Finally, the presence of 184V in highly experienced patients is associated with a better antiviral response to the newest HIV agent, tenofovir (Viread).
A complex interaction of viral and other factors are at play in discordant responses to HAART (highly active antiretroviral therapy). These include drug resistance mutations, replicative capacity and immunologic aspects.
The initial status of the patient, including CD4 T-cell count and presence of the 184V mutation before antiviral treatment, is predictive of responses to HAART and development of discordance. A lower CD4 count is more predictive of discordance. The more damaged ones immune system has become prior to treatment, the more difficult it may be for the immune system to assist in suppressing viral load later.
Often, T-cells remain stable or rise despite not obtaining optimally suppressed viral loads because HIV (though resistant) becomes weakened by antiviral drugs, impairing its ability to replicate. Thus the immune system is able to continue its restoration process. In other words, the antiviral treatments cause a decreased replicative capacity of the virus. In fact, there is a firm relationship between the high numbers of mutations and decreased replicative capacity of virus from people with discordance.
The disconnect syndrome can be explained in an alternative way. The M184V and other mutations may result in the virus becoming less fit than wild type. (Wild type is virus that has not mutated, seen usually in non-treated individuals.) The less fit the virus, the less able it is to overcome the effects of other antivirals. Additionally, the reverse transcriptase enzyme, which HIV uses to reproduce itself, is also crippled despite the presence of resistance, and thus becomes less able to help make copies of the virus. HIV cannot process its DNA strand (viral gene), and is therefore unable to replicate.
Finally, the development of additional mutations does not interfere with immune recovery during HAART. Measured by immune cell proliferation and response to interleukin-15 (a specific cytokine, a protein produced by immune cells, used for research purposes to measure immune response), researchers found that discordant patients had responses similar to those of fully responding patients (Stephano Vella and colleagues, 9th Retroviruses Conference, Seattle, February 2002).
Without attempting to advise whether patients in a disconnect situation should change or continue their treatment, the questions invoked here are placed on the table. The presence of primary resistance mutations can, oddly enough, be associated with the provision of some beneficial effects. However, developing resistance or discordance is not the preferred outcome. When a patient is facing this discordant predicament, the next path may not always be clear. Phenomena are occurring in the disconnect syndrome that are below the surface. A patient's decisions are often complicated by various confounding issues.
This is compounded by the fact that data regarding the long-term outlook of patients continuing in this disconnect pattern is sorely lacking. Some researchers have demonstrated higher progression rates while others concluded that the immunologic deterioration is delayed by an average of three years (Stephen Deeks and colleagues, University of California at San Francisco). However, large trials of disconnected patients who continue to maintain good clinical immunologic response to HAART for a specified duration would provide greater insight into the risks. It seems that patients manifesting a disconnect who continue their treatment are stable clinically and not developing opportunistic infections. However, with the ongoing epidemic of resistance, it would be helpful to understand what it all means for a patient's health and longevity.
Daniel S. Berger, M.D. is Medical Director for NorthStar Healthcare; Clinical Assistant Professor of Medicine at the University of Illinois at Chicago and editor of AIDS Infosource. He also serves as medical consultant and columnist for Positively Aware. Inquiries are welcomed by Dr. Berger; he can be reached at [email protected] or 773.296.2400.
| 3.030271 |
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or /ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh") and so on.
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola $x^2 - y^2 = 1$. Hyperbolic functions occur in the solutions of some important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert.
The hyperbolic functions are:

$$\sinh x = \frac{e^x - e^{-x}}{2}, \qquad \cosh x = \frac{e^x + e^{-x}}{2}, \qquad \tanh x = \frac{\sinh x}{\cosh x},$$

$$\coth x = \frac{\cosh x}{\sinh x}, \qquad \operatorname{sech} x = \frac{1}{\cosh x}, \qquad \operatorname{csch} x = \frac{1}{\sinh x}.$$
Via complex numbers the hyperbolic functions are related to the circular functions as follows:

$$\sin x = -i \sinh(ix), \qquad \cos x = \cosh(ix), \qquad \tan x = -i \tanh(ix),$$

where $i$ is the imaginary unit, defined as $i^2 = -1$.
Note that, by convention, $\sinh^2 x$ means $(\sinh x)^2$, not $\sinh(\sinh x)$; similarly for the other hyperbolic functions when used with positive exponents. Other notations for the hyperbolic cotangent exist, though $\coth x$ is far more common.
Hyperbolic sine and cosine satisfy the identity

$$\cosh^2 x - \sinh^2 x = 1,$$
which is similar to the Pythagorean trigonometric identity.
It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B.
For a full list of integrals of hyperbolic functions, see list of integrals of hyperbolic functions
$$\int \sinh x \, dx = \cosh x + C, \qquad \int \cosh x \, dx = \sinh x + C.$$

In the above expressions, C is called the constant of integration.
It is possible to express the above functions as Taylor series:

$$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!},$$

$$\cosh x = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!}.$$
A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh. However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle.
That the points (cosh t, sinh t) trace out only the right branch of the hyperbola $x^2 - y^2 = 1$ follows from the identity $\cosh^2 t - \sinh^2 t = 1$ and the property that cosh t ≥ 1 for all t.
The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent).
The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point (cosh t, sinh t) on the hyperbola.
The function cosh x is an even function, that is cosh(−x) = cosh x, so its graph is symmetric with respect to the y-axis.
The function sinh x is an odd function, that is −sinh x = sinh(−x), and sinh 0 = 0.
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs. This yields for example the addition theorems

$$\sinh(x+y) = \sinh x \cosh y + \cosh x \sinh y,$$

$$\cosh(x+y) = \cosh x \cosh y + \sinh x \sinh y,$$
the "double angle formulas"
and the "half-angle formulas"
The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x).
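For example, differentiating the exponential definitions directly shows where the sign difference goes:

$$\frac{d}{dx}\sinh x = \frac{d}{dx}\,\frac{e^x - e^{-x}}{2} = \frac{e^x + e^{-x}}{2} = \cosh x, \qquad \frac{d}{dx}\cosh x = \frac{d}{dx}\,\frac{e^x + e^{-x}}{2} = \frac{e^x - e^{-x}}{2} = \sinh x.$$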
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity.
From the definitions of the hyperbolic sine and cosine, we can derive the following identities:

$$e^x = \cosh x + \sinh x, \qquad e^{-x} = \cosh x - \sinh x.$$
These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials.
Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic.
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
| 4.06636 |
Cloudy outlook for climate models
More aerosols - the solution to global warming?
Climate models appear to be missing an atmospheric ingredient, a new study suggests.
December's issue of the International Journal of Climatology from the Royal Meteorlogical Society contains a study of computer models used in climate forecasting. The study is by joint authors Douglass, Christy, Pearson, and Singer - of whom only the third mentioned is not entitled to the prefix Professor.
Their topic is the discrepancy between troposphere observations between 1979 and 2004 and what computer models have to say about the temperature trends over the same period. While they focus on tropical latitudes between 30 degrees north and south (mostly 20 degrees N and S) - because, they write, "much of the Earth's global mean temperature variability originates in the tropics" - the authors nevertheless crunched through an unprecedented amount of historical and computational data in making their comparison.
For observational data they make use of ten different data sets, including ground and atmospheric readings at different heights.
On the modelling side, they use the 22 computer models which participated in the IPCC-sponsored Program for Climate Model Diagnosis and Intercomparison. Some models were run several times, to produce a total of 67 realisations of temperature trends. The IPCC is the United Nation's Intergovernmental Panel on Climate Change and published their Fourth Assessment Report [PDF, 7.8MB] earlier this year. Their model comparison program uses a common set of forcing factors.
Notable in the paper is a generosity when calculating the statistical uncertainty of the model data. In aggregating the models, the uncertainty is derived by plugging the number 22 into the maths, rather than 67. The effect of using 67 would be to narrow the uncertainty band around the average trend - with the implication of making it harder to reconcile any discrepancy with the observations. In addition, when they plot and compare the observational and computed data, they also double this error interval.
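The arithmetic behind that choice is the familiar standard-error formula, sigma / sqrt(N). A minimal sketch showing how the band widens with N = 22 rather than N = 67 (the spread value below is purely illustrative, not a number from the paper):

```python
import math

# Standard error of the mean scales as 1/sqrt(N), so aggregating over
# 22 models instead of 67 realisations yields a wider uncertainty band.
sigma = 0.10  # hypothetical inter-model spread, degrees C per decade

se_22 = sigma / math.sqrt(22)
se_67 = sigma / math.sqrt(67)
print(f"SE with N=22: {se_22:.4f}, SE with N=67: {se_67:.4f}")
print(f"the N=22 band is {se_22 / se_67:.2f} times wider")  # about 1.75x
```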
So to the burning question: on their analysis, does the uncertainty in the observations overlap with the results of the models? If yes, then the models are supported by the observations of the last 30 years, and they could be useful predictors of future temperature and climate trends.
Unfortunately, the answer according to the study is no. Figure 1 in the published paper, available here [PDF], pretty much tells the story.
Douglass et al. Temperature time trends (degrees per decade) against pressure (altitude) for 22 averaged models (shown in red) and 10 observational data sets (blue and green lines). Only at the surface are the mean of the models and the mean of observations seen to agree, within the uncertainties.
While trends coincide at the surface, at all heights in the troposphere the computer models indicate that higher temperature trends should have occurred. More significantly, there is no overlap between the uncertainty ranges of the observations and those of the models.
In other words, the observations and the models seem to be telling quite different stories about the atmosphere, at least as far as the tropics are concerned.
So can the disparities be reconciled?
| 3.261912 |
Over 8,000 websites created by students around the world who have participated in a ThinkQuest Competition.
Men in White
Our website teaches the history of one of the most well-known groups in the United States. The Ku Klux Klan is a group that is prejudiced and unjust. We have covered the history of this group because we just read a book in class that featured the Ku Klux Klan. We hope that people do not think we support this group, because we do not!
12 & under
History & Government
| 3.016571 |
One of the most prominent features in the Solar System is Jupiter’s Red Spot. This is a massive storm three times the size of the Earth that has been raging across the cloud tops of Jupiter since astronomers first looked at it with a telescope.
Known as the Great Red Spot, this is an anticyclonic (high pressure) storm that drifts around the planet at about 22° south of the equator. Astronomers think that its darker red color comes from how it dredges up sulfur and ammonia particles from deeper down in Jupiter’s atmosphere. These chemicals start out dark and then lighten as they’re exposed to sunlight. Smaller storms on Jupiter are usually white, and then darken as they get larger. The recently formed Red Spot Jr. storm turned from white to red as it grew in size and intensity.
Astronomers aren’t sure if Jupiter’s Red Spot is temporary or permanent. It has been visible since astronomers started making detailed observations in the 1600s, and it’s still on Jupiter today. Some simulations have predicted that a storm like this might be a permanent fixture on Jupiter. You can still see the Red Spot with a small telescope of about 15 cm (6 inches) aperture or larger.
The edge of the Red Spot is turning at a speed of about 360 km/h (225 mph). The spot ranges in size from 24,000 km x 12,000 km up to 40,000 km across at its widest. You could fit two or three Earths inside the storm. The edge of the storm lifts up about 8 km above the surrounding cloud tops.
Astronomers have noticed that it’s been slowly shrinking over the last decade or so, losing about 15% of its total size. This might be a temporary situation, or Jupiter’s Red Spot might go on losing its size. If it continues, it should look almost round by 2040.
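The size comparisons in this article are simple to reproduce; a small sketch using the quoted figures (Earth's mean diameter of roughly 12,742 km is the only number not taken from the text, and the shrinkage extrapolation is deliberately naive):

```python
# Rough arithmetic on the quoted Red Spot dimensions.
EARTH_DIAMETER_KM = 12_742  # assumed mean diameter of Earth

print(24_000 / EARTH_DIAMETER_KM)  # ~1.9 Earths across the narrower extent
print(40_000 / EARTH_DIAMETER_KM)  # ~3.1 Earths across the widest extent

# Naive extrapolation of ~15% shrinkage per decade over three decades:
size = 1.0
for _ in range(3):
    size *= 0.85
print(f"relative size after three decades: {size:.2f}")  # about 0.61
```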
We’ve also recorded an episode of Astronomy Cast all about Jupiter. Listen here, Episode 56: Jupiter.
| 3.799739 |
The Story of Creation.* 1In the beginning, when God created the heavens and the eartha— 2* and the earth was without form or shape, with darkness over the abyss and a mighty wind sweeping over the waters—b
3Then God said: Let there be light, and there was light.c 4God saw that the light was good. God then separated the light from the darkness. 5God called the light “day,” and the darkness he called “night.” Evening came, and morning followed—the first day.*
6Then God said: Let there be a dome in the middle of the waters, to separate one body of water from the other. 7God made the dome,* and it separated the water below the dome from the water above the dome. And so it happened.d 8God called the dome “sky.” Evening came, and morning followed—the second day.
9Then God said: Let the water under the sky be gathered into a single basin, so that the dry land may appear. And so it happened: the water under the sky was gathered into its basin, and the dry land appeared.e 10God called the dry land “earth,” and the basin of water he called “sea.” God saw that it was good. 11f Then God said: Let the earth bring forth vegetation: every kind of plant that bears seed and every kind of fruit tree on earth that bears fruit with its seed in it. And so it happened: 12the earth brought forth vegetation: every kind of plant that bears seed and every kind of fruit tree that bears fruit with its seed in it. God saw that it was good. 13Evening came, and morning followed—the third day.
14Then God said: Let there be lights in the dome of the sky, to separate day from night. Let them mark the seasons, the days and the years,g 15and serve as lights in the dome of the sky, to illuminate the earth. And so it happened: 16God made the two great lights, the greater one to govern the day, and the lesser one to govern the night, and the stars.h 17God set them in the dome of the sky, to illuminate the earth, 18to govern the day and the night, and to separate the light from the darkness. God saw that it was good. 19Evening came, and morning followed—the fourth day.
20i Then God said: Let the water teem with an abundance of living creatures, and on the earth let birds fly beneath the dome of the sky. 21God created the great sea monsters and all kinds of crawling living creatures with which the water teems, and all kinds of winged birds. God saw that it was good, 22and God blessed them, saying: Be fertile, multiply, and fill the water of the seas; and let the birds multiply on the earth.j 23Evening came, and morning followed—the fifth day.
24k Then God said: Let the earth bring forth every kind of living creature: tame animals, crawling things, and every kind of wild animal. And so it happened: 25God made every kind of wild animal, every kind of tame animal, and every kind of thing that crawls on the ground. God saw that it was good. 26l Then God said: Let us make* human beings in our image, after our likeness. Let them have dominion over the fish of the sea, the birds of the air, the tame animals, all the wild animals, and all the creatures that crawl on the earth.
27God created mankind in his image;
in the image of God he created them;
male and female* he created them.
28God blessed them and God said to them: Be fertile and multiply; fill the earth and subdue it.* Have dominion over the fish of the sea, the birds of the air, and all the living things that crawl on the earth.m 29* n God also said: See, I give you every seed-bearing plant on all the earth and every tree that has seed-bearing fruit on it to be your food; 30and to all the wild animals, all the birds of the air, and all the living creatures that crawl on the earth, I give all the green plants for food. And so it happened. 31God looked at everything he had made, and found it very good. Evening came, and morning followed—the sixth day.o
* [1:1–2:3] This section, from the Priestly source, functions as an introduction, as ancient stories of the origin of the world (cosmogonies) often did. It introduces the primordial story (2:4–11:26), the stories of the ancestors (11:27–50:26), and indeed the whole Pentateuch. The chapter highlights the goodness of creation and the divine desire that human beings share in that goodness. God brings an orderly universe out of primordial chaos merely by uttering a word. In the literary structure of six days, the creation events in the first three days are related to those in the second three.
|1. light (day)/darkness (night)|=|4. sun/moon|
|2. arrangement of water|=|5. fish + birds from waters|
|3. a) dry land|=|6. a) animals|
|   b) vegetation| |   b) human beings: male/female|
The seventh day, on which God rests, the climax of the account, falls outside the six-day structure.
Until modern times the first line was always translated, “In the beginning God created the heavens and the earth.” Several comparable ancient cosmogonies, discovered in recent times, have a “when…then” construction, confirming the translation “when…then” here as well. “When” introduces the pre-creation state and “then” introduces the creative act affecting that state. The traditional translation, “In the beginning,” does not reflect the Hebrew syntax of the clause.
* [1:2] This verse is parenthetical, describing in three phases the pre-creation state symbolized by the chaos out of which God brings order: “earth,” hidden beneath the encompassing cosmic waters, could not be seen, and thus had no “form”; there was only darkness; turbulent wind swept over the waters. Commencing with the last-named elements (darkness and water), vv. 3–10 describe the rearrangement of this chaos: light is made (first day) and the water is divided into water above and water below the earth so that the earth appears and is no longer “without outline.” The abyss: the primordial ocean according to the ancient Semitic cosmogony. After God’s creative activity, part of this vast body forms the salt-water seas (vv. 9–10); part of it is the fresh water under the earth (Ps 33:7; Ez 31:4), which wells forth on the earth as springs and fountains (Gn 7:11; 8:2; Prv 3:20). Part of it, “the upper water” (Ps 148:4; Dn 3:60), is held up by the dome of the sky (vv. 6–7), from which rain descends on the earth (Gn 7:11; 2 Kgs 7:2, 19; Ps 104:13). A mighty wind: literally, “spirit or breath [ruah] of God”; cf. Gn 8:1.
* [1:5] In ancient Israel a day was considered to begin at sunset.
* [1:7] The dome: the Hebrew word suggests a gigantic metal dome. It was inserted into the middle of the single body of water to form dry space within which the earth could emerge. The Latin Vulgate translation firmamentum, “means of support (for the upper waters); firmament,” provided the traditional English rendering.
* [1:26] Let us make: in the ancient Near East, and sometimes in the Bible, God was imagined as presiding over an assembly of heavenly beings who deliberated and decided about matters on earth (1 Kgs 22:19–22; Is 6:8; Ps 29:1–2; 82; 89:6–7; Jb 1:6; 2:1; 38:7). This scene accounts for the plural form here and in Gn 11:7 (“Let us then go down…”). Israel’s God was always considered “Most High” over the heavenly beings. Human beings: Hebrew ’ādām is here the generic term for humankind; in the first five chapters of Genesis it is the proper name Adam only at 4:25 and 5:1–5. In our image, after our likeness: “image” and “likeness” (virtually synonyms) express the worth of human beings who have value in themselves (human blood may not be shed in 9:6 because of this image of God) and in their task, dominion (1:28), which promotes the rule of God over the universe.
* [1:27] Male and female: as God provided the plants with seeds (vv. 11, 12) and commanded the animals to be fertile and multiply (v. 22), so God gives sexuality to human beings as their means to continue in existence.
* [1:28] Fill the earth and subdue it: the object of the verb “subdue” may be not the earth as such but earth as the territory each nation must take for itself (chaps. 10–11), just as Israel will later do (see Nm 32:22, 29; Jos 18:1). The two divine commands define the basic tasks of the human race—to continue in existence through generation and to take possession of one’s God-given territory. The dual command would have had special meaning when Israel was in exile and deeply anxious about whether they would continue as a nation and return to their ancient territory. Have dominion: the whole human race is made in the “image” and “likeness” of God and has “dominion.” Comparable literature of the time used these words of kings rather than of human beings in general; human beings were invariably thought of as slaves of the gods created to provide menial service for the divine world. The royal language here does not, however, give human beings unlimited power, for kings in the Bible had limited dominion and were subject to prophetic critique.
* [1:29] According to the Priestly tradition, the human race was originally intended to live on plants and fruits as were the animals (see v. 30), an arrangement that God will later change (9:3) in view of the human inclination to violence.
| 3.184275 |
Everyone is familiar with weather systems on Earth like rain, wind and snow. But space weather – variable conditions in the space surrounding Earth – has important consequences for our lives inside Earth’s atmosphere.
Solar activity occurring miles outside Earth’s atmosphere, for example, can trigger magnetic storms on Earth. These storms are visually stunning, but they can set our modern infrastructure spinning.
On Jan. 19, scientists saw a solar flare in an active region of the Sun, along with a concentrated blast of solar-wind plasma and magnetic field lines known as a coronal mass ejection that burst from the Sun’s surface and appeared to be headed for Earth.
When these solar winds met Earth’s magnetic field, the interaction created one of the largest magnetic storms on Earth recorded in the past few years. The storm peaked on Jan. 24, just as another storm began.
“These new storms, and the storm we witnessed on Sept 26, 2011, indicate the up-tick in activity coming with the Earth’s ascent into the next solar maximum,” said USGS geophysicist Jeffrey Love. This solar maximum is the period of greatest activity in the solar cycle of the Sun, and it is predicted to occur sometime in 2013, which will increase the number of magnetic storms on Earth.
Magnetic storms, said Love, are a space weather phenomenon responsible for the breathtaking lights of the aurora borealis, but also sometimes for the disruption of technology and infrastructure our modern society depends on. Large magnetic storms, for example, can interrupt radio communication, interfere with global-positioning systems, disrupt oil and gas well drilling, damage satellites and affect their operations, and even cause electrical blackouts by inducing voltage surges in electric power grids. Storms can also affect airline activity — as a result of last weekend’s storm, both Air Canada and Delta Air Lines rerouted flights over the Arctic bound for Asia as a precautionary measure. Although the storm began on January 19, it did not peak until January 24.
While this particular storm had minor consequences on Earth, other large storms can be crippling, Love said. He noted that the largest storm of the 20th century occurred in March 1989, accompanied by auroras that could be seen as far south as Texas, and sent electric currents into Earth’s crust that made their way into the high-voltage Canadian Hydro-Quebec power grid. This caused transformers to fail and left more than 6 million people without power for 9 hours. The same storm also damaged and disrupted the operation of satellites, GPS systems, and radio communication systems used by the United States military.
While large, the 1989 storm pales in comparison to one that occurred in September 1859 and is the largest storm in recorded history. Scientists estimate that the economic impact to the United States from a storm of the same size in today’s society could exceed $1 trillion as a result of the technological systems it could disrupt.
The USGS, a partner in the multi-agency National Space Weather Program, collects data that can help us understand how magnetic storms may impact the United States. Constant monitoring of Earth’s magnetic field allows us to better assess the impact of these phenomena on Earth’s surface. To do this, the USGS Geomagnetism Program maintains 14 observatories around the United States and its territories, which provide ground-based measurements of changes in the magnetic field. These measurements are being used by the NOAA Space Weather Prediction Center and the US Air Force Weather Agency to track the intensity of the magnetic storm generated by this solar activity.
In addition to providing data to its customers, the USGS produces models of the Earth’s magnetic field that are used in a host of applications, including GPS receivers, military and civilian navigational systems, and in research for studies of the effects of geomagnetic storms on the ionosphere (a shell of electrons and electrically charged atoms and molecules surrounding Earth), atmosphere, and near-space environment.
| 3.784155 |
Fact sheet on the historic and current conditions of mangroves of Dry Tortugas National Park, a cluster of islands and coral reefs west of Key West, Florida. Mangroves and nesting frigate bird colonies are at risk to destruction by hurricanes.
Doppler radar imaging is used for tracking the migration and behavior of bird populations. With the U.S. Fish and Wildlife Service and other agencies, USGS uses this technology to assist decision makers in balancing natural and industrial concerns.
Updated summaries of research in the Arctic National Wildlife Refuge, Alaska, on caribou, muskoxen, predators (grizzly bears, wolves, golden eagles), polar bears, snow geese and their wildlife habitats with maps of land-cover and vegetation.
Fact sheet on the need to protect biological soil crusts in the desert. These crusts are most of the soil surface in deserts not covered by green plants and are inhabited by cyanobacterium (blue-green algae) and other organisms useful to the ecosystem.
Overview of research of the Ecology Branch on the ecological consequences of habitat degradation due to altered environment, nonindigenous species, and atmospheric alterations. Includes links to staff and research projects.
Assessment of the importance of the Conservation Reserve Program in preventing the decline of grassland breeding birds by preserving grassland habitats in North Dakota. Published as Wilson Bulletin v. 107 no. 4, pp. 709-718 (1995).
Study of wildland fire history and fire ecology such as plants in the Sierra Nevada forests, California shrublands, the Mojave, and Sonoran deserts to develop management techniques that will reduce hazards.
Catalog of bird species common to forest and rangeland habitats in the U.S. with natural histories including taxonomic information, range, and habitat descriptions to assist land managers in resource management. Text available as a *.zip file.
Macroinvertebrate data collected by USGS or USFS from 73 sites from 2000 to 2007 and algal data collected from up to 26 sites between 2000 and 2001 in the Eagle River watershed, with emphasis on methods of sample collection and data processing.
Plan for an upcoming study, at the microbiological scale, of the benthic communities (including corals) that reside in and around mid-Atlantic canyons, which are located at the edge of the continental shelf.
Homepage for the Northern Prairie Wildlife Research Center, Jamestown, ND, with links to announcements, science programs, biological resources finder, publications search option, contacts, and answers to common questions about the Center
Links to ornithology programs at Patuxent Wildlife Research Center, Laurel, MD, including large scale survey analysis of bird populations, research tools, datasets and analyses, bird identification, and seasonal bird lists.
Core web page from America's first wildlife experiment station and a leading wildlife management refuge, the Patuxent Wildlife Research Center, Laurel, Maryland with links to projects, publications, library, contacts, and how to get there.
Biomonitoring projects studying the status and trends of the nation's environmental resources and programs studying amphibians and birds. Links to long-term programs, resources and references, and related links.
Report on effects of the increase of atmospheric carbon dioxide on plants and animals, especially birds, in the Great Plains including effects of carbon dioxide fertilization, ultraviolet radiation, climate change, and harmful effects on bird habitats.
Project is designed to integrate studies from a number of researchers compiling data from terrestrial, marine, and freshwater ecosystems within south Florida. Links to publications, maps, posters, and data of studies.
Report prepared for the U.S. Fish and Wildlife Service with descriptions of exotic aquatic species introduced in the southeast United States with information on populations, geographic distribution, and origins.
Complex interactions among hydrologic events initiated by people and the behaviors and characteristics of animal species (both native and introduced) lead to important scientific and management problems.
Information on the NWRC Wetlands Ecology Branch, which conducts research related to sustainable management and restoration of the nation's coastal saltwater wetlands, freshwater wetlands, submerged aquatic ecosystems, and coastal prairie.
How will the increasing use of wind turbines affect populations of wild birds and bats? This shows which birds and bats we study, and the aspects of their ecology that may be affected by wind energy development.
Main page for accessing links for information and data on the San Francisco Bay estuary and its watershed with links to highlights, water, biology, wetlands, hazards, digital maps, geologic mapping, winds, bathymetry and overview of the Bay.
Clearinghouse for the description and availability of multiple geospatial datasets relating to Alaska from many federal, state and local cooperating agencies under the coordination of the Alaska Geographic Data Committee.
Links to volcanism, volcanic history, volcanic rocks, and general geology by state, by region, national parks and national monuments and a brief introduction to volcanism around the U.S. entitled: Windows into the past.
They're abundant in this area, but hard to count reliably. We outline a procedure for estimating the population sizes so that we can determine whether they're increasing or dwindling. We must both listen for their calls and visually confirm them.
Homepage for the Dept. of the Interior's Initiative coordinated by the USGS, for amphibian (frogs, toads, salamanders and newts) monitoring, research, and conservation. Links to National Atlas for Amphibian Distribution, photos, and interactive map server
Links to information on species of frogs, toads, and salamanders located in the southeastern United States and the U.S. Virgin Islands, with information on appearance, habitats, calls, and status, plus photos, glossary, and provisional data.
We removed non-native fish from a section of the river and the endangered native species humpback chub increased in abundance. But it is not yet clear that decreased competition explains the rebound in population.
Background information and genetic sequencing data for more than 1,000 individual field isolates of the fish virus Infectious Hematopoietic Necrosis Virus (IHNV) collected in western North America from 1966 to the present, updated annually.
Wildlife you see in a national park or other reserved area don't know about the park boundary. Bobcat, martens, mink, and moose need different types of living space and habitat. Development outside the park affects their ability to inhabit the park.
Using genetic analysis of organic material found in aquatic environments it is possible to detect the presence of organisms without necessarily observing or capturing individuals. Explains terms, methods, and prospective utility of this approach.
Previous analysis showed this area to have reduced macroinvertebrate biodiversity, an important measure of ecosystem health. New observations indicate that conditions have improved; report includes methods and results of sampling.
Project of the Gulf of Mexico Integrated Science program that evaluates the transport and sedimentation of contaminates through the Mississippi River and Atchafalaya River delta to the near-shore Gulf of Mexico. Includes aerial photographs.
Small wetlands in this large area have hosted migratory birds for a long time, but with changes in agricultural practice and regional climate those habitats may not remain hospitable to the wild populations.
Brief review of bat research in the San Francisco Bay area and southern California providing land managers with information on the occurrence and status of bat species with links to bat inventories for California and related material.
A literature synthesis and annotated bibliography focus on North America and on refereed journals. Additional references include a selection of citations on bat ecology, international research on bats and wind energy, and unpublished reports.
A geologic and oceanographic study of the waters and Continental Shelf of Gulf of the Farallones adjacent to the San Francisco Bay region. The results of the study provide a scientific basis to evaluate and monitor human impact on the marine environment.
A web-enabled database that provides for the capture, curation, integration, and delivery of bioassessment data collected by USGS, principally macroinvertebrate, algae, fish, and supporting habitat data from rivers and streams.
Biomonitoring of Environmental Status and Trends (BEST) program is designed to assess and monitor the effects of environmental contaminants on biological resources with links to detailed information on specific species.
Explains biological soil crusts, organism-produced soil formations commonly found in semiarid and arid environments, with special reference to their biological composition, physical characteristics, and ecological significance.
Bird banding is used to study the movement, survival and behavior of birds. The Bird Banding Laboratory Site has links to the value, procedure and history of bird banding, how to report bird bands (English & Spanish), and resources for birders.
Geographical access to multiple bird checklists developed by others that indicate the seasonal occurrence of birds in a given area. A Record Documentation Form to document supporting details of rare bird observations is also available.
This web site is an outgrowth of an agreement between the USGS and the New England Aquarium, designed to summarize and make available results of scientific research. It will also present educational material of interest to wide audiences.
Description of bryophytes (mosses, liverworts, and hornworts) and lichens (dual organisms of a fungus and an alga or a cyanobacterium) that are part of forest ecosystems in the Pacific Northwest with information on habitat and conservation.
Manual for research program on the nesting habits of sea turtles of the Virgin Islands, with descriptions of species, nesting behavior, observation methods, record keeping, tagging, and tissue sample collection. (PDF file, 121 pp.)
Buffelgrass (Pennisetum ciliare) poses a problem in the deserts of the United States, growing in dense stands and introducing a wildfire risk in an ecosystem not adapted to fire. This report explains what we are doing to help mitigate its effects.
Combining genetic data with current and predicted climate scenarios, we are modeling the predicted future distributions of wildlife populations in the Arctic and identifying key environmental variables that determine important animal habitat.
Three themes of ongoing research: forecasting polar bear and walrus population response to changing marine ecosystems; measuring wildlife population changes in the Arctic coastal plain, and wildlife communities in the boreal-Arctic transition zone.
Identification of epiphytes (plants obtaining moisture and nutrients from the air and rain and usually living on another plant) on seaweed in Tampa Bay, Florida. Abstract of symposium presentation with photos.
Detailed publication on the classification system for an inventory of wetlands and deepwater habitats of the United States used to describe ecological taxa and arrange them in a system useful to resource managers.
Overview of interdisciplinary research studies in Glacier National Park to understand how this mountain wilderness responds to present climatic variability and other external stressors, such as air pollution, and links to detailed reports.
Home page for Coastal and Marine Geology with links to topics of interest (sea level change, erosion, corals, pollution, sonar mapping, and others), Sound Waves monthly newsletter, field centers, regions of interest, and subject search system.
Changes in both the ocean and coastal ecosystems may have negative effects on sea otter populations in the coastal Northwest and Alaska. A study underway will examine these factors and the overall health of sea otter populations.
Declines in fish and wildlife populations, water-quality issues, and changes in coastal habitats have prompted this USGS study of the region's nearshore life and environment. Includes links to data from published reports.
Coverage of the Coastal Prairie Ecology Research (CPER) Team, National Wetlands Research Center, providing scientific information to aid the conservation, management, and restoration of ecosystems in the greater coastal prairie region.
Website for the Columbia Environmental Research Center with links to staff, publications, databases, field stations, and projects including those on the Rio Grande, burrowing owls, sea turtles, and geospatial technology.
Links to Columbia Environmental Research Center online databases with text, data, and metadata on toxicity, Missouri River, biomonitoring of environmental status and trends, contaminants, and sediments.
Web site for an Internet Map Service (IMS) serving base cartographic data, USGS data, science applications and real time modelling analyses for the Columbia River basin using geospatial analysis technology.
Describes research to assess the effectiveness of the current system and distribution of marine reserves and protected areas in the Virgin Islands and Puerto Rico for conserving reef ecosystems and resources.
The information provided in the CEE-TV database profiles available geo-referenced information on contaminant exposure and effects in terrestrial vertebrates along the U. S. coasts. The database utilizes Microsoft's Access 2000 for Windows.
Report with mini-movie and photos on the hypothesis that the atmospheric transport of dust arising from the desertification in northern Africa led to algal infestation of corals, coral diseases, and the near extinction of associated sea urchins.
Shows how coral reef specimens are collected, the type of information gained from them, and the methods by which they are measured and studied to understand recent (past few centuries) changes in climate.
Locations for nine species of large constrictors, from published sources, along with monthly precipitation and average monthly temperature for those locations. Shapefiles for each snake species studied.
Three mathematical models using information about the geographic distribution and character of land surface characteristics along with proposed modifications or plausible events to determine the likely costs and benefits of actions and events.
Population size, foaling, deaths, age structure, sex ratio, age-specific survival rates, and more over a 14 year time span. This information will help land and wildlife managers find the best maintenance and conservation strategies.
Article from Wildlife Monographs no. 100 (1988) on the relationships of wetland habitat dynamics and life history to the breeding distributions of the various species of ducks with information on research methods and references.
Use of diatoms in biostratigraphy, coastal and estuarine studies, paleoceanology, paleoliminology, earthquake studies, environmental quality and forensic studies. Includes listing of USGS diatom projects and links to other diatom websites.
Maps of the ranges of tree species in North America compiled by Elbert Little and others were digitized for use in USGS vegetation and climate modeling studies. Can be downloaded as ArcView shapefiles and in PDF graphic files.
Comprehensive bibliography on the ecology, conservation, and management of North American waterfowl and their wetland habitats. Facilitates searching or downloading as *.zip files and use with ProCite utility.
Satellite images of geographic areas of interest, cities, deserts, glaciers, geologic features, disaster areas, water bodies, and wildlife linked with articles, maps, and other images such as AVHRR, photographs, and special project images.
Research and monitoring to develop fundamental understanding of ecosystem function and distributions, physical and biological components and trophic dynamics for freshwater, terrestrial, and marine ecosystems and the human and fish and wildlife communities
Comparison of water in two adjacent watersheds before and after implementing a brush management strategy in one of the watersheds helps us see what water resource characteristics are sensitive to brush management and how.
By measuring the current and historical growth rates of coral skeletons, and using field experiments, we intend to find out whether rising atmospheric CO2 and rising sea levels will cause coral reefs to erode and cease to function.
Study of the effects of the practice of cycling municipal nutrient-enriched wastewater from holding ponds through forested wetlands. Studies were in the Cypiere Perdue Swamp, Louisiana, and the Drummond Bog, Wisconsin.
Sixty-five sampling sites, selected by a statistical design to represent lengths of perennial streams in North Dakota, were chosen to be sampled for water chemistry and mercury in fish tissue to establish unbiased baseline data.
Sixty-five sampling sites, selected by a statistical design to represent lengths of perennial streams in North Dakota, were chosen to be sampled for fish and aquatic insects (macroinvertebrates) to establish unbiased baseline data.
Integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current on-line water-depth information for the entire freshwater portion of the Greater Everglades.
Wetlands and oil wells shouldn't mix, but in some areas they do. This explains what problems may arise and how we study the effects of highly salty water produced by oil wells when it leaks into nearby wetlands and streams.
Constructed farm ponds represent significant breeding, rearing, and overwintering habitat for amphibians in the Driftless Area Ecoregion of Minnesota, Wisconsin, and Iowa. Links to fact sheet, brochure, annual reports, field manual, and final report.
This program is focused on the study of fishes, fisheries, aquatic invertebrates, and aquatic habitats, and evaluates factors that affect aquatic organism health, population fitness, biological diversity, and aquatic community and habitat function.
Explains how we assessed the quality of a wetland as indicated by its plant species composition and abundance for marsh and swamp sites, to summarize the effectiveness of restoration projects in Louisiana.
Home page of the Forest and Rangeland Ecosystem Science Center, Corvallis, providing research and technical support for ecosystem management in the western U.S. Links to projects, field stations, fact sheets, partnerships, and publications.
Homepage of the Fort Collins Science Center in Colorado with links to programs in ecological research programs, staff directory, products library, news and events, and research features and spotlights.
Overview with links to studies on the effects of human activity on the San Francisco estuary with loss of historic fresh and saltwater tidal marshes reducing habitats, introducing contaminants in waste, and creating dredging problems.
Home page for the Front Range Infrastructure Resources Project, a demonstration study of the northern Colorado Front Range urban corridor and the entire Rocky Mountain Front Range urban corridor with links to projects, datasets, and publications.
Program to keep common species common by identifying those species and plant communities that are not adequately represented in existing conservation lands. Links to projects, applications, status maps, and a searchable database.
Describes the value of molecular biology genetic tools in enhancing the delineation of the genetic diversity and the effects of environmental degradation on living species. Links to research, which differentiated two species of sage-grouse.
Geographic Analysis and Monitoring Program (GAM) conducts studies about land surface change, environmental and human health, fire and urban ecology, and natural hazards to help decision-makers in land-use planning and land management.
Field methods, topics of investigation, shoreline changes, publications, and satellite imagery related to geologic and hydrologic processes affecting Lake Pontchartrain and adjacent lakes which form a large estuary in the Gulf Coast region.
Description of the Geospatial Multi-Agency Coordination (GeoMAC) project, online maps of current wildland fire locations using Netscape Communicator or Microsoft Internet Explorer, and user guide on how to use mapping application.
Review of the size of breeding populations of Giant Canada geese by states in the Mississippi, Atlantic, Central, and Pacific flyways and the management problems caused by rapid increases of local breeding populations.
Site for Great Lakes Science Center, Ann Arbor, which provides information about biological resources in the Great Lakes Basin. Links to personnel, publications, data, library, facilities, research vessels, Great Lakes issues, and research.
Website of the Gulf of Mexico Integrated Science program to understand the framework and processes of the Gulf of Mexico using Tampa Bay as a pilot study. Links to publications, digital library, water chemistry maps, epiphytes, and field trips.
Tide stage, specific conductance, water temperature, and freshwater inflow at selected Hudson River (New York) gages updated every 4-hours to measure the effects of freshwater withdrawals and upstream movement of the salt front.
Airborne scanning laser surveys (LIDAR) are used to obtaining data to investigate the magnitude and causes of coastal changes that occur during severe storms. Links to examples of coastal mapping during specific hurricanes.
A brief definition and explanation of hypoxia with special reference to the Gulf of Mexico hypoxic zone along the Louisiana-Texas coast as well as extensive links to USGS and other related information resources.
Information about the causes and impact of hypoxia with links to USGS and other Federal agency information and activities related to nutrients in the Mississippi River Basin and hypoxia in the Gulf of Mexico.
Description of the use of a miniature video-camera system deployed at nests of passerine species in North Dakota to videotape predation of eggs or nestlings by animals such as mice, ground squirrels, deer, cowbirds and others.
Recent physical changes over time, including trends toward earlier snowmelt runoff, decreasing river ice, and increasing spring water temperatures, may affect salmon populations; we want to know how important these effects are.
Article from Status and Trends of the Nation's Biological Resources on the serious impacts to river systems due to damming and flow regulation, and rehabilitation, monitoring, and research on such rivers.
Landscapes of interwoven wetlands and uplands offer a rich set of ecosystem goods and services. Changes in climate and land use can affect the value of those services. We study these areas to understand how they may be changing.
We conducted a national landowner survey, evaluated short-term vegetation responses to land management practices (primarily grazing, haying, and burning), and initiating a long-term vegetation monitoring study for wetland buffers.
Description of research program for immediate and long-term management of grizzly bears (Ursus arctos horribilis) inhabiting the Greater Yellowstone Ecosystem. Includes links to reports in PDF format and cooperating organizations.
Links to research projects that will improve the ability to detect, monitor, and predict the effects of invasive species, including exotic animals, on native ecosystems of the Pacific Southwest (California, Nevada, Utah, and Arizona).
Will salt marshes survive if sea level rises quickly? The answer depends on whether the areas surrounding them can allow salt marsh fauna and flora to migrate there. Local topography, both natural and manmade, is the main factor limiting this migration.
Handbook on monitoring methods for lake management, including program design, sampling methods and protocol, biota and chemical sampling methods, laboratory methods, preservation of data and samples, glossary, and bibliography. (PDF file, 92 pp.)
This website is a gateway to information and data on Lake Tahoe with links to Lake Tahoe Initiative, geography, history, lake facts, GIS Data, DEM, DOQ, DLG imagery, bathymetry, satellite imagery, land cover, census, soils, pictures, and general maps.
Homepage for the Leetown Science Center in West Virginia conducting research on aquatic and terrestrial organisms and their supporting ecosystems with links to directions, general description, library, projects, fact sheets, and facilities.
| 3.306431 |
Interactive Tool: When Are You Most Fertile?
What does this tool measure?
This interactive tool estimates your peak fertility period, also known as your "fertile window." This is when you are most likely to get pregnant. Do not use this tool to prevent pregnancy.
To find your peak fertility period, the tool first calculates the day you are most likely to ovulate. This is the day an ovary releases an egg. In the tool, you will enter the typical length of your menstrual cycle, and you will click on the first day of your last menstrual period.
- To know how long your cycles are, track the number of days on a calendar for 2 or 3 months or cycles. Your menstrual cycle begins with the day your period starts and ends the day before your next period starts.
- If you do not know the number of days in your menstrual cycle, you can use 28 days. This is the average length of a menstrual cycle. But if your cycle is longer or shorter than that, or if it is not always the same length, this tool will not predict your fertile window very well.
This calculator is meant to give you a rough estimate. Women usually ovulate at day 15, but it's also normal to ovulate well before or after the 15-day mark.
For information about reading your body's signs to tell when you will ovulate, see Fertility Awareness.
Health Tools
Health Tools help you make wise health decisions or take action to improve your health.
Interactive tools are designed to help people determine health risks, ideal weight, target heart rate, and more.
What do the results tell me?
Your "fertile window" is up to 6 days long, once a month. It includes:
- The day you ovulate. This is when you have the best chance of becoming pregnant. (A human egg usually lives for only 12 to 24 hours after ovulation. This is why you are not likely to get pregnant by having sex a day after you ovulate.)
- The 5 days before ovulation. This is because sperm can live in a woman's body for 3 to 5 days after sex. When an egg is released, one of these sperm is ready to fertilize it.
If you want to become pregnant, try to have sex every day or every other day from your first fertile day to your last fertile day.
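As a rough illustration of the arithmetic behind such a tool, here is a minimal sketch. It assumes the common heuristic that ovulation occurs about 14 days before the next period starts; the page does not state the exact formula this tool uses, so this is illustrative only and, like the tool itself, must not be used to prevent pregnancy:

```python
from datetime import date, timedelta

def fertile_window(last_period_start: date, cycle_length_days: int = 28):
    """Estimate the ~6-day fertile window described above.

    Assumes ovulation ~14 days before the next period begins (a common
    heuristic; not necessarily the exact rule this tool applies).
    """
    ovulation = last_period_start + timedelta(days=cycle_length_days - 14)
    window_start = ovulation - timedelta(days=5)  # sperm can live 3-5 days
    return window_start, ovulation

start, ovulation = fertile_window(date(2013, 3, 1), 28)
print(f"fertile window: {start} through {ovulation}")
# fertile window: 2013-03-10 through 2013-03-15 (ovulation on cycle day 15)
```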
What's next?
If your periods are irregular, this calculator is not a good way to predict your ovulation dates. Do not use this tool to prevent pregnancy.
Source: Fritz MA, Speroff L (2011). Clinical Gynecologic Endocrinology and Infertility, 8th ed., Philadelphia: Lippincott Williams and Wilkins.
References
Other Works Consulted
- Fritz MA, Speroff L (2011). Clinical Gynecologic Endocrinology and Infertility, 8th ed., Philadelphia: Lippincott Williams and Wilkins.
Credits
Primary Medical Reviewer: Adam Husney, MD - Family Medicine
Specialist Medical Reviewer: Kirtly Jones, MD - Obstetrics and Gynecology
Last Revised: October 29, 2012
To learn more visit Healthwise.org
© 1995-2013 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
| 3.098275 |
Ear Mites (Otodectes) in Cats
What are ear mites?
The ear mite Otodectes cynotis is a surface mite that lives on cats, dogs, rabbits and ferrets. It is usually found in the ear canal but it can also live on the skin surface. The entire ear mite life cycle takes place on animals. Ear mites are highly contagious, and cats become infested by direct contact with another infested animal. The mite is barely visible to the naked eye and may be seen as a white speck moving against a dark background.
What is the life cycle of the ear mite?
It takes approximately 3 weeks for a mite to develop from egg to adult, going through a total of 5 stages. Adult ear mites live about 2 months, during which time they continually reproduce. The entire ear mite life cycle takes place on the host animal, although mites can survive for a limited time in the environment.
What are the clinical signs of ear mites?
Ear mites are the most common cause of feline ear disease and infection. They are the second most common ectoparasite (external parasite) found on cats; the most common is the flea. Infestations are most common in kittens and young cats although cats of any age can be affected. Clinical signs of infestation vary in severity from one cat to another and include combinations of:
1. Ear irritation causing scratching at the ears or head shaking
2. A dark waxy or crusty discharge from the ear
3. Areas of hair loss resulting from self-trauma - scratching or excessive grooming
4. A crusted rash around or in the ear
5. An aural hematoma - a large blood blister on the ear, caused by rupture of small blood vessels between the skin and cartilage - caused by scratching at the ears
Skin lesions most frequently affect the ear and surrounding skin but occasionally other areas of the body may be affected.
How are ear mite infestations diagnosed?
Typical clinical signs with a history of contact with other cats or dogs would suggest the involvement of ear mites. Although ear mites cause over half of all feline ear disease, other conditions can result in very similar clinical signs and must be ruled out before treatment is begun.
A veterinarian makes the diagnosis by observing the mite. This is usually straightforward and may be done either by examination of the cat's ears with an otoscope or by microscopic examination of discharge from the ear. If the ears are very sore, the cat may need to be sedated to allow the ears to be properly examined and treated.
How are ear mites treated?
Three steps are required to treat ear mites successfully:
1. Treat the ears of all affected and susceptible pets
2. Treat the skin of all affected and susceptible pets
3. Treat the indoor environment because the mite is capable of limited survival off pets
"It is necessary for the entire course of treatment to last a minimum of three weeks."
Your veterinarian will advise you about which insecticidal products are suitable. There are several ear medications licensed for the treatment of ear mites in cats. No medication can penetrate the eggs or pupae, so treatment is directed at killing the adult and larval forms. Because of the length of the life cycle, it is necessary for the entire course of treatment to last a minimum of three weeks. There are no products licensed for use on the house or on an animal's skin but many products licensed for flea control are effective.
Your veterinarian may ask to re-examine the cat to ensure that the mites have been eliminated after the initial treatment has been performed.
Do ear mites affect people?
Ear mites may cause a temporary itchy rash on susceptible people if there are infested pets in the household. Eradication of the mites from the pets will cure the problem.
| 3.688318 |
The art of painting portrait miniatures has its origin with the illuminators of medieval times, whose tiny depictions of scenes from the Bible were incorporated into manuscripts. This art form developed and expanded in sixteenth and seventeenth century Europe when a demand grew for small mementos of wives and children or deceased relatives that could be carried when travelling, much as we carry a photograph of our wife or family in our wallets today. Miniature portraits played an important role in the personal relations of the upper middle-class and the nobility of the time; they were tokens of affection or love. In that respect they were similar to mourning brooches containing plaited locks of the hair of the “dearly departed” that also became common in Victorian times.
Indeed, miniature portraits came to be used by the royal courts of Europe as things approaching a currency. They were given to royal favourites by the monarch, exchanged with other Kings, Princes or Ambassadors, or created to commemorate a royal engagement or marriage. As the American periodical Scribner’s Magazine commented in 1897:
the miniature, the little picture that could be covered by a kiss or hidden in the palm of the hand had an intimate and personal quality, it was a pledge of affection, often a gauge of stolen joys; it could be carried by the exiled in never so hurried a flight, could be concealed in the lid of a comfit case
Prior to the eighteenth century, miniatures were painted in an assortment of media: oil, watercolour or sometimes enamel — but watercolour nevertheless predominated. They were also painted variously on vellum, chicken-skin or cardboard, and even on copper. During the eighteenth century, however, watercolour on ivory became the standard medium, and this continued until the miniature was gradually replaced by daguerreotypes and photography about the end of the nineteenth century. The zenith of the popularity of miniature portraits, both in Europe and North America was in mid-Victorian times.
Miniatures were usually small and oval or round. Some were as tiny as 40 mm by 30 mm. They were often enclosed in a locket or a covered “portrait box”. Indeed, the housing for the portraits was sometimes decorated with elements of death or romance such as carved initials or flowers or braided locks of hair. When used for mourning, appropriate imagery was sometimes incorporated on the reverse of the locket or frame, such as mourners at a tomb. As the genre moved into Victorian times some miniatures grew larger (up to 150 mm by 200 mm) and were painted in square or rectangular form, to be displayed on walls or in cabinets.

The innovative use of ivory as a “canvas” was introduced by the Italian painter Rosalba Carriera in about 1700, as it provided a luminous surface for transparent pigments such as watercolour. The ivory was cut from the elephant's tusk in thin sheets lengthways, sometimes so thin as to be almost translucent. Ivory is, however, difficult to paint on with watercolour, being greasy and non-absorbent. Miniaturists consequently roughened the surface with fine sandpaper or powdered pumice. They also bleached it in sunlight to make it more white. Another technique was to degrease it with vinegar and garlic, or by pressing it with a hot iron between sheets of paper. Some artists used a brush with a single hair, and added gum arabic to the paint to make it stickier. Some added liquid from the gall bladder of cows or bulls to make it flow more easily.

Victorian society particularly appreciated the technical difficulties of painting such small fine portraits, and that led to a finer appreciation of the particular aesthetics of the genre. Generally speaking, Victorian miniatures encompassed a lighter palette of colour, monochromatic backgrounds and brushwork that exploited the translucency of the ivory on which they were painted.
Among the best-known English miniature painters of the early nineteenth century were John Engleheart (1784 to 1862) and his uncle George Engleheart (1750 to 1829), who kept his colours in small, specially-made round ivory boxes with screw lids and used only ivory palettes, ivory mixing-bowls, small ivory basins in sets, and ivory brush rests; Richard Cosway (1742 to 1821); and Sir William Charles Ross (1794 to 1860). Those best known in the latter half of the nineteenth century include Alyn Williams (1865 to 1955), Maria Eliza Burt (1841 to 1931), who married the well-known war artist William Simpson, and the Australian Bess Norris (1878 to 1939).
Alyn Williams founded the Society of Miniature Painters in 1896, but its name soon changed to the Royal Miniature Society (on the granting of the Royal prerogative), and later still to the Royal Society of Miniature Painters, Sculptors and Gravers.
Last modified 2 May 2010
| 3.263258 |
Kusatsu-Shirane volcano

Kusatsu-Shirane volcano is a complex of overlapping cones and 3 lake-filled craters (Karagama, Yugama, Mizugama) at the summit. It is located 150 km NW of Tokyo.
All historical eruptions have consisted of phreatic explosions from the acidic crater lakes or their margins. There are fumaroles and hot springs around the flanks of the volcano, and many rivers draining from the volcano are acid. Similar to Ijen volcano, the crater was the site of active sulfur mining for many years during the 19th and 20th centuries.
Background: The andesitic-to-dacitic volcano was formed in 3 eruptive stages beginning in the early to mid Pleistocene. The Pleistocene Oshi pyroclastic flow produced extensive welded tuffs and non-welded pumice that covers much of the east, south and SW flanks. The latest eruptive stage began about 14,000 years ago.
Yugama crater ("hot-water cauldron") contains the most active vent. Its lake is 270 m wide and up to 27 m deep and contains highly acidic (pH = 1.2 ) and salty (app. 1% salt) water.
The surface temperature of the lake is heated by vents at the lake bottom. In the interval 1910 to 1918, water temperatures reached a maximum of 100°C. Subaqueous fumaroles with molten sulfur pools exist in the center of the lake.
- Smithsonian / GVP
- B. Takano, K. Watanuki (1990) "Monitoring of volcanic eruptions at Yugama crater lake by aqueous sulfur oxyanions", Journal of Volcanology and Geothermal Research, Volume 40, Issue 1, January 1990, Pages 71-87
A small hydrothermal explosion likely occurred at Kusatsu-Shirane volcano on 7 February 1996.
Broken pieces of the ice sheet 20-30 cm in diameter were found washed ashore and discolored water was observed at the NW part of the lake. Probably, a sudden discharge of fluid or a minor hydrothermal explosion was the cause.
A small eruption occurred from Yugama crater on 6 January 1989. It produced small amounts of volcanic ash on the frozen surface of the crater lake.
Kusatsu-Shirane volcano had 5 phreatic eruptions during 1982-83. Eruptions occurred on 26 October 1982, 29 December 1982, 26 July 1983, 13 November 1983, and on 10 December 1983. The water temperature at Yugama crater lake increased strongly and reached 55.5°C after the first eruption, but did not increase further during the later eruptions.
(Source: GVP/monthly reports)
A phreatic eruption occurred at Mizugama Crater on 2 March 1976. A new crater 50 m in diameter and 10 m deep was formed in the NE part of Mizugama Crater.
The only event interrupting 34 years of quiet at Kusatsu-Shirane volcano between the eruptions in 1942 and 1976 was a small phreatic explosion in 1958.
An eruption occurred in 1942 and was unusual as it came from a different location than most activity at the volcano. A cluster of small craters was formed on the north flank of Mizugama Crater. A second group of craters formed on the south flank of Yugama Crater.
The largest historical eruption at Kusatsu-Shirane volcano was in 1932. It produced a 20 cm ash deposit reaching up to 2 km east of Yugama Crater.
| 3.392471 |
Papandayan is situated on the island of Java, at 7.32°S and 107.73°E, rising 2,665 m asl, and lies approx. 175 km southeast of the Indonesian capital Jakarta. Papandayan is a complex stratovolcano with four large summit craters, the youngest of which was breached to the NE by collapse during a brief eruption in 1772 and contains active fumarole fields.

The lighter colored area on the left is Alun-Alun, the uppermost of the four craters. The valley below in the center of the photo extends to the NE in the breach left by the collapse in 1772. The volcano in the distance is Gunung Guntur, another historically active volcano.
Photo: Tom Casadevall, 1986 (USGS)
The broad, 1.1 km wide, flat-floored Alun-Alun crater truncates the summit of Papandayan, and Gunung Puntang to the north gives the volcano a twin-peaked appearance. Several episodes of collapse have given the volcano an irregular profile and produced debris avalanches that have impacted lowland areas beyond the volcano.
Since its first historical eruption in 1772, in which a catastrophic debris avalanche destroyed 40 villages, only two small phreatic eruptions have occurred from the vents in the NE-flank fumarole field, Kawah Mas.
Steam rises from a sulfur-encrusted fumarole at Kawah Mas (Golden Crater), a frequently visited destination at Papandayan. A large number of high-temperature fumaroles, with temperatures of several hundred degrees C, are located within Kawah Mas.
Photo by Lee Siebert, 1995 (Smithsonian Institution)
November 21st, 2002
Volcanic activity was dominated by explosions and emissions of medium-high intensity. These activities were accompanied by the collapsing of the crater wall nearby.

Activity at Kawah Baru, 17 November 2002
Up to 07:45 local time today, there were 98 explosions, which produced a white-grey ash plume that rose 200-600 m high and drifted westward. Seismographs continue to show volcanic tremors, emissions (medium-high intensity), and continuous explosions. The list of seismic records is: 10 shallow volcanic, 1 low frequency, and continuous tremor.
Photo from 14 November 2002
November 20th, 2002
A thick white ash plume was emitted westward and reached 100-1500 m asl. Heavy rain occurred at 05:02 local time. Ash eruptions also occurred, reaching 1500 m asl, with ashfall drifting northeast, then north and northwest. This eruption had its source in Nangklak crater, one of the eruption points. Ashfall reached 2 cm thickness within a radius of 2 km (northwest).
Seismic activity consisted of volcanic earthquakes, tremor, and medium-intensity continuous emission. One explosion earthquake was noted. The list of seismic records is: 17 shallow volcanic, 1 deep volcanic, 2 tectonic, continuous tremor, and 1 low frequency. The volcano status is at level 3.
Damage at a tea plantation (left) and vegetable fields (right) caused by ashfall. It looks like Norway during winter, with snow all over.
November 19th, 2002
Volcanic activity was dominated by ash emission, while medium-pressure ash explosions occurred continuously. A thin white ash plume rose westward from the crater, about 200-700 m high, with medium pressure; there was no rainfall around the volcano. Seismic activity included continuous emission earthquakes, continuous explosions, 2 volcanic earthquakes, continuous tremor, and tectonic events. On November 18th, 2002, at 12:00 local time, the volcano status was reduced.
November 17th, 2002
Despite the obvious dangers, many people entered the safety zone near Mount Papandayan's crater to get a closer look at the huge, majestic eruption. Hundreds of villagers, foreigners, photographers and cameramen took the risk of going closer than the one-kilometer cordon zone around the volcano's crater, which was still spewing ash and thick smoke on Saturday. (So if you ever wondered why people get caught in volcano eruptions, one of the answers is given here...)
Damage to rice fields caused by the lahar flood on 11 November 2002
The volcano is still dangerous and no one is allowed to enter the forbidden area. Despite the decreasing number of eruptions, there are no signs that the 2,665-meter-high volcanic peak will slow down completely, and the situation is still similar to that on Friday. There is nothing the authorities can do to help people if the volcano starts to discharge hot lava that could flow down to farmlands and villages.
Nonetheless, it was quite a scene for viewers on Saturday morning to see the volcano's awesome power as it spewed hot ash and thick smoke to a height of 6,000 meters in the air. The ash has been blown to the volcano's southeast, over several tea plantations in the Bandung area.
© TEMPO/Hariyanto 2002-11-14
November 16th, 2002
According to monitoring and seismographic records, the volcano, after its major eruption early this morning (Friday), has shown increasing activity by continually spewing hot ash and thick smoke to a height of 6,000 meters.
So far the volcano has spewed out more than one million cubic meters of volcanic material, including silica, magnesium and sulfur, which would be dangerous to humans if it were mixed with the mudflows moving from the volcano toward the Cileutik, Cibeureum and Cimanuk rivers.
Some 2,600 hectares of paddy fields in the regency were suffering a shortage of water, as the mudflows had damaged several dams on the Cibeureum River. About Rp 7.5 billion (US$830,000) to Rp 10 billion would be needed to repair the damaged dams over the next five years.
Much to do when you have to evacuate.
© TEMPO/Arie Basuki;2002-11-15
November 15th, 2002
Papandayan erupted early this morning, around 6:00 local time, belching thick smoke up to 6,000 m into the sky, sending hot ash and lava onto nearby areas, and triggering panic among residents. (The Indonesian Post said: 'There were no signs of hot lava or clouds of poisonous gas that often accompany such eruptions.')
A major eruption took place at 6:33 a.m. this morning. After that, more than 10 big eruptions were recorded within an hour. Friday morning's eruption was the biggest so far since the Papandayan volcano began to rumble early this week. Before the eruption, as many as 48 tremors were recorded in 12 hours.
The long-dormant Papandayan volcano has been causing havoc in the area since Monday afternoon, when lahar -- accumulated volcanic debris from past eruptions mixed with water -- on the volcano's peak broke off after heavy rains and poured down on nearby villages, burying 17 houses, an Islamic boarding school and two bridges.
Officials on Friday issued their maximum alert status for Papandayan in West Java province after a dawn eruption. Residents have been evacuated from a four-kilometre area around the volcano.
November 13th, 2002, 08:00 GMT
At least 5,000 people have fled their homes near a volcano in Indonesia's West Java province which is emitting smoke and mudflows. At least six of the eight craters of the Papandayan volcano began emitting white and black smoke on Tuesday.
The authorities have also closed an area of seven kilometres from the peak of the volcano. And here comes what is confusing: the volcano near the town of Garut had since Tuesday been emitting mudflows known as lahar into the Cibeureum river. Since this is NOT a mud volcano, the only answer I can imagine is that rain has fallen and brought ash down from the flanks, and that mixture has become mudflows. Anyone else have any suggestions?
At least 18 homes in two villages had been destroyed by the mudflows, and 5,000 people had fled their homes, compared to 2,000 on Monday, when the volcano belched smoke and lava. The refugees have taken shelter in a mosque, a sports stadium and a local provincial government office in Cisurupan. The mudflows covered around 43 hectares of rice fields in one village. Officials have set up four emergency medical posts to help the villagers.
© TEMPO/Arie Basuki;2002-11-14
November 13th, 2002, 07:00 GMT
There have been no reports of fatalities, but the lava has destroyed houses in a nearby village. The authorities have raised the alarm level, as they expect more lava is trying to get out of the volcano in the near future.
November 13th, 2002
'Papandayan volcano cools down, 3,000 people refuse to return' - that's the next newsline we read, and you get a bit confused after having heard that this was only a mudslide.

According to The Jakarta Post, Papandayan has now stopped spewing out lava, but 3,000 frightened villagers are still too scared to return home. On Tuesday afternoon only white smoke was emitted from one of its craters, but during Monday the number of residents who had fled their homes rose from 2,000 to 3,000. Five villages have been evacuated. Lava and ash have flowed down the volcano's flanks into the Cibeureum river.
November 12th, 2002, afternoon
It might have been a mudslide, mistaken for an eruption.
November 12th, 2002
A news article this morning says: 'Thousands Flee Volcano in Indonesia'. It is the Papandayan volcano that reportedly erupted yesterday, forcing about 1,000 people to flee their villages. Lava was spewed into the air around 3 PM local time. Inhabitants of two nearby villages have already moved to safer places, and the authorities are prepared to evacuate other cities.
It has now, Tuesday morning, been confirmed that there has been an eruption here, and we will be back later today with more news. However, there are no immediate reports of damage.
Latest news always above.

Older eruptions

Small eruptions, both of low VEI, also occurred in 1923 and 1942, with much less damage and far fewer casualties than the great VEI 3 eruption of 1772.
Collapse of the summit of Papandayan on August 8, 1772, accompanied by a brief, 5-minute-long explosive eruption, produced a debris avalanche that swept over lowland areas to the east, destroying 40 villages and killing 2,957 people. The farmland in the foreground is underlain by the deposits from this avalanche, which traveled 11 km from the volcano.
Photo: Tom Casadevall, 1986 (USGS)
Photo and text this page:
Kimberly, P., Siebert, L., Luhr, J.F., and Simkin, T. (1998). Volcanoes of Indonesia, v. 1.0 (CD-ROM). Smithsonian Institution, Global Volcanism Program, Digital Information Series.
| 3.111243 |
Amniotic fluid index is a way of measuring the amount of liquid that is around a baby (fetus) in the uterus during pregnancy. It is usually done as part of the biophysical profile (BPP), which is a series of tests that measure the health of the baby during pregnancy.

Amniotic fluid protects the fetus from temperature extremes and from being bumped or hurt as the mother moves around. It also allows the fetus to move around in the uterus and is important for lung and limb development. A problem with the amount of amniotic fluid could point to a problem with the growing fetus. Too much or too little fluid could also cause problems during labor and delivery.

Doctors use ultrasound to calculate the amniotic fluid index. The doctor looks at the amount of amniotic fluid in four different areas of the uterus, called quadrants. The doctor measures how much fluid is in each quadrant, then adds up the numbers to get an idea of the total amount of fluid that surrounds the baby.
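Because the index is simply the arithmetic sum of four ultrasound measurements, the calculation itself is easy to illustrate. The sketch below is illustrative only, not a clinical tool: the function names are invented here, and the 5 cm and 24 cm cutoffs are commonly cited reference values that vary by source and gestational age.

# Illustrative sketch only -- not a clinical tool.
# Assumes the common definition of AFI: the sum of the deepest
# vertical pocket of fluid (in cm) in each of the four quadrants.
# The 5 cm / 24 cm cutoffs are commonly cited reference values.

def amniotic_fluid_index(q1_cm: float, q2_cm: float,
                         q3_cm: float, q4_cm: float) -> float:
    """Sum the deepest vertical pocket measured in each quadrant."""
    return q1_cm + q2_cm + q3_cm + q4_cm

def interpret_afi(afi_cm: float) -> str:
    if afi_cm < 5.0:
        return "low (possible oligohydramnios)"
    if afi_cm > 24.0:
        return "high (possible polyhydramnios)"
    return "within the commonly cited normal range"

afi = amniotic_fluid_index(3.2, 4.1, 2.8, 3.9)  # hypothetical readings
print(f"AFI = {afi:.1f} cm -> {interpret_afi(afi)}")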
June 18, 2012
Sarah Marshall, MD - Family Medicine & William Gilbert, MD - Maternal and Fetal Medicine
| 3.424859 |
Attention Deficit Hyperactivity Disorder: Symptoms of ADHD
The symptoms of ADHD include inattention and/or hyperactivity and impulsivity. These are traits that most children display at some point or another. But to establish a diagnosis of ADHD, sometimes referred to as ADD, the symptoms should be inappropriate for the child's age.
Adults also can have ADHD; in fact, up to half of adults diagnosed with the disorder had it as children. When ADHD persists into adulthood, symptoms may vary. For instance, an adult may experience restlessness instead of hyperactivity. In addition, adults with ADHD often have problems with interpersonal relationships and employment.
| 3.278611 |
In the Star Wars: Where Science Meets Imagination exhibit, Luke Skywalker's Landspeeder is on display for the first time.
Courtesy of Landspeeder image © 2006 Lucasfilm Ltd. & TM Photo: Dom Miguel Photography
Star Wars Exhibition Brings Reality to Fantasy
News story originally written on April 16, 2008
A new museum exhibit shows that some of the robots, vehicles and devices from the Star Wars films are close to the types of things scientists have developed to use in space.
The exhibition--at the Science Museum of Minnesota in St. Paul, Minn., from June 13 until August 24--showcases landspeeders, R2D2 and other items from the Star Wars films. Visitors will learn how researchers today are pursuing similar technologies. The exhibit developers were surprised and excited to learn that many of today's scientists were inspired by the fantasy technologies they saw in the Star Wars movies. One of the goals of the exhibit is to inspire the kids who will be the next generation of scientists.
The exhibit contains film clips, props, models and costumes. Visitors are encouraged to participate in hands-on exhibits and activities.
| 3.175721 |
This photograph shows the stones of Stonehenge up close. Stonehenge is the most well known megalithic structure.
Image courtesy of Corel Photography
What are Megaliths?
Have you ever seen a megalith? Maybe you have and you just didn't know it!
A megalith is made of huge stones. They were put together by ancient people. Sometimes the stones look like a stone fort and sometimes they are just rocks that have been stood on end. Other megaliths look like big mounds of rocks, but those mounds have secret chambers inside of them!
Megaliths can be found all over. They are in Europe, Russia, the Americas, Africa, and Asia.
Some megaliths were road markers like our present-day road signs. Some were probably ancient sites of worship. Still others were graveyards. And others might have been astronomy observatories.
We do know that these structures must have been really important to the people who built them! Some of the megaliths are almost 7,000 years old. That means that 7,000 years ago people were moving stones of up to 180 tons in weight around. They did that without cranes or trucks or even simple horse-drawn carts!
The most famous megalithic site is Stonehenge in England. But there are other cool ones too. Take a look at Newgrange of Ireland, Balnuaran of Clava in Scotland, Pentre Ifan in Wales, the stones of Fossa in Italy and the Carnac stones of France.
| 3.105167 |
1516: Two Bavarian dukes issue a decree that limits the ingredients used in brewing beer to barley, water and hops.
Referred to today as the Reinheitsgebot (purity ordinance), the decree has come to be known as a beer-purity law that was intended to keep undesirable or unhealthy ingredients out of beer. But the original text doesn’t explicitly state the reasoning behind the regulation.
An English translation of the decree simply states, "We wish to emphasize that in future in all cities, markets and in the country, the only ingredients used for the brewing of beer must be barley, hops and water."
In fact, the main intent of the decree had more to do with bread than beer.
“The government simply didn’t want people using valuable grains for beer,” said historian Maureen Ogle, author of Ambitious Brew: The Story of American Beer. “I think it was really just an attempt to keep beermakers from infringing on the territory of people who made bread.”
Ensuring cheap bread was critical in times of food scarcity, a real problem for 16th-century Bavaria. While barley is not very digestible and consequently does not make for good eating, grains like wheat and rye are great for bread. The Bavarian leadership wanted to head off competition for those grains, in order to keep the price of food down.
An unintended side effect of the regulation may have been a purer brew, but Ogle suspects the idea that purity motivated the rule may have germinated after World War II, when Germany’s economy was struggling.
“After the war, they were looking for ways to bolster their economy, and one thing they could do is export beer,” Ogle said. “My educated guess is it’s directly connected to this drive.”
The beer-purity angle probably really took hold in the United States in the 1960s as craft brewing was becoming more popular, Ogle said. The Reinheitsgebot is still discussed on beer blogs today.
“If there had been no craft-beer movement, there wouldn’t be anybody sitting around talking about this law today,” she said.
The law was not applied beyond Bavaria until 1871, when the German Empire was formed, and the name Reinheitsgebot did not appear in print until 1918. According to the German Beer Institute, the law became an official part of the tax code in 1919 — despite the protestations of brewers — when Bavaria refused to join the Weimar Republic unless the law was enforced throughout the republic.
Until a European Court of Justice ruling in 1987, a version of the Reinheitsgebot was still part of the German tax code, with the addition of yeast (until Louis Pasteur came along, yeast’s role in fermentation wasn’t known), and the inclusion of ingredients that can also be used in other food, such as wheat.
Many German brewers still proudly claim to follow the Reinheitsgebot, and beers that do comply get special protections as a traditional food. It’s used as a marketing tool, and the beers have “Gebraut nach dem deutschen Reinheitsgebot” (brewed according to the German Purity Ordinance) on the label.
Today the penalty for not abiding by the Reinheitsgebot may only be the upturned noses of some American craft brewers. But in the 16th century, the consequences of brewing an offending beer were far more dire: They lost the beer.
“Whosoever knowingly disregards or transgresses upon this ordinance, shall be punished by the court authorities’ confiscating such barrels of beer, without fail.”
Image: No, these soldiers are not confiscating transgressive beer. The caption says, “Preparations for the Kaiser’s birthday celebration at the front. Beer delivered from home is unloaded.” On the left-hand inset, “Zensiert” means “censored.” Circa 1915-1918.
Courtesy New York Public Library
| 3.773398 |
A block diagram is a graphical method used to explain the concept of a system without the need to understand the individual components within that system. This type of diagram might be used in a variety of industries to illustrate and educate individuals about how a system operates, either in part or in its entirety. Block diagrams usually will have a logical, methodical flow from beginning to end. Engineers and software programmers are examples of individuals who might be familiar with block diagrams.
Block diagrams are essentially synonymous with flow charts, but a block diagram is generalized in nature. Sometimes block diagrams are used to conceal specific information or processes whose disclosure might prove advantageous or detrimental, whichever the case might be. People who are presented with a block diagram should be able to develop an understanding of what each block represents. To assist in understanding the block itself, lines should be drawn to the block representing its various inputs, outputs or alternative choices.
Depending on the type of process being illustrated, blocks might serve in any capacity that is needed to adequately describe the process or parts of the process. For instance, a manufacturing cell of machine tools might include a drill press, a milling machine and a sanding machine. To illustrate a process within that cell, each machine tool might be represented by its own block. When the manufacturing process is illustrated in its entirety, a single block might be used to represent all of the components within that cell.
A block diagram also can be used to illustrate how a computer program works, or how parts of a program work. If, for instance, a program is needed to calculate interest rates by four different methods, a block might represent the lines of code for each of those methods. In this way, a supervisor does not need to understand the code itself, as it is written, as long as the purpose of each block is communicated effectively.
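To make that example concrete, here is a minimal sketch (the function names and formulas are illustrative assumptions, not taken from any actual program) in which each interest-calculation method is a single function. In a block diagram, each function would be drawn as one block with the same inputs and one output, so a reviewer can follow the structure without reading the arithmetic inside.

import math

# Each function below would appear as one "block" in the diagram:
# same inputs (principal, annual rate, years), one output (balance).

def simple_interest(p: float, r: float, t: float) -> float:
    return p * (1 + r * t)

def compounded_yearly(p: float, r: float, t: float) -> float:
    return p * (1 + r) ** t

def compounded_monthly(p: float, r: float, t: float) -> float:
    return p * (1 + r / 12) ** (12 * t)

def continuous(p: float, r: float, t: float) -> float:
    return p * math.exp(r * t)

# The supervisor's view: four interchangeable blocks, one interface.
METHODS = {
    "simple": simple_interest,
    "yearly": compounded_yearly,
    "monthly": compounded_monthly,
    "continuous": continuous,
}

for name, block in METHODS.items():
    print(f"{name:>10}: {block(1000.0, 0.05, 10):.2f}")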
Some block diagrams can be used as a way to map out a process as a top-down diagram. For instance, a person who has an inspired project might use a block diagram as a way to convey the idea as a series of individual blocks, each of which helps support the main topic. Later, these individual blocks might then be analyzed and further developed into additional block diagrams as needed. This method can be repeated until the process is mapped out to the satisfaction of all those involved with the project. If compiled and mapped out completely, the block diagram might resemble a pine tree type of structure of the entire project, which is typical for a top-down diagram.
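A top-down decomposition like the one described maps naturally onto a tree data structure. The sketch below is a hypothetical example (the project and block names are invented) showing how each block can contain sub-blocks and how the whole thing prints as the pine-tree outline the text mentions.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    children: list[Block] = field(default_factory=list)

    def outline(self, depth: int = 0) -> None:
        # Indent each level to show the top-down hierarchy.
        print("  " * depth + self.name)
        for child in self.children:
            child.outline(depth + 1)

# A hypothetical project, decomposed block by block.
project = Block("Build garden shed", [
    Block("Design", [Block("Sketch floor plan"), Block("List materials")]),
    Block("Construction", [Block("Pour foundation"), Block("Frame walls")]),
    Block("Finishing", [Block("Paint"), Block("Install shelving")]),
])
project.outline()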
| 3.971423 |
5th Forum Top 25 Highlights - Thematic Programme
During the scores of discussions in over one hundred sessions, some overall messages emerged:
- Water is a common denominator for many development issues and the key to successfully resolving those challenges.
- Because of the interrelatedness of water issues across so many different sectors, progress can only be achieved through an interdisciplinary approach. There is a need to reinforce the preliminary linkages made at the 5th World Water Forum and continue to think out of the box.
- Education, capacity development and financial support are needed in virtually every domain to support further progress.
- Solutions must be sustainable and flexibly adapted to specific local or regional circumstances: no “one size fits all” approach can be applied to water management.
- Stakeholders need to be engaged through participatory processes in the earliest stages of water resource development strategies.
- The 5th World Water Forum enabled greater focus and synergies to move forward on today’s water-related challenges and to create more political will.
Climate Change, Disasters and Migration
While climate change, disasters and migration are distinct in scope and challenges, joint reflection on these issues at the 5th World Water Forum concluded that good adaptation measures implemented for climate change and disasters will, in fact, assist in mitigating migration. One billion slum dwellers worldwide demonstrate that unsolved rural problems lead to urban problems. Therefore, more work is needed to continue to dovetail efforts before crises arise, despite disparities among these domains. In addition, the message that water is the medium through which climate change acts and the work on “hotspots” and recommendations formulated at the 5th World Water Forum will be channeled into the UNFCCC CoP15 processes, as well as to other international processes.
Advancing Human Development and the MDG’s
Regardless of whether or not the MDG’s are achieved, after 2015, the remaining half of the population will still need to be served. At the 5th World Water Forum, the main impediments to reaching the MDGs were identified as a lack of effective management, investment, institutional capacity and political priority. One suggested instrument to ensure coverage for all school-aged children was the creation of a global convention to implement WASH in schools.
However, the necessity was also made clear to move away from increasing crisis management toward a process steered by more long-term development objectives, in which the challenges are recognized as all being interconnected. This will be especially important in harmonizing water use between energy production, food production and other uses, so that these needs complement each other rather than compete against one another. The fundamental baseline associated with all development and environmental challenges is that by 2050, the world’s population will rise to over 9 billion people, and all will need water and sanitation.
Managing and Protecting Water Resources
This theme perhaps offered the most fertile terrain for building bridges between polarized viewpoints: on transboundary issues, on storage issues, on infrastructure and environment, and between policy and implementation. Generally, it was agreed that river basin organizations offer a vehicle through which a range of partners can work together. In addition, a “Handbook on Integrated Water Resources Management in Basins” was presented, providing useful advice on how to improve governance of freshwater resources in basins. It was also recommended that IWRM needs to be practiced at different scales in order for it to be helpful in enabling Governments and all stakeholders to determine how to allocate water appropriately and which global solutions are most appropriate for any given situation. But most of all, these recommendations must lead to action.
Governance and Management
A wide majority of stakeholders reaffirmed support for the right to water and sanitation, already extensively recognized by many States, and supported further efforts for its implementation. In addition, a better understanding of the complementary roles of public and private sectors was achieved, recognizing that specific circumstances call for specific solutions. Moreover, 10 priority issues for catalyzing institutional change and policies were identified. In an effort to address corruption issues, participants called for the creation of an international tribunal to address violations and launched an appeal to incorporate anti-corruption safeguards into project designs. The need for public participation as an essential component of good governance was also emphasized.
Through a series of panels, sessions and side events throughout the week, financing issues received much greater attention than ever before from Forum participants. Despite recognition that financing needs for the water sector are still enormous and remain a major constraint for further development, the discussions enabled a much better understanding of the fundamentals of water economics. It was agreed that funds need to be allocated where they can have the biggest impact. By flexibly balancing “The 3 Ts: Tariffs, Taxes and Transfers”, the sector is consciously shifting its operational paradigm from “full cost recovery” to “sustainable cost recovery.” Although higher priority for water should still be requested in national budgets, increased efficiency and greater innovation can actually reduce financing needs.
Education, Knowledge and Capacity Development
With a view of strengthening science and education, participants called for:
- Enhancing knowledge and capacity development within the water sector;
- Improving data gathering, sharing and dissemination mechanisms;
- Promoting knowledge-based integrated approaches and informed decision making in water resources management;
- Actively engaging professional associations and all stakeholders.
To accomplish these objectives, guiding principles for education, knowledge and capacity development were drafted. Both Youth and network associations were recognized as powerful agents for change in this domain, especially in the advent of new technologies that will improve interconnectedness in future water management strategies. Partners also committed to improve the organization and availability of water-related data, building upon existing systems.
| 3.079896 |
(Centers for Disease Control and Prevention) - As a parent, you know how much you do to keep your little one safe and healthy. Even if you don't have kids, you still play an important role in protecting babies -- like your friend's newborn, your grandchild or even a baby you run into at the store. These children have something in common: they are vulnerable to whooping cough (or pertussis). We can all do something to protect them by getting immunized.
Whooping cough can take a toll on anyone, but it can be deadly for young children. Today, there are cases in every state, and the country is on track to have the most reported cases since 1959. From January through September 2012, about 30,000 cases of whooping cough were reported, along with 14 deaths. Most of those deaths were among babies younger than three months old. How can you protect yourself and help keep babies safe?
Protection can start before a baby is even born. Pregnant moms should get a Tdap vaccine, which is a booster shot combining protection against whooping cough, tetanus and diphtheria. She'll be protecting herself so she won't spread whooping cough to her newborn, and some protection will be passed on to her baby.
The Centers for Disease Control and Prevention (CDC) also recommends that anyone around babies get the whooping cough vaccine at least two weeks before coming into close contact with an infant. In fact, every adult is recommended one dose of Tdap to protect themselves, even if they're not going to be around babies. During pregnancy, moms should talk to others about getting the Tdap vaccine. This includes her spouse, grandparents, siblings, aunts, uncles, cousins, babysitters and day care staff.
After the baby arrives, he'll get his first dose of DTaP (the childhood vaccine combining protection against whooping cough, tetanus and diphtheria) at two months of age. He should complete the vaccine series by getting additional doses at 4 months, 6 months, 15 through 18 months and 4 through 6 years of age. Since the protection the DTaP vaccine provides young children decreases over time, preteens need the Tdap booster shot at 11 or 12 years old.
Now is the time to do your part to protect yourself -- and babies, too. Visit www.cdc.gov/pertussis for more information, and talk to a doctor about the whooping cough vaccine today.
| 3.067571 |
The case of the first toddler ever to be "functionally cured" of HIV could have wide-ranging effects on the global fight to end the AIDS epidemic.
"If we can replicate this in other infants ... this has huge implications for the burden of infection that's occurring globally," said Dr. Deborah Persaud, a pediatrician at the Johns Hopkins Children's Center. Persaud is the lead author of a report on the toddler's case that was presented at the 2013 Conference on Retroviruses and Opportunistic Infections in Atlanta on Monday.
"For the unfortunate ones who do get infected, if this can be replicated, this would offer real hope of clearing the virus."
Some 1,000 infants are born with HIV every day, according to the latest estimates from the UNAIDS Global Report. That means some 330,000 children are living with the deadly virus. The majority of these infections are in the developing world.
The most common way children get HIV is through perinatal transmission -- HIV transmission from an infected mother to a child while she is pregnant, giving birth or when she breast-feeds the child.
The number of infant infections in the United States has gone down some 90 percent since the mid-'90s, according to the Centers for Disease Control and Prevention; that's in large part because pregnant women are routinely tested. When a mother is identified as being HIV positive, her doctor is then able to administer preventive interventions that will, in most cases, keep the virus in check.
In developing countries, infants born to mothers with HIV are not so lucky. There, mothers are less likely to be treated with antiretroviral drugs that would prevent transmission during pregnancy. In North Africa and the Middle East, for instance, 3 percent of pregnant women with HIV received antiretroviral medications, according to the U.N. report. Some 23 percent in West and Central Africa did. Testing is also less sophisticated in these areas.
The unidentified Mississippi woman in this case had no prenatal care and was not diagnosed as HIV positive until just before she delivered the baby. That's why Dr. Hannah Gay, an associate professor of pediatrics at the University of Mississippi Medical Center, administered the drugs within 30 hours after the baby was born.
Typically, a baby born to a woman with HIV would be given two drugs as a prophylactic measure. Gay said her standard is to use a three-drug regimen to treat an infection. She did this on the Mississippi infant without waiting for test results to confirm the baby was infected with HIV.
Gay thinks the timing may be key, that the timing may deserve "more emphasis than the particular drugs or number of drugs used."
| 3.532501 |
In grammar terms, a participle is an adjective (descriptive word) made from a verb. (noun)
An example of a participle is "sleeping" in the phrase "sleeping dogs."
See participle in Webster's New World College Dictionary
Origin: OFr < L participium < particeps, participating, partaking < participare, participate: from participating in the nature of both v. & adj.
See participle in American Heritage Dictionary 4
Origin: Middle English, from Old French, variant of participe, from Latin participium (translation of Greek metokhē, sharing, partaking, participle), from particeps, particip-, partaker; see participate.

Usage Note: Participial phrases such as walking down the street or having finished her homework are commonly used in English to modify nouns or pronouns, but care must be taken in incorporating such phrases into sentences. Readers will ordinarily associate a participle with the noun, noun phrase, or pronoun adjacent to it, and misplacement may produce comic effects as in He watched his horse take a turn around the track carrying a racing sheet under his arm. A correctly placed participial phrase leaves no doubt about what is being modified: Sitting at her desk, Jane read the letter carefully. • Another pitfall in using participial phrases is illustrated in the following sentence: Turning the corner, the view was quite different. Grammarians would say that such a sentence contains a “dangling participle” because there is no noun or pronoun in the sentence that the participial phrase could logically modify. Moving the phrase will not solve the problem (as it would in the sentence about the horse with a racing sheet). To avoid distracting the reader, it would be better to recast the sentence as When we turned the corner, the view was quite different or Turning the corner, we had a different view. • A number of expressions originally derived from participles have become prepositions, and these may be used to introduce phrases that are not associated with the immediately adjacent noun phrase. Such expressions include concerning, considering, failing, granting, judging by, and speaking of. Thus one may write without fear of criticism Speaking of politics, the elections have been postponed or Considering the hour, it is surprising that he arrived at all.
| 3.483708 |
Farmworkers face special risks and health issues
Thanks to the agricultural workers that toil in the fields, millions of people are fed during the winter. But these farmworkers face different health issues and risks than the general population.
“They do a lot of repetitive work. There are a lot of back injuries,” noted Amanda Aguirre, CEO/president of the Regional Center for Border Health.

It might be a challenge to keep their feet dry. “The fields are wet, muddy. If someone is diabetic, it might be a problem,” she said.
The same conditions increase their risk of getting fungus under their nails.
They have to protect themselves from the insecticides applied to the crops.
They are experts at using a knife to cut lettuce and broccoli, but they're still at risk of injuries, especially in cold weather, when fingers might be numb.
Tuberculosis, which spreads easily, is a concern because farmworkers spend a lot of time in small, confined spaces with large groups of people, either living together or traveling to job sites on buses.
Other concerns include sexually transmitted diseases, chronic diseases such as diabetes, lack of preventive care and continuity of care since migrant workers — and their families — move a lot.
“They follow the crops. Today they're here and tomorrow in Salinas,” Aguirre noted.
These migrant farmworkers are the reason the Sunset Community Health Clinic exists. Sunset began receiving federal funding through a Public Health Service grant to the Yuma County Migrant Health Program in 1972, according to the center's website.
Services were provided out of a trailer in Somerton during those early days. Today the center serves 27,000, including 6,000 migrant workers.
Some agricultural groups and major companies provide health insurance for their temporary workers, who might be on their payroll for six months or so.
Some unions, particularly in California, also provide health insurance, thanks to the movement started by Cesar Chavez. This type of insurance extends into Mexico, where farmworkers and their families can access health-care services and have it paid by American insurance.
Aguirre noted that Pan American Underwriters and Western Growers both provide health insurance for farmworkers, often with coverage for services in Mexico.
This is how many farmworkers and their families like it. “Because we're on the border, a lot of workers cross into Mexico for service,” Aguirre pointed out.
The San Luis Clinic, which is run by the Regional Center for Border Health, a private, nonprofit organization, is designated as a rural health care provider. It also sees farmworkers, takes all insurance and has a sliding fee scale.
The clinic will not deny anyone services and will donate medical services should someone not have the means to pay for them.
Some farmworkers might also qualify for the Arizona Health Care Cost Containment System, depending on the individual's work and insurance status. Or if they're temporary workers and they qualify for unemployment benefits during the off season, they might also be enrolled in AHCCCS.
While farmworkers face special health issues, Aguirre believes agricultural companies are paying more attention to worker safety.
“Education and awareness are very important,” she said, noting that the focus must be on keeping families healthy, impressing upon them the importance of good nutrition and giving them access to primary health care services.
“They have a difficult skill, a skill they know very well. It's very unique,” Aguirre added. “Not many people want to do that, so we need to value them as part of our community and make sure to offer them services as well as we can.”
| 3.140823 |
The Western Wall
History of the Wall
The Western wall is the last remnant of the Holy Temple in Jerusalem, which was destroyed in 70 CE by Titus and the Roman legions. Interestingly, the Romans did not destroy the protective wall which surrounded the Temple. This last remnant of the Temple quickly became one of the most celebrated religious sites in Jerusalem. The Kotel ha-Ma'aravi (Western Wall) stands at the Temple Mount and stretches 200 meters. For many reasons the wall attracts visitors from around the world. It is one of the holiest sites in Jerusalem, and some believe that it was here that Abraham bound Isaac. Jewish people have been praying at the wall for two thousand years and travel from around the world to gather and pray at this holy site. Jews pray facing the wall three times daily, often in tears, which is why some refer to the wall as the "Wailing Wall". In fact, the wall has been a prayer site for Jews since the Middle Ages, and its sheer longevity is phenomenal. This leads many Jews to believe that the wall is holy and has been blessed by God.
Power and the Wall
Aside from the historical significance of the wall, another attraction stems from the perception of the wall as an apt metaphor for the struggle for survival of the Jewish people themselves. For just as the Jewish people's existence has been endangered over the course of history, the wall's existence has been threatened as well. The Romans burned the entire Temple but inexplicably left the outer wall unscathed. Remarkably, the wall still exists, and just as remarkably so do the Jews. This religious connection that many Jews feel with the wall arguably makes it the most powerful and emotional religious site in Jerusalem. The number of people who come to the wall to pray on a daily basis is amazing, and the emotion so commonly exhibited by these people leaves many observers in awe. The wall also holds an important position within ancient Jewish prophecies. One is that the Holy Temple will eventually be reconstructed, rebuilt in its original form, standing upon the Temple Mount surrounded by the wall. The second prophecy involves the arrival of the Messiah, for some Jews hold that only the true Messiah will have the power to bring about the Temple's reconstruction. Whether a person is Jewish or not, the power that the wall holds within the Jewish community is undeniable.
Gender and the Wall
Although the Wall is considered by Jews a holy place of worship, it does not allow all members of the Jewish community to pray in the same area. Orthodox Jews believe that women should not don prayer shawls or lead services and should be separated from men during prayer. So, Jewish women are expected to pray at a specific area of the wall without Torahs. Some women who have attempted to pray at the wall have even encountered violence from Orthodox men. Unfortunately, the space at the Wall is not entirely equal, and the Israeli government, fearing the power of the Orthodox within Jerusalem, will not protect a woman who wants to pray at any part of the wall. Instead, women must pray at a separate area of the wall, and if a woman feels inclined to pray elsewhere, there is a chance she would be met by men who use violence as a deterrent. The mob rule of the Western Wall is unfortunate but indicative of Orthodox gender practice in general, in which women are not permitted to read from the Torah, pray in the same area as men, or lead religious services. We may conclude that the separatism that occurs at the wall is not an isolated event but rather an extension of Orthodox Jewish practice concerning gender.
| 3.517983 |
Home Fire Safety for Family and Friends of Older Adults
When it comes to fire, adults over age 65 are at greater risk than any other group. Most fire deaths occur in the home and it is important for older adults to know how to protect themselves. If you have a relative, friend or neighbor in this high-risk group, please take a few minutes to complete this fire safety check of their home.
Conduct the following safety checks:
- Check that there are working smoke alarms on every level of the home and in sleeping areas.
- Make sure the older person can hear the alarm when it activates. If not, there are now a variety of smoke alarms on the market that combine sound and light to alert those with limited hearing that there is a fire in the home.
- Check that the smoke alarms have been tested. If not, test the smoke alarm by pressing the alarm test button. If it is difficult to reach, use a broom handle or ruler to test it.
- Check that the batteries have been changed within the past year. Batteries should be replaced each year. It is a good idea to mark the date on the batteries so that anyone will know when they were last replaced. Families are encouraged to change the batteries in the fall during "Change your Clock, Change Your Batteries" programs.
A chirping sound indicates a low battery, but this sound can be difficult for an older person to hear or recognize.
- Check for scorch marks on pots and pans. If you find scorch marks, discuss with the older person. He/she may be leaving cooking unattended.
- Check that clothing, bedding, furniture and floors are free of cigarette burns. If you find cigarette burns, discuss the situation with your older friend or relative.
- Do you know how to leave quickly if there is a fire?
- Check that the older person knows two ways out in case the main route is blocked by smoke or flames. Check that all doors and windows in the escape route can be easily opened.
- Do you have a neighbor who can help in an emergency?
- Is there a phone near your bed in case you need help?
- What would you do if the room filled with smoke?
- Demonstrate how to crawl low and go.
- For those living in apartments:
- Do you know the sound of the fire alarm and what to do if the alarm sounds? Find out correct procedures from building management.
- Do you ever leave cooking unattended?
- Most kitchen fires start because cooking food has been left unattended. Never leave items cooking unattended on the stovetop. It's best to turn off the stove before leaving the kitchen and to closely monitor things cooking in the oven. Use a timer or take an item such as a potholder as a reminder if there is something cooking in the oven.
- Do you know what to do if a pot on the stove catches fire?
- Keep a proper fitting lid nearby and safely slide it over the burning pot.
- If a grease fire starts:
- Using a potholder, place a lid over the burning pan.
- Turn off the heat source.
- Leave the lid on the pan until it is completely cool.
- Never use water on a grease fire.
- Don't try to carry a pan that is on fire to the sink or outdoors. Smother the fire with a lid and leave the pan where it is.
- Are combustibles or things that can easily ignite, such as dish towels or curtains near the stove?
- Keep anything that can easily catch fire away from the stove.
- Do you wear tight-fitting or rolled up sleeves when you use the stove?
- Dangling sleeves can easily brush against a hot burner and quickly catch fire.
- Are you careful not to reach over hot burners?
- Use the front burners as much as possible.
- Do you keep portable heaters at least 3 feet from any combustible materials, such as drapes, clothing or furniture?
- Space heaters can quickly warm up a cold room, but they have also been the cause of many serious home fires. Remind your friend or relative that portable heaters should be at least three feet from all combustible materials, including paper, bedding, furniture and curtains. Never use your heater to dry clothing or shoes and make sure that all heaters are turned off before leaving your home or going to bed.
- If you smoke, do you consider yourself a careful smoker?
- Smokers should use large, deep ashtrays and never smoke when drowsy or in bed.
- Where do you empty your ashtrays?
- Soak cigarette butts and cigar ashes in water before discarding or in a non-combustible can. Ashes from a cigarette can smolder for hours before a flare-up occurs.
- Are you cautious when your drink and smoke?
- Drinking alcohol while smoking is a deadly combination and account for many fire deaths.
Take a few minutes to learn about safety in the home and get family members involved! With a little planning, many injuries and deaths from fires and unintentional injuries could be prevented. We put safety where you put your family - FIRST. Please don't hesitate to call the Montgomery County Fire and Rescue Service if you need help with any aspect of your home safety.
| 3.063434 |
Japan has been hit by the worst crisis since 1945, as an earthquake and tsunami have killed 10,000, destroyed tens of thousands of buildings, displaced hundreds of thousands, and left millions without power or water. As the nation braces for more aftershocks, workers have resorted to using sea water in an attempt to prevent a nuclear meltdown from adding a third catastrophe; radiation has already leaked and forced a mass evacuation. According to Greenpeace,
"We are told by the nuclear industry that things like this cannot happen with modern reactors, yet Japan is in the middle of a nuclear crisis with potentially devastating consequences…The evolving situation at Fukushima remains far from clear, but what we do know is that contamination from the release of Cesium-137 poses a significant health risk to anyone exposed. Cesium-137 has been one if the isotopes causing the greatest health impacts following the Chernobyl disaster, because it can remain in the environment and food chain for 300 years.”
Whereas the first two catastrophes were natural and unpredictable, a nuclear meltdown is entirely unnatural and entirely predictable. According to the local anti-nuclear group, Citizens’ Nuclear Information Centre,
The nuclear crisis comes a month before the 25th anniversary of the Chernobyl disaster, the largest nuclear meltdown in history, which showered Europe in a radioactive cloud causing a quarter of a million cancers, 100,000 of them fatal. As of this writing the disaster in Japan is already the third worst in history, behind Chernobyl and the Three Mile Island partial meltdown in 1979, and comes only 12 years after a fatal overexposure of workers at a nuclear plant in Tokaimura, Japan. Even without the inherent risk of a meltdown, nuclear power is a threat to health. The problem is not just the few terrible times when they don't work, but the daily experience of when they do work. As climate campaigner George Monbiot wrote more than a decade ago,
“The children of women who have worked in nuclear installations, according to a study by the National Radiological Protection Board, are eleven times more likely to contract cancer than the children of workers in non-radioactive industries. You can tell how close to [the nuclear plant in] Sellafield children live by the amount of plutonium in their teeth.”
Add to this the morbidity and mortality of working in uranium mines and the dangers of disposing of radioactive waste, and you have negative health impacts at every stage of nuclear power (for a summary see the UK’s Campaign for Nuclear Disarmament). Despite this, governments have invested massively in the nuclear industry and globalized the risk. Canada has exported nuclear reactors while building seven of its own, and despite concerns about safety the Ontario government plans on investing $36 billion into nuclear power at the same time as it’s backing off wind power.
REASONS AND EXCUSES
While nuclear power is a clear and present danger to the health of the planet and its people, it is a thriving industry driven by economic and military competition. Vandana Shiva—who studied nuclear physics and now leads the climate justice movement in India—has exposed the hypocrisy of US hostility to Iranian nuclear power when it is doing the same thing to promote nuclear power and weapons in India as a bulwark against China:
As Shiva summarized in her book Soil Not Oil, “nuclear winter is not an alternative to global warming”, and it is a tragedy that Japan has become the test case against both military and civilian arms of the nuclear industry--from the atomic bomb 65 years ago to the nuclear meltdown today. But instead of admitting the problems of nuclear power, the nuclear industry and its supporters have greenwashed it and presented it as a solution to global warming. Some environmentalists, such as Gaia theorist James Lovelock, have fallen prey to these claims. Lovelock, whose ideas are driven by apocalyptic predictions and an extreme pessimism, has gone so far as to claim that “nuclear power is the only green solution”. While former US president George Bush defended his country’s 103 nuclear power plants as not producing "a single pound of air pollution or greenhouse gases”, Dr. Helen Caldicott has refuted the claim in her important book Nuclear Power is Not the Answer, which proves that even without meltdowns nuclear power is a threat to the planet:
The false dichotomy between carbon emissions and nuclear power is also refuted by those developing the Tar Sands, who have proposed using nuclear power to pump Tar Sands oil.
PEOPLE POWER, GREEN JOBS
Fortunately there are growing anti-nuclear campaigns uniting indigenous groups, NGOs and the broader climate justice movement to challenge nuclear power in all its stages—from mining to use to waste disposal. As Vandana Shiva writes in Soil Not Oil,
Meanwhile in Canada indigenous groups are leading opposition to transportation of nuclear waste through the Great Lakes and their surrounding communities, declaring “what we do to the land, we do to ourselves.” Last year the German government extended nuclear power against the will of the majority, but after news of the leak in Japan, 50,000 people formed a human chain from a nuclear reactor to Stuttgart demanding an end to nuclear power.
Uniting these campaigns with the labour movement raises the demand for good green jobs for all, to transform our oil and nuclear economy into one based on ecological and social sustainability and justice. Instead of the billions in subsidies for the nuclear industry, governments could be investing in solar, wind and clean electricity, while retrofitting buildings, which could solve the economic and climate crises without the inherent dangers of nuclear power. As Greenpeace wrote,
"Our thoughts continue to be with the Japanese people as they face the threat of a nuclear disaster, following already devastating earthquake and tsunami. The authorities must focus on keeping people safe, and avoiding any further releases of radioactivity...Greenpeace is calling for the phase out of existing reactors, and no construction of new commercial nuclear reactors. Governments should invest in renewable energy resources that are not only environmentally sound but also affordable and reliable.”
My question is how do we know whether to say Ittaqullah and not Attaqullah or Uttaqullah.
Assalamu Alaikum wa Rahmatullahi wa Barakatuh
Kindly explain the rules of how to read a word when one needs to separate it from the word before it. For example, I know that when we separate Ya Ayyuhallazina Amanuttaqullaha and want to say 'fear Allah' on its own, we say Ittaqullaha. My question is how do we know whether to say Ittaqullah and not Attaqullah or Uttaqullah. I remember learning the rule long ago, something to do with the sign (fathah, dhamma, kasra) on the next letter, but have since forgotten.
Please explain with examples from the Qur'an.
Jazak Allahi Khair
Wa alaikum assalaam wa rahmatullahi wa barakatuh,
The rules involved in the question are the hamzah al-wasl rules. We have to look at the word beginning with the hamzah al-wasl and determine whether it is a verb, a noun, or a particle. In this case, the word is a command form of a verb. In a verb starting with hamzah al-wasl, we look at the vowel on the third letter: if it is a fathah or a kasrah, then we start the hamzah al-wasl with a kasrah; if it is an original dammah, we start it with a dammah. In "ittaqullaha" the third letter carries a fathah, so the word begins with a kasrah: Ittaqullah.
To see the detailed lessons on hamzah al-wasl, please see the following link. You will find many examples in the eight different lessons insha' Allah:
Wa iyyaakum wa-l-muslimeen.
If the city feels hotter to you in the summer, you're right.
The Japan Meteorological Agency has proved that all the asphalt and tall buildings and exhaust heat are indeed to blame.
"Urban heat islands" raised the daily August temperatures by 1 to 2 degrees in Japan's three biggest megalopolises of Tokyo, Osaka and Nagoya, the JMA said July 9.
This is the first time the JMA has analyzed the effects of urban heat islands, where asphalt, buildings and heat from the exhaust of automobiles and air conditioners and other factors contribute to a rise in temperatures.
The JMA used data from last August to simulate air temperatures on the assumption that all ground surface was covered by grassland and that there was no exhaust heat from human activities in the three megalopolises, and compared the numerical outcomes with what was actually recorded.
The urban heat islands accounted for rises of about 2 degrees in the cities' central areas and about 1 degree on their outskirts, JMA officials said.
It is believed that air temperatures have risen about 3 degrees in the three big cities during the last 100 years due to both global warming and urban heat islands, but the JMA had never before evaluated to what extent the urban heat islands are responsible.
"Urbanization accounted for as much part of the temperature rises as global warming," a JMA representative said. "The situation is thought to be similar in other regions of advanced urbanization."
In a new study launched Wednesday by the International Energy Agency (IEA), weeks ahead of the Durban Climate Change talks, scientists warn that if the current trend to build high-carbon generating infrastructures continues, the world's carbon budget will be swallowed up by 2017, leaving the planet more vulnerable than ever to the effects of irreversible climate change.
According to the IEA's World Energy Outlook, today's energy choices are likely to commit the world to much higher emissions for the next few decades. Emissions from the existing industrial infrastructure already account for 80% of the world's "carbon budget".
The report estimates that global primary energy demand rebounded by a remarkable 5% in 2010, pushing CO2 emissions to a record 30.6 gigatonnes (Gt) in 2010. Subsidies that encourage wasteful consumption of fossil fuels jumped to over $400bn (£250.7bn).
The IEA warns of a "lock in" effect; whereby high-carbon infrastructures built today contribute to the old stock of emissions in the atmosphere, thus increasing the danger of runaway climate change.
According to the report, there are few signs to suggest that the urgently needed change in direction in global energy trends is under way.
As the world gears up towards the Durban talks later this month and Rio+20 in seven months, the UN Environment Programme (UNEP) plans to launch a new report on 23 November 2011 that reviews the latest data on the gap between commitments by nations to reduce their emissions and the actual emissions reductions required to keep global temperature rise under 2 degrees C. The report also tackles the question - How can the gap be bridged?
The new report is a follow up to "Bridging the Gap", which was launched last December and became a key benchmark for the international climate negotiations in Cancun.
UNEP will also launch on 25 November a study that will outline the measures and costs of reducing black carbon and non-CO2 gases to slow climate change. The new UNEP report outlines a package of 16 measures which could reduce global warming, avoid millions of premature deaths and reduce global crop yield losses by tackling black carbon, methane and ground-level ozone - substances known as short-lived climate forcers.
The report demonstrates that half of these measures can deliver net cost savings over their lifetime, for example, from reduced fuel consumption or the use of recovered gas.
Photography - Overview
The millions of photographs in the Museum's collections compose a vast mosaic of the nation's history. Photographs accompany most artifact collections. Thousands of images document engineering projects, for example, and more record the steel, petroleum, and railroad industries.
Some 150,000 images capture the history, art, and science of photography. Nineteenth-century photography, from its initial development by W. H. F. Talbot and Louis Daguerre, is especially well represented and includes cased images, paper photographs, and apparatus. Glass stereographs and news-service negatives by the Underwood & Underwood firm document life in America between the 1890s and the 1930s. The history of amateur photography and photojournalism are preserved here, along with the work of 20th-century masters such as Richard Avedon and Edward Weston. Thousands of cameras and other equipment represent the technical and business side of the field.
Yellow tangs, Zebrasoma flavescens, are reef fish found in the waters west of Hawaii and east of Japan in the Pacific Ocean. They mainly live off the coast of Hawaii, but are also found in the more western ranges of their habitat, including the Ryukyu, Mariana, Marshall, Marcus, and Wake islands. They prefer subtropical waters. (Agbayani, 2008; Waikïkï Aquarium, 1999)
Yellow tangs are reef-associated fish. Their preferred water temperature is around 21 degrees Celsius. They inhabit coral reefs in subtropical waters, but generally do not live in tropical seas. Yellow tangs mainly live in the sub-surge zone of a coral reef, the area with the least wave action. Zebrasoma flavescens live at depths of 2 to 46 meters. The clear larvae of yellow tangs develop as marine plankton; in this stage they are carried close to reefs, where they settle in coral crevices. (Agbayani, 2008; Ogawa and Brown, 2001; Reynolds and Casterlin, 1980; Waikïkï Aquarium, 1999)
Yellow tangs have a clear larval stage before developing into juveniles. Juveniles and adults have a narrow, oval body. They have an average length-weight ratio between 2.93 and 3.16. They have a long snout for eating algae, a large dorsal fin with four to five spines, and an anal fin with three spines. Like other surgeonfish and tangs (Acanthuridae), yellow tangs have a white, scalpel-like spine on both sides of the tail that can be used for defense or aggression. Yellow tangs are named for their bright yellow coloring; the only area that is not yellow is the white spine. At night, this bright yellow color changes to a darker, grayer yellow with a white lateral line. (Agbayani, 2008; Froese, 1998; Guiasu and Winterbottom, 1998; Waikïkï Aquarium, 1999; Wood, 2008)
Yellow tangs begin their lives as fertilized eggs floating in open water. After hatching, the clear, pelagic larvae develop in the plankton. They enter the acronurus larva stage where they develop an oval body, dorsal and ventral fins, and spines. After about ten weeks, they enter a planktonic stage. Here, waves carry them to a coral reef where they take refuge and continue to develop and grow. (Brough and Brough, 2008; Sale, et al., 1984; Waikïkï Aquarium, 1999; Wood, 2008)
Zebrasoma flavescens can spawn in groups or in pairs. When in groups, females release eggs and males release sperm into open water where fertilization occurs. When in pairs, the male courts a female by changing colors and exhibiting a shimmering movement. The two fish then swim upward and simultaneously release their eggs or sperm into the water. Males may spawn with multiple females in one session, while females typically spawn only once a month. (Brough and Brough, 2008; Waikïkï Aquarium, 1999; Wood, 2008)
Yellow tangs reproduce externally. Their spawning peaks from March to September, but some fish spawn at all times throughout the year. An average female can release about 40,000 eggs. (Agbayani, 2008; Detroit Zoological Society, 2008; Lobel, 1989)
There is no parental investment in yellow tangs beyond the fertilization of eggs.
Juvenile yellow tangs are often territorial. This trait usually diminishes as the fish mature and start to roam wider areas of the reef. Adult tangs live singly or in small, loose groups. These groups sometimes contain other species of fish, like sailfin tang (Zebrasoma veliferum). Yellow tangs are diurnal. During the day, tangs move from place to place, grazing on algae; at night, they generally rest alone in coral reef crevices. (Agbayani, 2008; Atkins, 1981; Brough and Brough, 2008; Wood, 2008)
When they are juveniles, yellow tangs have small home ranges that they defend, often staying within a few meters of one area. Not much is known about the home ranges of adult yellow tangs. (Parrish and Claisse, 2005)
When mating, males change colors and exhibit a shimmering movement to attract females. In defense or aggression, yellow tangs extend their fins to full length, greatly increasing their size. They also expose their scalpel-like scales on their fins as a warning sign. They use these not only to defend themselves from predators, but also to scare away competitors for food or territory. (Brough and Brough, 2008; Waikïkï Aquarium, 1999)
Yellow tangs have a long, down-turned mouth with small teeth that are specialized for grazing on algae. Because they are mainly herbivores, they spend a large amount of their time grazing either alone or in groups. A large portion of their diet consists of uncalcified and filamentous algae that grows on coral reefs. In addition to smaller types of algae, yellow tangs feed on macroalgae, such as seaweed. Yellow tangs will also eat some types of zooplankton. (Guiasu and Winterbottom, 1998; Waikïkï Aquarium, 1999; Wylie and Paul, 1988)
Predators of Zebrasoma flavescens include larger fish and predatory invertebrates such as crabs and octopi. Yellow tangs rely on camouflage and their scalpel-like fins to protect themselves. To humans, these fish appear bright yellow, but, to other fish, yellow tangs blend in very well with coral reef backgrounds. According to Marshall et al. (2003), wavelength differences between yellow and average reef color become negligible at the depths where yellow tangs are found. In addition to camouflage, Zebrasoma flavescens use their scalpel-like fins for defense. (Barry and Hawryshyn, 1999; Detroit Zoological Society, 2008; Marshall, et al., 2003; Waikïkï Aquarium, 1999)
Yellow tangs, along with other algae feeders, are crucial parts of coral reef ecosystems. They feed on algae and seaweed that grow on the reefs, preventing them from overgrowing and killing corals. Yellow tangs are also a food source for larger fish and invertebrates. (Detroit Zoological Society, 2008; Waikïkï Aquarium, 1999)
Yellow tangs are important for tourism and the aquarium trade. Their bright yellow color is well recognized by scuba divers and other tourists on Hawaiian reefs. They are also a valuable resource in the aquarium trade; they are the number one collected fish for export out of Hawaii. Their coloring, hardiness, and low cost all contribute to their popularity in marine aquariums, making them one of the ten most popular fish. (Brough and Brough, 2008; Ogawa and Brown, 2001; Waikïkï Aquarium, 1999)
Yellow tangs, along with other surgeonfish (Acanthuridae), are not generally dangerous. When they are young, they possess venom glands. As they age into juveniles and adults, they lose these glands. If yellow tangs are provoked, they can inflict deep injuries with the sharp blades on their tails. (Agbayani, 2008; Waikïkï Aquarium, 1999)
Zebrasoma flavescens is not a threatened or endangered species.
Tanya Dewey (editor), Animal Diversity Web.
Kara Zabetakis (author), University of Maryland, Baltimore County, Kevin Omland (editor, instructor), University of Maryland, Baltimore County.
body of water between the southern ocean (above 60 degrees south latitude), Australia, Asia, and the western hemisphere. This is the world's largest ocean, covering about 28% of the world's surface.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
uses smells or other chemicals to communicate
having markings, coloration, shapes, or other features that cause an animal to be camouflaged in its natural environment; being difficult to see or otherwise detect.
humans benefit economically by promoting tourism that focuses on the appreciation of natural areas or animals. Ecotourism implies that there are existing programs that profit from the appreciation of natural areas or animals.
animals which must use heat acquired from the environment and behavioral adaptations to regulate body temperature
fertilization takes place outside the female's body
union of egg and spermatozoan
An animal that eats mainly plants or parts of plants.
having a body temperature that fluctuates with that of the immediate environment; having no mechanism or a poorly developed mechanism for regulating internal body temperature.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
seaweed. Algae that are large and photosynthetic.
A large change in the shape or structure of an animal that happens as the animal grows. In insects, "incomplete metamorphosis" is when young animals are similar to adults and change gradually into the adult form, and "complete metamorphosis" is when there is a profound change between larval and adult forms. Butterflies have complete metamorphosis, grasshoppers have incomplete metamorphosis.
having the capacity to move from one place to another.
specialized for swimming
the area in which the animal is naturally found, the region in which it is endemic.
reproduction in which eggs are released by the female; development of offspring occurs outside the mother's body.
the business of buying and selling animals for people to keep in their homes as pets.
the kind of polygamy in which a female pairs with several males, each of which also pairs with several different females.
structure produced by the calcium carbonate skeletons of coral polyps (Class Anthozoa). Coral reefs are found in warm, shallow oceans with low nutrient availability. They form the basis for rich communities of other invertebrates, plants, fish, and protists. The polyps live only on the reef surface. Because they depend on symbiotic photosynthetic algae, zooxanthellae, they cannot live where light does not penetrate.
mainly lives in oceans, seas, or other bodies of salt water.
breeding is confined to a particular season
remains in the same area
reproduction that includes combining the genetic contribution of two individuals, a male and a female
associates with others of its species; forms social groups.
uses touch to communicate
defends an area within the home range, occupied by a single animals or group of animals of the same species and held through overt defense, display, or advertisement
the region of the earth that surrounds the equator, from 23.5 degrees north to 23.5 degrees south.
an animal which has an organ capable of injecting a poisonous substance into a wound (for example, scorpions, jellyfish, and rattlesnakes).
uses sight to communicate
breeding takes place throughout the year
animal constituent of plankton; mainly small crustaceans and fish larvae. (Compare to phytoplankton.)
Agbayani, E. 2008. "Zebrasoma flavescens" (On-line). FishBase. Accessed April 08, 2008 at http://www.fishbase.org/summary/Speciessummary.php?id=6018.
Atkins, P. 1981. Behavioral determinants of the nocturnal spacing pattern of the yellow tang Zebrasoma flavescens (Acanthuridae). Pacific Science, 35: 263-264.
Barry, K., C. Hawryshyn. 1999. Effects of incident light and background conditions on potential conspicuousness of Hawaiian coral reef fish. Journal of the Marine Biological Association of the United Kingdom, 79: 495-508.
Brough, D., C. Brough. 2008. "Animal-World" (On-line). Accessed April 08, 2008 at http://animal-world.com/encyclo/marine/tangs/yellow.php.
Detroit Zoological Society, 2008. "Detroit Zoo" (On-line). Accessed April 09, 2008 at http://www.detroitzoo.org/zoo/index.php?option=content&task=view&id=562&Itemid=610.
Dodds, K. 2007. "Reef Resources" (On-line). Accessed May 03, 2008 at http://www.reefresources.net/RR_profiles/viewtopic.php?t=153.
Froese, R. 1998. Length-weight relationships for 18 less-studied fish species. Journal of Applied Ichthyology, 14: 117-118.
Guiasu, R., R. Winterbottom. 1998. Yellow juvenile color pattern, diet switching and the phylogeny of the surgeonfish genus Zebrasoma. Bulletin of Marine Science, 63: 277-294.
Lobel, P. 1989. Ocean current variability and the spawning season of Hawaiian reef fishes. Environmental Biology of Fishes, 24: 161-171.
Marshall, N., K. Jennings, W. McFarland, E. Loew, G. Losey. 2003. Visual Biology of Hawaiian Coral Reef Fishes. BioOne, 3: 467-480.
Ogawa, T., C. Brown. 2001. Ornamental reef fish aquaculture and collection in Hawaii. Aquarium Sciences and Conservation, 3: 151-169.
Parrish, J., J. Claisse. 2005. "University of Hawaii, Department of Zoology" (On-line pdf). Post-settlement Life History of Key Coral Reef Fishes in a Hawaiian Marine Protected Area Network. Accessed May 03, 2008 at http://www.hawaii.edu/ssri/hcri/files/res/parrish_c_noaa_final_2004.pdf.
Reynolds, W., M. Casterlin. 1980. Thermoregulatory behavior of a tropical reef fish, Zebrasoma flavescens. OIKOS, 34: 356-358.
Sale, P., W. Douglas, P. Doherty. 1984. Choice of Microhabitats by Coral Reef Fishes at Settlement. Coral Reefs, 3: 91-99.
Waikïkï Aquarium, 1999. "Marine Life Profile: Yellow Tang" (On-line pdf). Waikïkï Aquarium Educational Department. Accessed April 07, 2008 at http://www.waquarium.org/MLP/root/pdf/MarineLife/Vertebrates/YellowTang.pdf.
Wood, A. 2008. "Animal Life Resource" (On-line). Accessed April 09, 2008 at http://animals.jrank.org/pages/2212/Surgeonfishes-Relatives-Acanthuroidei-YELLOW-TANG-Zebrasoma-flavescens-SPECIES-ACCOUNTS.html.
Wylie, C., V. Paul. 1988. Feeding preferences of the surgeonfish Zebrasoma flavescens in relation to chemical defenses of tropical algae. Marine Ecology, 45: 23-32.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2012 June 23
Explanation: As seen from Frösön island in northern Sweden the Sun did set a day after the summer solstice. From that location below the arctic circle it settled slowly behind the northern horizon. During the sunset's final minute, this remarkable sequence of 7 images follows the distorted edge of the solar disk as it just disappears against a distant tree line, capturing both a green and blue flash. Not a myth even in a land of runes, the colorful but elusive glints are caused by atmospheric refraction enhanced by long, low, sight lines and strong atmospheric temperature gradients.
Yes, it's Protect Your Groundwater Day!
Here is some information from the National Ground Water Association:
Everyone can and should do something to protect groundwater. Why? We all have a stake in maintaining its quality and quantity.
- For starters, 95 percent of all available freshwater comes from aquifers underground. Being a good steward of groundwater just makes sense.
- Not only that, most surface water bodies are connected to groundwater so how you impact groundwater matters.
- Furthermore, many public water systems draw all or part of their supply from groundwater, so protecting the resource protects the public water supply and impacts treatment costs.
- If you own a well to provide water for your family, farm, or business, groundwater protection is doubly important. As a well owner, you are the manager of your own water system. Protecting groundwater will help reduce risks to your water supply.
Groundwater protection
There are two fundamental categories of groundwater protection:
- Keeping it safe from contamination
- Using it wisely by not wasting it.
Before examining what you can do to protect groundwater, however, you should know that sometimes the quality and safety of groundwater is affected by substances that occur naturally in the environment.
"When the well is dry, we learn the worth of water." -- attributed to several, including Ben Franklin
New ideas and information have emerged recently that hold great promise for enhancing the impact of current prevention efforts.
"Prevention is the best treatment" is an oft-cited maxim, and one that certainly applies to drug abuse. Anyone who can be influenced to avoid abusing drugs is spared their harmful health and social effects, including increased risk for lethal infections, family disruption and job loss, confusion and despair, the difficult struggle of treatment, and -- for many -- the ravages of addiction and the ordeal of climbing back after relapse. From society's point of view, drug abuse prevention helps keep a tremendous burden -- related to disease and premature death, lost capacity for productive work, and crime -- from being even worse.
The bulk of current interventions to prevent drug abuse fall mainly into two groups. One set is designed to reduce risk factors associated with higher likelihood of drug abuse and increase protective factors associated with lower likelihood of drug abuse. When implemented in conformity with proven prevention principles (see "Risk and Protective Factors in Drug Abuse Prevention"), this strategy, the product of more than a decade of research and clinical experience, is effective and inclusive enough to apply to most populations. Moreover, researchers continue to learn more about how risk and protective factors relate, and practitioners are ever more adept at applying this knowledge. These efforts will continue to yield incrementally -- perhaps even dramatically -- higher impact interventions well into the future.
Nonetheless, there are limitations to the risk-and-protective-factors strategy. One feature that ultimately limits its impact, for example, is the nature of the factors themselves. They tend to be fundamental or deeply entrenched characteristics or experiences of a person, family, or community. Some are hidden, such as sexual victimization; others are prominent in society, such as adolescent depression or ready access to drugs of abuse. As a result, traditional risk factors generally can be modified only by relatively broad and long-term interventions. Certain factors may not be susceptible to modification, such as a genetic predisposition to risk-taking. In addition, for the most part, traditional risk factors pertain to an individual's vulnerability to drug abuse, rather than the actual choice to use drugs. As important as it is to lower vulnerability, on a given day, even someone with a relatively low vulnerability may opt to use drugs.
The second important group of preventive interventions complements and extends the risk-and-protective-factors strategy by focusing on the dynamic of situations, beliefs, motives, reasoning and reactions that enter into the choice to abuse or not to abuse drugs. Important applications of this strategy include normative education to refute the common belief that "everyone takes drugs," and equipping young people with the skills to refuse drug offers without feeling they are losing face. This strategy is full of untapped promise, and today likely offers the best prospects for rapid development of more effective prevention. A few of the many issues whose elucidation may yield improved interventions include why even very young children tend to expect positive experiences from drugs; how individuals' styles for processing language and visual images affect drug-taking decisions; the roles of curiosity and impulsivity in such decisions; and what logical processes people typically follow when deciding to use or not use drugs.
A recent dramatic finding in neurobiological research may greatly increase our understanding of adolescent decisionmaking and our ability to help adolescents choose wisely regarding drug abuse. Scientists have long suspected that the adolescent brain is still developing physically, and researchers have now demonstrated that new neural tissue and connections continue to form throughout the transitional years between childhood and adulthood. Further investigation of this growth process undoubtedly will yield important insights relevant to some of the cognitive issues affecting the appeal of drugs and drug-taking decisions. The impact on drug abuse prevention could be tremendous, especially in light of the fact that adolescence often is a critical period for initiation of drug abuse. Most chronic drug abusers start experimenting with intoxication in adolescence or young adulthood. While populations are constantly changing -- and while prescription drug abuse by older individuals today is a serious and mounting concern -- it remains generally true that people who do not abuse drugs during the decisive years before age 25 are unlikely ever to develop a serious drug problem.
It remains generally true that people who do not abuse drugs during the decisive years before age 25 are unlikely ever to develop a serious drug problem.
A tighter focus on decisionmaking regarding drug abuse should enable us to progress in a vitally important area: preventing escalation from early, experimental drug use to regular use, abuse, and addiction. We know that fewer than 10 percent of people who experiment with drugs become dependent or addicted. We also know that some of the factors that influence whether a person will become dependent or addicted are independent of the factors that influence whether he or she will initiate drug abuse. For example, research has suggested that, perhaps because of their particular brain chemistry, some individuals dislike the agitation cocaine can produce more than they like the euphoria it brings -- and so discontinue use after their initial experimentation. Interventions based upon such factors may curtail drug abuse before it reaches critical severity and thereby forestall most of its truly tragic health and social consequences.
NIDA's prevention agenda is to aggressively pursue research on risk and protective factors while also seeking to identify, develop, and integrate new science-based approaches into existing prevention programs. To accomplish these goals, NIDA recently launched the three-part Drug Abuse Prevention Research Initiative. (See "NIDA Conference Reviews Advances in Prevention Science, Announces New National Research Initiative.") Basic researchers will mine new neurobiological and other fundamental research discoveries for prevention applications. Basic, clinical, and applied researchers and practitioners will work together in Transdisciplinary Prevention Research Centers to synthesize knowledge from all the relevant scientific fields into powerful new prevention packages. Researchers and State and local practitioners will collaborate in Community Multisite Prevention Trials to rapidly assess proposed new prevention approaches and interventions in diverse communities and populations.
Exciting moments in science occur when the gradual accumulation of knowledge suddenly gives rise to new perspectives with the promise of new solutions to problems of living. In the area of drug abuse prevention, this is such a moment, and NIDA is moving swiftly to take full advantage of its potential.
Researchers at Stanford University have brought about a unique marriage between computer chips and living cells that could greatly accelerate everything from tests for new drugs to screening for diseases such as leukemia.
The basic living element in every organism is the cell. Humans have at least 100 trillion of them, all designed to carry out various bodily functions. Each cell is a complex bag of enzymes and chemicals surrounded by a spherical membrane, and how a cell reproduces and works with other cells determines how efficiently the organism performs.
Scientists have struggled for years to understand the cell and especially the membrane that seems to control most of the crucial functions, but they have been hampered by the difficulty of growing cells in a laboratory culture for research.
A decade ago, Stanford researchers developed artificial membranes that were so like the real thing that living cells could be tricked into attaching themselves to them.
Two years ago, graduate student Jay T. Groves learned something interesting when he pulled a pair of tweezers through one of the artificial membranes. The parts separated permanently. He also found that he could manipulate the different parts by applying an electrical current.
About that time, Nick Ulman, an electrical engineer, joined Groves' research group, headed by chemistry professor Steven G. Boxer. Ulman brought with him an understanding of microelectronics.
The researchers discovered that the electric field generated by a tiny microchip could be used to separate the artificial membrane into tiny squares, which they called "corrals." The corrals were so small that millions occupied an area no bigger than a fingernail.
That gave the researchers something they had never had before: a means of isolating, cataloging and manipulating millions of cell membranes simultaneously.
"It's a little bit like having a parking lot with assigned spaces," Boxer says. If you leave your car in an unmarked parking lot at the airport, he says, you may be lucky to find it again. But if each parking space has a number, it becomes much simpler.
"Ultimately, you can say I've got a Ferrari in spot 2A, and I've got a Volkswagen over in space 3D," Boxer says.
Similarly, living cells can be "tricked" into attaching themselves to individual membranes. That is done by modifying the surface of each membrane.
"If you wax a car, after you wax it, water beads up on the surface," Boxer says. "Before waxing, the water just runs off. That's an example of modifying the properties of the surface such that water associates differently with the surface."
One potential use is for cell screening for leukemia patients. Some of the membranes on the chip could be "seeded" with proteins that bind to different kinds of cells. By flooding a glass plate embedded with chips with blood from the patient, the cells would attach themselves to designated areas, thus revealing how many cells of different types are present, and possibly even how well they are performing.
Joseph A. Zasadzinski, professor of chemical engineering at UC Santa Barbara, who has analyzed the Stanford research, sees many potential applications.
It could pave the way for a pharmaceutical researcher to "try 50 million different things" at the same time, Zasadzinski says.
"Right now, you grow cells in culture and you see which ones die, and that's very slow. Here you can imagine 20,000 little plates in a square inch, and each one of them you can tweak a slightly different way."
It could greatly increase the rate of testing for new drugs for viruses, he says, because the experiments could be repeated millions of times in a tightly controlled and manipulable environment.
Others see it leading to a test for AIDS in which thousands of blood tests could be conducted in the time it now takes to do just one.
The heart of the system is the computer chip.
"That's where the ultimate power of this comes in," Boxer says. "The same technology that's used to make integrated circuits, computer chips, is also being used to design a biocompatible surface."
That has led to a bit of unwelcome fallout, he adds. The researchers are constantly asked if they are on the road to the ultimate marriage between computers and living cells--the bionic man.
Boxer flinches at the suggestion. This is such a tiny step, he says, that it's ludicrous to think of it in those terms. Still, some see this as one more step toward creating computer-based biological systems.
"If you can optimize this, you can get a nice bio-sensor out of it," Zasadzinski says. "But it would be hard to imagine any sort of bionic man for an awful long time."
He says he is more worried about "that sheep in England."
"That's scary," he says.
Lee Dye can be reached via e-mail at [email protected]
Do not imagine the Mona Lisa with a mustache! If you failed to carry out this instruction, it is because your power of visualization is so strong that it takes any suggestion, positive or negative, and turns it into an image. And as the maestro emphasized, “the thing imagined moves the sense.” If you think you cannot visualize, chances are you answered such questions easily by drawing on your internal image data bank, the occipital lobe of your cerebral cortex. This data bank has the potential, in coordination with your frontal lobes, to store and create more images, both real and imaginary, than the entire world’s film and television production companies combined.
From How to think like Leonardo da Vinci, by Michael J. Gelb, published by Delacorte Press, 1998.
Recently, as shoreline landowners in Lubec, my wife and I received a letter from Acadian Seaplants, providing their answers to persistent questions about the harvest of rockweed in Cobscook Bay and hoping for our support of their efforts when they resume cutting in 2010. Because I am a long-time environmental scientist, I did what comes naturally: I looked for scientific information about rockweed and its role in coastal ecosystems.
Rockweed is one of many “seaweeds” that sustain coastal marine ecosystems throughout the world. The growth of rockweed provides the food and shelter for invertebrates, which are the basis of the food web that sustains higher-level fisheries of coastal Maine. Rockweed also provides critical nursery habitat for young fish that are attractive to predators. Seaweeds are threatened in many areas, disappearing at a rate of 7 percent per year worldwide, according to a recent study published in the Proceedings of the National Academy of Sciences.
One need not look far to see the abandoned sardine factories and codfish fisheries of New England, signs of overexploitation of the marine environment. Good management of marine resources should balance catch against annual production, so that the economic livelihood of the sea is sustained for generations.
Unfortunately, cod and sardines were overharvested in many areas, and those fisheries are now gone.
Now that we have removed these higher-level fish and some invertebrates, such as sea urchins, the rockweed cutters are focused on the base of the food chain itself. Will we allow rockweed to be overharvested as well, so that it declines and disappears from Cobscook Bay and the surrounding waters? Protecting rockweed is important if we are to sustain periwinkle and mussel fisheries, which provide local jobs.
So, as an aid to the local economy, we registered our shoreline with Downeast Coastal Conservancy as a “no-harvest” area.
Acadian Seaplants vows that it will not remove more than 17 percent of the annual growth of rockweed — a level deemed sustainable in studies by R.L. Vadas of the biology department at the University of Maine. To ensure recovery, Dr. Vadas also advises not to harvest any area in consecutive years. The other major player, North American Kelp of Waldoboro, also plans to adhere to a maximum 17 percent harvest, after subtracting biomass that cannot be harvested from protected lands.
We can only hope that all those who harvest rockweed will comply with the new state regulations passed in June 2009, codifying a maximum 17 percent harvest, because no amount of tax will bring back the rockweed if it disappears from the coast of Maine.
The best policy would probably be to prohibit the harvest of rockweed, since it sustains the basis of the fish and shellfish industries in this region. Even so, it is unclear who has monitored and enforced the past levels of harvest and how accurately any of these parameters can be measured.
The Marine Resources Committee will soon hold a workshop to evaluate the success of the regulation of rockweed cutting during the 2009 season. Currently, a surcharge on rockweed harvest is earmarked to help pay for monitoring by the commissioner of marine resources. Indeed, the current legislation requires that the proper harvest of rockweed must be verified by a third party starting with the 2010 harvest season. Done well, these provisions would provide welcome oversight of the rockweed resource and protect its critical role in the coastal marine ecosystem.
Alternatively, the passing of rockweed would mark the end of the marine fisheries ecosystem — top to bottom — within the span of a couple of decades in eastern, coastal Maine.
William H. Schlesinger is president of the Cary Institute of Ecosystem Studies in Millbrook, N.Y. and a shoreline property owner in Lubec.
If bones from dead deer could talk, they would reveal compelling stories. In late March 1989, snowmobilers reported several dead and frozen deer in a Canada Falls deer yard near historic Pittston Farm, 50 miles northwest of Greenville. As the state’s regional wildlife biologist at the time, I investigated the report and found 10 dead deer.
Curious to know if the deer had died from coyote predation or other natural causes, I broke open the femur (thigh) bones with a hatchet to examine the bone marrow. Bone marrow of healthy deer resembles the color and consistency of thick hand cream. During late winter, as deer exhaust all remaining energy reserves, their bone marrow turns the color and consistency of red jelly. Lethargic deer with swollen faces and visible rib cages are signs of very poor health.
Red jelly-like bone marrow in eight of 10 deer indicated severe malnutrition. A ninth deer died of a fractured hip caused by bullet fragments. Coyotes had eaten the deer based on tracks and scat, but did they kill the deer or merely perform euthanasia on animals weakened by inadequate food and shelter?
The question is more relevant today than it was in 1989 because the quality and quantity of deer yards has plummeted since then. Winter is the bottleneck of deer survival in northern Maine. Poor-quality deer-wintering habitat further compromises their survival. Coyotes kill healthy deer, but emaciated ones are easy prey in late winter.
Purchasing deer yards outright or protecting them with conservation easements — with binding timber harvest regulations — are the only viable long-term solutions to resolving Maine’s deer woes. Many hunters claim that designated deer yards without deer are solely attributable to coyote predation. During the winter of 1989, timber company foresters brought to my attention six designated deer yards without deer in the Moosehead Lake region.
Investigations revealed that three of these deer yards had been mistakenly mapped and were subsequently dropped from state regulation. The other three deer wintering areas lacked deer because timber companies had clear-cut around them, creating an island of spruce and fir forest surrounded by three- to four-foot deep snowfields. With no access to neighboring forests, deer abandoned the yards rather than remain vulnerable to coyote predation.
One winter evening, a Great Northern Paper Company forester surprised me by knocking on my house door. We had spent the day together with his boss, but in the woods he could not talk confidentially with me. He said the deer yard we had surveyed earlier that day supported winter deer for as long as he could remember. The forester explained that against his wishes, his boss instructed him to supervise a summer harvesting crew that cut all the timber around the deer yard.
He then did something unprecedented and bold. From long cardboard tubes, he extracted forest cover maps showing closely guarded future forestry operations. He proceeded to identify deer yards unknown to the state. He did this because he wanted those areas protected from being clear-cut. He said, “I like hunting deer, and we can work cooperatively with the state to harvest trees and have deer, too. These are not mutually exclusive goals.”
The Chub Pond deer yard in Hobbstown, south of Jackman, supported deer each winter from the 1940s until the early 1980s when Scott Paper Company clear-cut healthy trees abutting the deer yard, claiming it was a spruce budworm salvage cut. According to a Scott forester, also a deer hunter, deer abandoned the yard the following winter.
The demise of deer yards is the reason why deer are struggling in northern, western and eastern Maine. Lawmakers, under increasing pressure to “do something about declining deer numbers,” are considering extending coyote trapping seasons to help the struggling deer population.
However, passing LD 372 and other coyote control bills is as misguided as placing a Band-Aid on a compound fracture. The Legislature can enact infinite coyote-control bills, but the deer herd will not recover until the Legislature makes deer-yard protection a priority.
Ron Joseph, of Camden, is a retired Maine wildlife biologist and deer hunter.
Infographics have become a common means of presenting information to people in an easy-to-understand visual format. But this ubiquity doesn’t mean that an infographic always means what a viewer might presume it to mean at first glance. Consider the following map of the United States.
Recently, the National Oceanic and Atmospheric Administration released this, and a number of other maps, showing what the summer climate has been doing recently. Unsurprisingly, the media quickly picked up on the dramatic story. During an interview with Secretary of Agriculture Tom Vilsack, Kai Ryssdal, senior editor of public radio’s Marketplace program, made the following observation, “This has been, as you know, the hottest summer on record in a lot places in this country.” On NPR’s news blog, “The Two-Way,” their piece on the heat starts out with “The all-red map tells the story.”
But does it?
If you look at the same data on the regional map, you see plenty of the nation sweated through a warmer than normal July. But the heat wasn’t record-setting in any single given region. The “118” that shows up on the national map is nowhere to be found on the regional one.
The statewide map is different again. We can see even more of the variation in the average temperature for July. And we see that across Virginia, it was hot – the “118” re-appears, although not so hot as to push the entire southeast region of the NOAA map into the red. In fact the mountain west seems to have a higher average temperature. We can also see that the “Near Normal” and “Above Normal” temperatures dominate states in New England, the Gulf Coast, the Southwest and the West Coast. People in these areas may have been surprised to learn that the summer had thus far been unusually warm.
And when you boil the data all the way down to the divisional level, the red spreads out to scattered parts of the country, but we learn that parts of Washington (like the Puget Sound area), Oregon, California, Texas and Louisiana had below normal average temperatures during July. (Don’t worry; we got ours over the first few weeks of this month.)
The contiguous states are divided into a total of 344 divisions. In all, 17 of these divisions, spread out over 12 of the 48 states, experienced record high average temperatures last month. While that means that a lot of people were looking for ways to stay cool, especially when you consider that Chicagoland is in one of the record-setting districts, many parts of the country, while warmer than normal, avoided pushing into new territory.
The culprit is, of course, averaging. Both spatially and temporally.
As an example, I’ve created a simple chart that measures a fictitious “Salamander Index” over a span of 15 years. The area being measured is divided into five separate regions – and the orange line on the chart represents the average value of all of the regions for that point in time. By year 15, the average is at record levels, yet, as you can see, only the East Region is in record territory; all of the other regions had scored higher on the index than that in the past – in some cases significantly so. In fact, although it isn’t immediately evident from the chart, the South Region (the violet line), which spends much of its time above the overall average, is at slightly below its average level, as across all 15 years, the South Region scores an average of about 31.8.
So it’s important to remember that while infographics, especially simple ones, make data easily digestible, they don’t always provide as accurate a picture as it might seem at first glance.
(Thanks to Aaron for the guest post!)
Denali, in an evaluation version, has been out for a little while now. I even have a copy on my computer and I’m working with one of our analysts to learn the ins and outs - fun stuff! According to Microsoft, though, the official release will hit shelves April 1st. I’m not quite sure why Microsoft chose to release Denali on a day that’s well known for Google hi-jinx (Google Paper anyone?) but I’m sure they have their reasons.
Happy Holidays from Piraeus!
We had a busy holiday season here, but not so busy that we forgot how to have fun. We hosted a Mad Men themed potluck, complete with cocktails. It was great to see everyone dressed to the nines. Thanks for playing along!
Here at Piraeus we aren’t starting to deck the halls before Thanksgiving, but a co-worker did send this interesting tree-related infographic my way and I thought I’d share it.
Just last night I had a rather lengthy conversation about the old-growth trees along the Olympic Peninsula. According to the Peninsula Daily News (from 2/2011), the Olympic National Park sees around 3,000,000 visitors annually, though there’s debate about the traffic counter’s accuracy. Are you one of the 3 million?
I wonder if anyone has tried to see all of these old trees. I suspect there’s a dendrologist somewhere with it on their bucket list.
Happy Halloween from the Piraeus Data crew!
There were more costumes too, and some brilliant karaoke by even our newest crew members. Fun was had by all!
It’s on like Donkey Kong!
I know, I know, old news. This happened in Paris already.
But you know what? Post-it wars are fun. I saw an angry bird up in the window across the street and then noticed the space invader a few floors up and Pac Man a few doors over. We just had to join in.
Turns out it looks like the folks over at Health Solutions Network started with Pac Man and Runic Games did the space invaders. PlacePlay followed suit, setting off the folks across from us, and now there’s a smile face above them.
I hope this keeps going.
Thanks, GeekWire, for getting this rolling with your article on the original Pine Street Space Invader attack.
Most people I know have a wordpress or blogger site at this point. My family even has one going that includes emergency contacts, updates on health and new family photos from weekend excursions to keep those of us away from the East Coast in the loop. The templates are easy to use and you don’t have to be a programmer to add a slide show or link back to an article. Remember an earlier day, one that included GeoCities? I have to say I’d completely forgotten about GeoCities, but then I stumbled across this visualization this morning, via Mashable. I like the idea of “The Deleted City” being an enormous virtual city that never really existed, but sort of did. (Video via Mashable)
I remember the Math Olympiad and the Science Olympiad in high school, but for some reason I never thought of the practice continuing out of the classroom. Since school is supposed to prepare you for the real-world, I don’t know why I was surprised to come across this project from Charlotte, North Carolina.
Welcome to the Business Intelligence Olympiad! Every two years, starting in 2008, the city of Charlotte pits business unit teams from different city departments against each other to address a fictional problem with analytics and data sets.
I’m sure the competition leads to plenty of laughter and good-hearted competition, but according to the article I read, it’s also created an environment where “a lot of information that previously had not be [sic] shared is now shared regularly between BI analysts throughout the city.” The competition in 2010 included a fictitious hurricane that bore a striking resemblance to Irene.
“The underlying benefit was with Hurricane Irene coming up the coast almost on the same track as the theoretical Hurricane Vixen from December, teams were more used to looking at contingencies and how they affected portions of their business,” [manager of data administration for the city of Charlotte] Raper said.
As far as I know there isn’t something comparable in Seattle, but maybe there should be. Is it time for a BI Olympiad with data from the viaduct?
For more about Charlotte’s Olympiad, see the original article from govtech.com here.
The teams here at Piraeus put in a lot of hard work, so it’s nice that we can also hang out together. We had our summer picnic on Friday and our fearless office manager managed to pick just about the nicest day all summer. The food was great, we had balloons and music, a little bit of ladder golf and a little bit of swimming. Most importantly, some really great people!
Oh yeah, and water balloons. I forgot how much fun water balloons are! I’ll update the slideshow as I get more photos.
MRA is a study of the blood vessels using magnetic resonance imaging (MRI). Using a large magnet, radio waves, and a computer, an MRA makes two-dimensional and three-dimensional pictures.
Reasons for Test
This test is done in order to:
- Identify diseased, narrowed, enlarged, and blocked blood vessels
- Locate internal bleeding
MRIs can be harmful if you have metal inside your body such as joint replacements or a pacemaker. Make sure your doctor knows of any internal metal before the test. Some people may also have an allergic reaction to the contrast dye. Talk to your doctor about any allergies you have. Also, let your doctor know if you have liver or kidney problems. These may make it difficult for your body to get rid of the contrast.
What to Expect
Prior to Test
Before the test, your doctor may:
- Ask about your medical history
- Perform a physical exam
- Do blood tests
If your doctor prescribes a sedative:
- Arrange for a ride home.
- Do not eat or drink for at least four hours before the exam.
- Take the sedative 1-2 hours before the exam, or as directed.
At the MRI center, you will be asked if you have something in your body that would interfere with the MRA, such as:
- Pacemaker or implantable defibrillator
- Ear implant
- Metal fragments in your eyes or in any other part of your body
- Implanted port device, such as an insulin pump
- Metal plate, pins, screws, or surgical staples
- Metal clips from aneurysm repair
- Retained bullets
- Any other large metal objects in your body
You may be:
- Given earplugs or headphones to wear. The MRI machine makes a loud banging noise.
- Given an injection of a contrast dye into your vein.
- Allowed to have a family member or friend with you during the test.
Description of the Test
If contrast is used, a small IV needle will be inserted into your hand or arm before you are moved into the MRI machine. The contrast will be injected during one set of images. It helps to make some organs and vessels easier to see on the pictures. You might have an allergic reaction to the dye, but this is rare.
You will lie on a special table. This table will be moved inside the opening of the MRI machine. Most MRIs consist of 2-6 sets of images. Each one will take between 2-15 minutes. You will need to lie still while the images are being taken. You may need to hold your breath briefly. Technicians will communicate with you through an intercom from another room.
After Test
- You will be asked to wait at the facility while the images are examined. The technician may need more images.
- If you took a sedative, do not drive or operate machinery until it wears off.
- If you are breastfeeding and receive contrast dye, you and your doctor should discuss when you should restart breastfeeding. Available studies have not found any ill effects to the baby when a breastfeeding mother has had contrast dye.
- Be sure to follow your doctor's instructions.
How Long Will It Take?
Will It Hurt?
The test is painless. If contrast is used, you may experience a stinging sensation when the IV is inserted.
Your doctor will discuss the findings with you and any treatment you may need.
Call Your Doctor
Call your doctor if any of the following occur:
- New or worsening symptoms
- Allergic or abnormal symptoms if contrast material was used
If you think you have an emergency, call for medical help right away.
- Reviewer: Michael J. Fucci, DO; Brian Randall, MD
- Review Date: 05/2013
- Update Date: 05/20/2013
Consider the following in Haskell:
let p x = x ++ show x in putStrLn $ p"let p x = x ++ show x in putStrLn $ p"
Evaluate this expression in an interactive Haskell session and it prints itself out. But there's a nice little cheat that made this easy: the Haskell 'show' function conveniently wraps a string in quotation marks. So we simply have two copies of one piece of code: one without quotes followed by one in quotes. In C, on the other hand, there is a bit of a gotcha. You need to explicitly write code to print those extra quotation marks. And of course, just like in Haskell, this code needs to appear twice, once out of quotes and once in. But the version in quotes needs the quotation marks to be 'escaped' using backslash so it's not actually the same as the first version. And that means we can't use exactly the same method as with Haskell. The standard workaround is not to represent the quotation marks directly in the strings, but instead to use the ASCII code for this character and use C's convenient %c mechanism to print it. For example:
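(The listing itself seems to have dropped out of the text at this point. A standard C quine built on exactly this trick is reconstructed below; it is not necessarily the author's original example. Here 10 and 34 are the ASCII codes for the newline and the quotation mark.)

#include <stdio.h>
int main(){char*s="#include <stdio.h>%cint main(){char*s=%c%s%c;printf(s,10,34,s,34,10);return 0;}%c";printf(s,10,34,s,34,10);return 0;}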
Again we were lucky, C provides this great %c mechanism. What do you need in a language to be sure you can write a self-replicator?
It turns out there is a very general approach to writing self-replicators that's described in Vicious Circles. What follows is essentially from there except that I've simplified the proofs by reducing generality.
We'll use capital letters to represent programs; these are 'inert' strings of characters. I'll use square brackets to indicate the function that the program evaluates. So if P is a program to compute the mathematical function p, we write [P](x) = p(x): P is a program and [P] is a function. We'll consider both programs that take arguments, like the P just mentioned, and also programs, R, that take no arguments, so that [R] is simply the output or return value of the program R.
Now we come to an important operation. We've defined [P](x) to be the result of running P with input x. Now we define P(x) to be the program P modified so that it no longer takes an argument or input but instead substitutes the 'hard-coded' value of x. In other words, [P(x)] = [P](x). P(x) is, of course, another program, and there are many ways of implementing it. We could evaluate [P](x) and write a program that simply prints this out or returns it. On the other hand, we could do the absolute minimum and write a new piece of code that simply calls P and supplies it with a hard-coded argument. Whichever we choose is irrelevant to the following discussion. So here's the demand that we make of our programming language: that it's powerful enough for us to write a program that can compute P(x) from inputs P and x. This might not be a trivial program to write, but it's not conceptually hard either. It doesn't have gotchas like the quotation mark issue above. Typically we can compute P(x) by some kind of textual substitution on P.
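To make this concrete, here is a sketch in Haskell (my own illustration, assuming programs are represented as their source strings; the name hardcode is hypothetical) of computing P(x) by textual substitution:

-- hardcode p x builds the program P(x): the expression p applied to the
-- string literal x, so the argument is baked in and [P(x)] = [P](x).
-- Any other substitution scheme would do just as well.
hardcode :: String -> String -> String
hardcode p x = "(" ++ p ++ ") " ++ show x

For example, hardcode "\\s -> s ++ s" "ab" produces the argument-free program (\s -> s ++ s) "ab", which evaluates to "abab", exactly what [P]("ab") gives.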
With that assumption in mind, here's a theorem: any program P that takes one argument or input has a fixed point, X, in the sense that running P with input X gives the same result as just running X. Given the input X, P acts just like an interpreter for the programming language: it outputs the same thing as an interpreter would when given input X.
So here's a proof:
Define the function f(Q) = [P](Q(Q)). We've assumed that we can write a program that computes P(x) from P and x so we know we can write a program to compute Q(Q) for any Q. We can then feed this as an input to [P]. So f is obviously computable by some program which we call Q0. So [Q0](Q) = [P](Q(Q)).
Now the fun starts:
[P](Q0(Q0)) = [Q0](Q0) (by definition of Q0)
= [Q0(Q0)] (by definition of P(x))
In other words Q0(Q0) is our fixed point.
So now take P to compute the identity function. Then [Q0(Q0)] = [P](Q0(Q0)) = Q0(Q0). So Q0(Q0) outputs itself when run! What's more, this also tells us how to do other fun stuff like write a program to print itself out backwards. And it tells us how to do this in any reasonably powerful programming language. We don't need to worry about having to work around problems like 'escaping' quotation marks - we can always find a way to replicate the escape mechanism too.
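As an illustration (my own example, derived by the same construction rather than taken from the source): choose [P] to be the string-reversing function instead of the identity, and the recipe hands you

let p x = reverse (x ++ show x) in putStrLn $ p"let p x = reverse (x ++ show x) in putStrLn $ p"

which, evaluated in an interactive Haskell session, prints its own source backwards, character by character.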
So does it work in practice? Well it does for Haskell - I derived the Haskell fragment above by applying this theorem directly, and then simplifying a bit. For C++, however, it might give you a piece of code that is longer than you want. In fact, you can go one step further and write a program that automatically generates a self-replicator. Check out Samuel Moelius's kpp. It is a preprocessor that converts an ordinary C++ program into one that can access its own source code by including the code to generate its own source within it.
Another example of an application of these methods is Futamura's theorem which states that there exists a program that can take as input an interpreter for a language and output a compiler. I personally think this is a little bogus.
Principles of Design
Balance in design is similar to balance in physics. A large shape close to the center can be balanced by a small shape close to the edge. A large light-toned shape can be balanced by a small dark-toned shape (the darker the shape, the heavier it appears to be).
Gradation of size and direction produces linear perspective. Gradation of colour from warm to cool and of tone from dark to light produces aerial perspective. Gradation can add interest and movement to a shape. A gradation from dark to light will cause the eye to move along a shape.
Repetition with variation is interesting, without variation repetition can become monotonous. If you wish to create interest, any repeating element should include a degree of variation.
Contrast is the juxtaposition of opposing elements, e.g. opposite colours on the colour wheel (red/green, blue/orange), contrast in tone or value (light/dark), or contrast in direction (horizontal/vertical).
The major contrast in a painting should be located at the center of interest. Too much contrast scattered throughout a painting can destroy unity and make a work difficult to look at. Unless a feeling of chaos and confusion is what you are seeking, it is a good idea to carefully consider where to place your areas of maximum contrast.
Harmony in painting is the visually satisfying effect of combining similar, related elements, e.g. adjacent colours on the colour wheel, similar shapes, etc.
Dominance gives a painting interest, counteracting confusion and monotony. Dominance can be applied to one or more of the elements to give emphasis.
Relating the design elements to the idea being expressed in a painting reinforces the principle of unity. E.g. a painting with an active, aggressive subject would work better with a dominant oblique direction, coarse rough texture, and angular lines, whereas a quiet, passive subject would benefit from horizontal lines, soft texture and less tonal contrast.
Unity in a painting also refers to the visual linking of various elements of the work.
What is Fluorescence?
Fluorescence is the ability of certain chemicals to give off visible light after absorbing radiation which is not normally visible, such as ultraviolet light. This property has led to a variety of uses. Let’s shed some further light on this topic; consider the omnipresent “fluorescent” lights. Just how do they work? Fluorescent tubes contain a small amount of mercury vapor. The application of an electric current causes a stream of electrons to traverse the tube. These collide with the mercury atoms which become energized and consequently emit ultraviolet light. The inside of the tube is coated with a fluorescent material, such as calcium chlorophosphate, which converts the invisible ultraviolet light into visible light. The same idea is used to produce color television pictures. The screen is coated with tiny dots of substances which fluoresce in different colours when they are excited by a beam of electrons which is used to scan the picture.
But fluorescent materials had practical uses even before we dreamed of color television. One of the most amazing of all fluorescent materials is a synthetic compound, appropriately called fluorescein. Under ultraviolet light it produces an intense yellow-green fluorescence which during World War II was responsible for saving the lives of many downed flyers. Over a million pounds of the stuff were manufactured in 1943 and distributed to airmen in little packets to use as a sea marker. Since the fluorescence is so potent that it can be seen when the concentration of fluorescein is as little as 25 parts per billion, rescue planes easily spotted the men in the ocean. Aircraft carriers also made extensive use of fluorescein. The signal men on deck wore clothes and waved flags treated with the compound, which was then made to glow by illumination with ultraviolet light. The incoming pilots could clearly see the deck, eliminating the need for runway lights that would have drawn the attention of enemy aircraft. Certain natural substances also fluoresce under ultraviolet light; urine and moose fur are interesting examples. Prisoners have actually made use of this property of urine and have used it as a secret ink. What about the moose fur? Well, in Canada and Sweden there are hundreds of accidents each year involving collisions of automobiles with moose, some of which result in fatalities. Some car manufacturers are now considering fitting their vehicles with UV-emitting headlights to reduce moose collisions! How's that for putting the right chemistry to work?
February 3, 2010
Few things in my life have brought me as much joy as watching sea otters play in the waters near Monterey, Calif. So when I heard this week that the frisky yet endangered critters may be slightly expanding their habitat, I figured everyone would think that was good news.
Once hunted into near-extinction for their fur, the southern, or California, sea otter (Enhydra lutris nereis) now numbers around 2,600 to 2,700 animals, all of which live in a fairly small habitat range off the central California coast. The problem is that their extant habitat is the only place the U.S. Endangered Species Act (ESA) grants them protected status. (Although they are also protected under California state law and the U.S. Marine Mammal Protection Act, those laws do not govern habitat.) Everything south of their current habitat is designated a "no-otter zone".
The origins of this restriction shouldn’t surprise anyone. When otters were first listed as a threatened species under the ESA, they were protected everywhere, according to Allison Ford, executive director of The Otter Project in Monterey, Calif. But in order to protect species, the ESA requires the creation of a recovery plan. In this case the U.S. Fish and Wildlife Service wanted to try to move some otters to a new habitat. This "experimental population" would protect the southern otter from extinction in a catastrophic event, such as a major oil spill. But in order to create a new habitat for the otters, the government also created a no-otter zone, an area where the animals would not be able to impact the fishing or oil industries.
Unfortunately, "the experimental population never thrived," Ford says. But the otter-free zone remains.
And now some otters are swimming past that imaginary line in the surf in search of sea urchins and other tasty marine life in the forbidden zone. Fishermen are not happy with the encroachment. "Based on historic action, we think eventually they’ll wipe out the shellfish industry in California," Vern Goehring, executive director of the California Sea Urchin Commission, told the Associated Press.
So why are sea otters swimming into verboten territory? "Food supply is always an impediment to otter survival and expansion," Ford says. "Scientists believe that food limitation is an issue in certain parts of the otter’s range."
Ford says that large, bachelor otters "tend to go back and forth over the no-otter line. Certainly, abundant prey that otters like to eat exists in the no-otter zone," and because humans tend to like the same foods, that creates conflict.
"Otters eat voraciously," Ford says. "They have a strong appetite, and eat 25 percent of their body weight every day." Otters do not have blubber, and use their fur and their high metabolisms to keep warm.
Otters can impact fisheries and the industry’s ability to operate at the same productivity levels it is used to, Ford says, adding: "Sea urchin is where the big conflict is." But she points out that the very reason there is a sea urchin industry is because otters no longer exist in their historic habitats. "Otters are a keystone species, and they maintain sea urchins, which in turn eat kelp. When otters were removed from the ecosystem, you lost the kelp, which hurt total biodiversity." Restoring sea otters in other areas of California, Ford says, could actually increase biodiversity and create additional fishing markets.
No matter what happens, the sea otter expansion won’t be anything that happens overnight. Populations have dipped slightly the past two years, and only a few dozen otters make their way regularly into the sans otter zone. But for now, that’s enough to get some people worried—and angry.
Image: Sea otter, via Wikipedia