Storms spoil NASA satellite launch attempt
The Radiation Belt Storm Probes prepped for launch. / CBS News
(AP) CAPE CANAVERAL, Fla. - Thunderstorms have ruined NASA's second attempt to launch a pair of science satellites.
For the second day in a row, NASA had to halt the countdown for its Radiation Belt Storm Probes.
Lightning and thick storm clouds prevented the unmanned rocket from taking off early Saturday from Cape Canaveral. On Friday, a tracking beacon on the rocket held up the flight.
NASA says it will try again Sunday.
The twin satellites are designed to study Earth's harsh radiation belts. Scientists say the two-year mission will improve space forecasting. The goal is to better guard against solar storms.
Spacecraft can be damaged, and astronauts hurt, from severe solar outbursts. Life here on the planet also can be disrupted.
| fwe2-CC-MAIN-2013-20-41935000 |
15 November 2011
Genetic mutations that cause schizophrenia could be linked to systems in the brain responsible for learning and memory, a study suggests.
Researchers from Cardiff University and the University of Edinburgh have identified changes to genes – genetic mutations – in patients with schizophrenia who had not inherited the condition.
The study, published in the journal Molecular Psychiatry, showed that these mutations occurred among a set of proteins that play a key role in memory function.
The scientists took samples of DNA from more than 650 patients with schizophrenia and compared these with DNA from their parents – who did not have the condition – to identify the genetic differences.
Professor Michael Owen of Cardiff University, who led the research with colleague Professor Michael O’Donovan, said: "By studying such a large sample we have been able to provide the first clear insights into the sorts of basic biological processes that underlie schizophrenia.
"We hope that by identifying these mutations our findings will help us understand more clearly how schizophrenia arises and ultimately identify new targets for treatments."
The task of identifying what causes schizophrenia is difficult because the disorder does not occur as a result of a single genetic mutation, but reflects a large number of different risk genes.
Professor Seth Grant, of the University of Edinburgh, whose laboratory previously discovered dozens of proteins linked to learning and memory, said: "Although it has been known for some time that DNA mutations predispose individuals to the development of schizophrenia, it has remained a puzzle as to how these genes cause behavioural problems.
"The surprising finding was that DNA mutations that cause schizophrenia are interfering with the same proteins in the molecular machinery that controls learning and memory. The findings will help research into new drug therapies and in developing new diagnostic tests."
The genetic mutations disrupt the production of proteins found at synapses, which are the connections between different brain cells. The proteins are normally assembled together and process information that is passed from the environment to the memory systems in the brain. Disrupting the fundamental information processing systems in synapses results in behavioural disorders.
Professor Michael O’Donovan from Cardiff University added: "The main importance of the finding is that the new mutations were not randomly occurring in genes, instead they were concentrated in a relatively small number of genes which are crucial to the way nerve cells communicate with each other at junctions called synapses."
The study was funded by the Medical Research Council, the Wellcome Trust and the European Union.
Professor George Kirov also from Cardiff University and the study’s first author, said: "We already know that genetic factors increase the risk of schizophrenia, as well as non-genetic factors. However, we assumed that because schizophrenia sufferers are less likely than average to have children, genes with quite large effects on risk will be removed from the population by the process of natural selection.
"If this is true, this loss of disease genes must be compensated for by new mutations or the disease would no longer exist."
Rare genetic mutations that occurred either prior to or at fertilisation – de novo mutations – were found to occur among patients with schizophrenia.
Schizophrenia is a severe disorder affecting approximately one per cent of the population. Signs can be present from childhood, but usually the disorder is diagnosed in early teens and has an impact on adult life.
Notes to Editor
A copy of 'De novo CNV analysis implicates specific abnormalities of postsynaptic signalling complexes in the pathogenesis of schizophrenia', published online ahead of print in Molecular Psychiatry (doi:10.1038/mp.2011.154), is available on request.
| fwe2-CC-MAIN-2013-20-41943000 |
Suppose the daily benefit you get from spending another hour studying microeconomics is described by B(t) = 12t - t^2.
a) What amount of studying time per day maximizes your benefits?
b) Derive the marginal benefits, MB(t)
c) Suppose that an alternative use of your time is to play video games that yields a constant benefit of 4 per hour, no matter how many hours you spend playing. How much time should you spend studying microeconomics? | fwe2-CC-MAIN-2013-20-41949000 |
On a quiet cove in Southern Maryland, a series of orange and white markers declares a stretch of water off limits to fishing. Under the surface sits spawning habitat for largemouth bass, a fish that contributes millions of dollars to the region’s economy each year and for whom two such sanctuaries have been established in the state. Here, the fish are protected from recreational anglers each spring and studied by scientists hoping to learn more about them and their habitat needs.
The largemouth bass can be found across the watershed and is considered one of the most popular sport fishes in the United States. While regional populations are strong, a changing Chesapeake Bay—think rising water temperatures, disappearing grasses and the continued arrival of invasive species—is changing bass habitat and could have an effect on future fish.
For decades, scientists with the Maryland Department of Natural Resources (DNR) have collected data on the distribution of largemouth bass, tracking the species and monitoring the state’s two sanctuaries in order to gather the knowledge needed to keep the fishery sustainable. Established in 2010 on the Chicamuxen and Nanjemoy creeks, both of which flow into the Potomac River, these sanctuaries have been fortified with plastic pipes meant to serve as spawning structures. And, it seems, these sanctuaries are in high demand during spawning season.
On an overcast day in April, three members of the DNR Tidal Bass Survey team—Joseph Love, Tim Groves and Branson Williams—are surveying the sanctuary in Chicamuxen Creek. Groves flips a switch and the vessel starts to send electrical currents into the water, stunning fish for capture by the scientists on board. The previous day, the team caught, tagged and released 20 bass; this morning, the men catch 19, none of which were tagged the day before.
“This [lack of recaptures] indicates that we have quite a few bass out here,” said Love, Tidal Bass Manager.
Indeed, the state’s largemouth bass fishery “is pretty doggone good,” Love continued. “That said, we recognize that the ecosystem is changing. And I don’t think anybody wants to rest on the laurels of a great fishery.”
As Love and his team learn how largemouth bass are using the state’s sanctuaries, they can work to improve the sanctuaries’ function and move to protect them and similar habitats from further development or disturbance.
“We can speculate where the best coves are, but this is the ground truthing that we need to do,” Love said.
In the fall, the team will return to the cove to count juvenile bass and report on juvenile-to-adult population ratios. While the assessment of the state’s sanctuaries is a small-scale project, it is one “aimed at the bigger picture,” Love said.
Love’s team is “doing what we can to improve the use of these coves by bass.” And protecting bass habitat and improving water quality will have a positive effect on the coves overall, creating healthier systems for neighboring plants and animals.
“By protecting these important areas, we are also protecting the larger ecosystem,” Love said.
Photos by Jenna Valente. To view more, visit our Flickr set.
Cover crops, streamside trees and nutrient management plans: all are exceptional ways to reduce nutrient pollution in the Chesapeake Bay. And for father and son duo Elwood and Hunter Williams, restoring the Bay begins with conservation practices and a shift in mentality.
“We knew coming down the road that we needed to do a better job with keeping the water clean,” Hunter said. “We decided that if there was going to be a problem with the streams it wasn’t going to be us.”
Excess nutrients come from many places, including wastewater treatment plants, agricultural runoff and polluted air. When nitrogen and phosphorus reach waterways, they can fuel the growth of large algae blooms that negatively affect the health of the Bay. In order to reduce these impacts, the U.S. Environmental Protection Agency (EPA) has implemented a Bay “pollution diet,” known as the Total Maximum Daily Load (TMDL).
Since the passing of the TMDL, many farmers in the watershed have felt the added pressure of the cleanup on their shoulders, but for the Williams family, having the foresight to implement best management practices (BMPs) just seemed like the environmentally and fiscally responsible thing to do.
"We don't want to get to a point where regulations are completely out of control," Hunter explained. "Farmers know what they're putting on the ground so we have the ability to control it. Most people who have yards don't have a clue what they're putting on the ground when they use fertilizer. The difference has to be made up by the farmers because we know exactly what is going on to our soil."
The Williams family began implementing BMPs on Misty Mountain Farm in 2006 by teaming up with the Potomac Valley Conservation District (PVCD). The government-funded non-profit organization has been providing assistance to farmers and working to preserve West Virginia’s natural resources since 1943.
The PVCD operates the Agricultural Enhancement Program (AgEP), which has steadily gained popularity among chicken farmers and livestock owners located in the West Virginia panhandle and Potomac Valley. While these two districts make up just 14 percent of West Virginia’s land mass, these regions are where many of the Bay’s tributaries begin—so it is important for area landowners to be conscious of pollutants entering rivers and streams.
AgEP is designed to provide financial aid and advice to farmers in areas that the Farm Bill does not cover. PVCD is run in a grassroots fashion, as employees collaborate with local farmers to pinpoint and meet their specific needs.
“It [AgEP] has been very well received,” said Carla Hardy, Watershed Program Coordinator with the PVCD. “It’s not [the U.S. Department of Agriculture] dictating how we spend our money, it’s the local, state and individuals saying, ‘These are our needs and this is how our money should be spent.’ Farmers understand that in order to keep AgEP a voluntary plan they need to pay attention to their conservation practices.”
Hunter admits the hardest part of switching to BMPs was changing his mindset and getting on board. Originally, Hunter was looking at the Bay’s pollution problems as a whole, but with optimistic thinking and assistance from PVCD, he realized that the best way to overcome a large problem was to cross one bridge at a time.
It wasn’t long before the Williams family started to see results: fencing off streams from cattle led to cleaner water; building barns to overwinter cows allowed them to grow an average of 75 pounds heavier than before, making them more valuable to the farm.
By using BMPs, the Williams family has set a positive example for farmers across the watershed, proving that with hard work and a ‘sky is the limit’ mentality, seemingly impossible goals can be met.
Hunter points out, “We are proud to know that if you are traveling to Misty Mountain Farm you can’t say, ‘Hey, these guys aren’t doing their part.’”
Video produced by Steve Droter.
An investment in habitat conservation could be a smart one for fisheries and the economies that depend on them, according to a new report.
In More Habitat Means More Fish, released this week by Restore America's Estuaries, the American Sportfishing Association and the National Oceanic and Atmospheric Administration, the link between healthy habitats and strong fisheries is made clear: without feeding or breeding grounds, fish cannot grow or reproduce, which means fewer fish and a decline in fisheries-dependent jobs, income and recreational opportunities.
Most of the nation’s commercial and recreational fish depend on coastal and estuarine habitats for food and shelter. Investments and improvements in these habitats can have immediate and long-lasting effects on fish populations.
The construction of an oyster reef, for instance, can provide food and shelter to a number of aquatic species. The conservation of marshes and underwater grass beds can boost the number and diversity of fish and their prey. And the restoration of fish passage to once-blocked rivers can open up new habitat to those species that must migrate upstream to spawn.
“Investing in coastal and estuarine habitat restoration is essential… for the long-term future of our fisheries,” said Restore America's Estuaries President and CEO Jeff Benoit in a media release. “In order to have fish, we have to have healthy habitat. If we want more fish, we need more healthy habitat.”
Read more about More Habitat Means More Fish.
Restoring urban streams can help restore urban communities, according to a new analysis from the U.S. Geological Survey (USGS).
In a report released last week, the USGS documents the contributions that the restoration of an Anacostia River tributary made to the Washington, D.C., metropolitan area, from the creation of jobs to the creation of open space for residents. The yearlong restoration of a 1.8 mile stretch of Watts Branch is one in a series of case studies highlighting the economic impacts of restoration projects supported by the Department of the Interior.
Image courtesy U.S. Fish and Wildlife Service Northeast Region
Completed in 2011, the efforts to restore Watts Branch included the restoration of an eroded stream channel and the relocation and improvement of streamside sewer lines. The work—a collaboration between the U.S. Fish and Wildlife Service, the National Park Service, the U.S. Environmental Protection Agency, the District Department of the Environment, the D.C. Water and Sewer Authority and others—reduced erosion, improved water quality and wildlife habitat, and provided local residents with an urban sanctuary where green space is otherwise limited.
The restoration project also accounted for 45 jobs, $2.6 million in local labor income and $3.4 million in value added to the District of Columbia and 20 counties in Virginia, West Virginia and Maryland.
According to the EPA, $3.7 million in project implementation costs were funded by multiple agencies and organizations, including the EPA and National Fish and Wildlife Foundation.
Read more about Restoring a Stream, Restoring a Community. | fwe2-CC-MAIN-2013-20-41952000 |
It's a gloomy Sunday evening and Dovid Friedman of Chicago is reading The Very Hungry Caterpillar to his two children, 22-month-old Donny and 3-year-old Kivi. Each time Daddy gets to the end, the kids exclaim, "Again!"
Soon, the children decide they also want to eat through four strawberries, so everyone heads to the kitchen. In their rush to be first, the kids trip over each other, and Donny's cheek smacks into the edge of the door. He screams; Friedman picks him up, kisses him, gets an ice pack and in a few minutes, Donny is happily munching strawberries having completely forgotten the mishap.
The next morning, I drop by, see the bruise that now extends halfway across my grandson's cheek, and exclaim, "What happened to your face?"
Donny replies, "Kivi bite me."
My daughter, Shoshana, the kids' mother, sighs. "Actually, he was running and he crashed into the wall; but Kivi has bitten him so many times, that whenever he notices a bruise or a scrape, he just assumes that Kivi bit him."
While the situation is troubling to my daughter, it's not unusual, experts say. Since young children have limited communication skills, they may not be able to easily express strong emotions such as boredom, frustration or happiness. Combine this with a toddler's natural tendency to put everything in her mouth, and biting can easily result.
"For toddlers and young children, biting is a common behavior," says clinical psychologist Michele Nealon-Woods, assistant professor at the Chicago School of Professional Psychology. "Adults tend to see this as an aggressive act, but usually it's just a physical response to a burst of emotion."
Identify the triggers
It's not just other kids who are the targets; a child may bite a parent while putting up a fuss at naptime or bedtime, for example.
"Some biting incidents can be avoided by identifying what triggers them," says Nealon-Woods.
Just like other negative behaviors, biting is more likely to happen when children are tired, hungry or not feeling well. At these times, offering quiet activities, separating children, providing structure and keeping an extra eye and ear out for developing problems may be helpful.
But a bite can occur suddenly, without warning, even when children appear to be playing happily.
"That's why we sometimes have parents report that their children bite even when they don't seem angry or upset," Nealon-Woods says. Some kids even bite themselves on the hand or arm when they're excited. "A parent might be playing with her child, and the child gives the mother a big hug and then suddenly bites her on the shoulder."
When she was 2½, Rachel Zimmerman's daughter Devorah, now 4, was roughhousing with her father. Without warning, she bit him on the leg. He let out a yelp, and Devorah immediately burst into tears.
"She realized right away what she'd done," says Zimmerman of Chicago. "It was obvious that she wasn't trying to hurt him; she was just excited."
As children mature, gain self-control and develop more language skills, most outgrow their tendency to bite, generally by age 3 or so. If the biting is still occurring at age 3½ or 4, Skokie pediatrician Dr. Cathy Divincenzo says, it's probably time to seek outside help. Your family doctor or pediatrician, or your child's school, should be able to suggest appropriate professionals or other resources.
Controlling the behavior
Biting may be developmentally normal, but it can't be ignored. There's the potential for serious injury to the one who's been bitten (usually another child), as well as the likelihood that a child who bites may be thought of by others (usually the parents of the bitten children) as a "bad" kid.
There are things parents can do to help control the problem. For example, Nealon-Woods suggests providing a bib or a squishy toy the child can bite on in an emotionally intense moment.
If the child bites someone else, she recommends a calm but immediate response. "The child needs to see what he's done, but it's the child who has been bitten who should get the parent's attention at that moment," she says.
Generally that means taking the victim and leaving the room, which teaches the biting child that biting doesn't automatically mean extra parental attention.
Any discussion following the biting should be brief, but can still help the biter understand the effects of his actions. This is also the time to model more appropriate behavior, says Nealon-Woods. "You can say to your child things like, 'I know you're angry,' and teach children the vocabulary they can use in order to express themselves with words."
Some children may need a further consequence. Divincenzo says a time-out after every biting incident is often the best way to curb this behavior. But whatever you do, make sure you do it right away, Nealon-Woods says. "You can't say to a 2- or 3-year-old, 'Wait till your father gets home,' for example," she explains. By that time (or even just an hour after the incident) the child is likely to have, at best, a hazy memory of the episode, and any discussion or punishment at that point isn't likely to be effective.
Some parents opt to respond to a child's biting by biting the child (gently, but enough to cause some discomfort). They reason that a child may not understand that biting hurts, and this is a very effective way to show her. "If we're trying to teach the child that biting is wrong because it hurts, then biting the child sends a confusing message," says Nealon-Woods. "It just reinforces the child's belief that this [biting] is how we solve the problem."
Nothing seemed to stop Zimmerman's son, Ezzy, 2½, from biting his 4-year-old sister. "I talked to him about it each time, but I really think he was just too young to understand any explanations," Rachel says.
One day, following yet another biting incident, she took Ezzy into the kitchen and put a drop of dish detergent on his tongue. "I only left it there for a second, and then I rinsed his mouth out with water right away," she says. Ezzy's biting became much less frequent afterwards.
It's important to remember that the child who is bitten may need more than just a comforting hug. "With any bite that breaks the skin, there's a potential for infection," Divincenzo says, "especially when the bite is on the hand."
She advises washing the wound with soap and water and applying an over-the-counter antibiotic cream or ointment. It may take a week or so for the injury to heal; during that time, as with any cut, parents should contact their child's doctor if they notice signs of infection, such as increased redness, swelling or tenderness.
And comfort yourself with the knowledge that, most likely, this is just another childhood phase. Both Friedman and Zimmerman report that both their sons have been "bite-free" for several months.
Bites away from home
Preschool classrooms are a common setting for biting. At the Early Childhood Education Center at Concordia University in River Forest, Director Doris Knuth says biting rarely is a recurring problem, but when it is, a meeting with the child's parents is arranged. "[U]sually, if the biting is happening here at school, it's happening in other settings as well, and the child needs to have consistent responses everywhere," she says.
Carla Young, principal of the Nursery School and Kindergarten at the University of Chicago Laboratory School, says parents need to be involved if biting becomes serious. "We don't necessarily call the parents if there is a single incident."
If the biting continues, the child may be sent home for the day, and the parents will be asked to come in for a discussion. "If it's a serious, recurring problem, we may ask a social worker to come in also," says Young.
In the intensity of discussions about the child's behavior, parents' feelings often take a back seat. "It's very embarrassing to be the parent of a 'biter,' " says Knuth. "There's a common perception that children who bite are 'bad' children."
That's not true, she says, but parents don't always understand that. "When a child is bitten, we don't tell her parents who did it," she continues, "but the kids certainly talk about it, so by the next day, everyone knows."
To correct misinformation about biting, the school sometimes provides handouts or articles. Nealon-Woods says the director of her child's daycare center sent home a brochure about toddlers and biting, with helpful advice for parents (for example, if you play "This little piggy," don't bite their piggies).
"The more information that parents have, and the more educated they are about kids and biting, the more likely they are to understand it," she says.
It's really no different than having a child who has head lice, Nealon-Woods says.
"You feel embarrassed if it's your child, but it doesn't mean you're a bad parent, or that your child is a bad kid."
Phyllis Nutkis is a writer and former preschool teacher living in Skokie. She and her husband have three grown children and two grandchildren.
| fwe2-CC-MAIN-2013-20-41955000 |
NASA's SpaceWeather web page reports that the roughly 50-foot-diameter space rock, officially named "asteroid 2012 TC4," will pass within 60,000 miles of Earth, about a quarter of the distance between Earth and the moon.
"There is no danger of a collision," says the website.
The asteroid, which is not visible to the naked eye, will come closest to Earth at about 10:30 p.m. local time.
"NASA hopes to ping this object with radar, refining its orbit and possibly measuring its shape," says the web page.
The object was discovered on Sunday. | fwe2-CC-MAIN-2013-20-41956000 |
A giant squid, captured for the first time in its deep-sea habitat, swimming in July in the Pacific Ocean off northern Japan. / NHK/NEP/Discovery Channel/AP
The first images of the human-shy giant squid in its natural habitat, filmed in the black depths of the Pacific Ocean, have been released.
The silvery, 9-foot-long mollusk - scientists said it could have been up to 26 feet long if its two longest tentacles had not been severed - was filmed in July off Japan's Chichi island by a three-man crew from the National Museum of Nature and Science. The scientists followed the creature in their submersible to a depth of more than 2,700 feet as it vanished into the ocean darkness, where oxygen is scarce and pressure is enormous.
Japanese broadcaster NHK showed footage Monday. The squid, with black eyes the size of dinner plates, clutched a bait squid in its remaining arms as it swam against the current.
"It was shining and so beautiful," researcher Tsunemi Kubodera told AFP. "I was so thrilled."
The footage will be part of a Discovery Channel show, Monster Squid: The Giant Is Real, which will air Jan. 27 as the season finale of Curiosity.
Discovery News announced the find last month:
With razor-toothed suckers and eyes the size of dinner plates, tales of this creature have been around since ancient times. The Norse legend of the sea monster the Kraken, and the Scylla from Greek mythology, might have derived from the elusive giant squid.
This massive predator has always been shrouded in secrecy, and every attempt to capture a live giant squid on camera in its natural habitat has failed. Until now.
In 2006, Kubodera said, he filmed what he claimed was the first live video footage of a giant squid as it was hooked and brought aboard his ship.
Known to scientists as Architeuthis, the giants eat other types of squid and grenadier, a deep-sea fish, and can grow to more than 30 feet long and weigh 450 pounds.
| fwe2-CC-MAIN-2013-20-41959000 |
Anesthesia is broken down into three main categories: local, regional, and general, all of which affect the nervous system in some way and can be administered using various methods and different medications.
Here's a basic look at each kind:
- Local anesthesia. An anesthetic drug (which can be given as a shot, spray, or ointment) numbs only a small, specific area of the body (for example, a foot, hand, or patch of skin). With local anesthesia, a person is awake or sedated, depending on what is needed. Local anesthesia lasts for a short period of time and is often used for minor outpatient procedures (when patients come in for surgery and can go home that same day). For someone having outpatient surgery in a clinic or doctor's office (such as the dentist or dermatologist), this is probably the type of anesthetic used. The medicine used can numb the area during the procedure and for a short time afterwards to help control post-surgery discomfort.
- Regional anesthesia. An anesthetic drug is injected near a cluster of nerves, numbing a larger area of the body (such as below the waist, like epidurals given to women in labor). Regional anesthesia is generally used to make a person more comfortable during and after the surgical procedure. Regional and general anesthesia are often combined.
- General anesthesia. The goal is to make and keep a person completely unconscious (or "asleep") during the operation, with no awareness or memory of the surgery. General anesthesia can be given through an IV (which requires sticking a needle into a vein, usually in the arm) or by inhaling gases or vapors by breathing into a mask or tube.
The anesthesiologist will be there before, during, and after the operation to monitor the anesthetic and ensure you constantly receive the right dose. With general anesthesia, the anesthesiologist uses a combination of various medications to do things like:
- relieve anxiety
- keep you asleep
- minimize pain during surgery and relieve pain afterward (using drugs called analgesics)
- relax the muscles, which helps to keep you still
- block out the memory of the surgery
How Does Anesthesia Work?
To better understand how the different types of anesthesia work, it may help to learn a little about the nervous system. If you think of the brain as a central computer that controls all the functions of your body, then the nervous system is like a network that relays messages back and forth from it to different parts of the body. It does this via the spinal cord, which runs from the brain down through the back and contains threadlike nerves that branch out to every organ and body part.
The American Society of Anesthesiologists (ASA) compares the nervous system to an office's telephone system — with the brain as the switchboard, the nerves as the cables, and body parts feeling pain as the phones. Here's how the ASA puts it into perspective:
- With local anesthesia, the phone (the small part of the body being numbed) is "off the hook" and, therefore, can't receive calls (pain signals) from the switchboard (the brain) or the phone cables (the nerves).
- With regional anesthesia, the phone cable (the nerves) is broken, causing all of the area's phones (entire area of the body being numbed) to be out of service.
- With general anesthesia, the switchboard operator (the brain) is on a break and, therefore, can't connect incoming calls (pain signals).
Will I Get a Needle?
Often, anesthesiologists may give a person a sedative to help them feel sleepy or relaxed before a procedure. Then, people who are getting general anesthesia may be given medication through a special breathing mask or tube first and then given an IV after they're asleep. Why? Because many people are afraid of needles and may have a hard time staying still and calm.
What Type of Anesthesia Will I Get?
The type and amount of anesthesia given to you will be specifically tailored to your needs and will depend on various factors, including:
- the type of surgery
- the location of the surgery
- how long the surgery may take
- your current and previous medical condition
- allergies you may have
- previous reactions to anesthesia (in you or family members)
- medications you are taking
- your age, height, and weight
The anesthesiologist can discuss the options available, and he or she will make the decision based on your individual needs and best interests.
Reviewed by: Judith A. Jones, MD
Date reviewed: April 2009
| fwe2-CC-MAIN-2013-20-41961000 |
Facing an uncertain future: How forests and people can adapt to climate change
Center for International Forestry Research (CIFOR), Bogor, Indonesia
The most prominent international responses to climate change focus on mitigation (reducing the accumulation of greenhouse gases) rather than adaptation (reducing the vulnerability of society and ecosystems). However, with climate change now inevitable, adaptation is gaining importance in the policy arena, and is an integral part of ongoing negotiations towards an international framework. This report presents the case for adaptation for tropical forests (reducing the impacts of climate change on forests and their ecosystem services) and tropical forests for adaptation (using forests to help local people and society in general to adapt to inevitable changes). Policies in the forest, climate change and other sectors need to address these issues and be integrated with each other—such a cross-sectoral approach is essential if the benefits derived in one area are not to be lost or counteracted in another. Moreover, the institutions involved in policy development and implementation need themselves to be flexible and able to learn in the context of dynamic human and environmental systems. And all this needs to be done at all levels from the local community to the national government and international institutions. The report includes an appendix covering climate scenarios, concepts, and international policies and funds. | fwe2-CC-MAIN-2013-20-41969000 |
Profile in Courage
In the political thriller Thirteen Days, Kevin Costner explores the Cuban Missile Crisis and how John and Robert Kennedy saved the world.
From the Print Edition:
Kevin Costner, Nov/Dec 00
On the evening of October 22, 1962, President Kennedy addressed the American people from the Oval Office: "Within the past week, unmistakable evidence has established the fact that a series of offensive missile sites is now in preparation on that imprisoned island. The purpose of these bases can be none other than to provide a nuclear strike capability against the Western Hemisphere.... It shall be the policy of this nation to regard any nuclear missile launched from Cuba against any nation in the Western Hemisphere as an attack by the Soviet Union on the United States, requiring a full retaliatory response upon the Soviet Union..." While the Pentagon went to "Defcon 2" (Defcon 5 is peace; Defcon 1 is war) and the world anxiously held its breath, Salinger remembers walking with Kennedy one morning through the Rose Garden and the president confiding in him: "'You know, if this [blockade] doesn't work, if we don't succeed in bringing this crisis to an end,' Kennedy said, 'hundreds of millions of people are going to get killed.'"
To assure that didn't happen, Kennedy took a further step by initiating secret "back-channel" communications with Soviet Premier Nikita Khrushchev, using Robert Kennedy, Pierre Salinger, American journalists and two KGB officials in Washington, thereby being able to bypass both the Joint Chiefs and the Politburo. Finally, on Sunday, October 28, the Soviet government blinked and agreed to remove the missiles from Cuba, and in return Kennedy agreed to withdraw Jupiter ballistic missiles from Turkey. But according to Salinger, Kennedy had already planned to withdraw those missiles four months before the crisis, as they were considered obsolete and were scheduled to be replaced by submarine-launched Polaris missiles. "But this way, Kennedy allowed Khrushchev to save face with his country," says Salinger.
Negotiating a peaceful settlement that Sunday, however, left the Joint Chiefs irate. They believed a U.S. attack on Cuba was still justified. "We've been had!" fumed Adm. George Anderson. And Gen. Curtis Le May angrily insisted, "Why don't we just go in and make a strike anyway!" The morning after the resolution to the crisis, Kennedy told Salinger that "the military have gone mad!"
JFK's diplomacy had left the military and intelligence officers more insistent than ever that he was "soft on communism," says Salinger, adding that shortly before he was assassinated, Kennedy had planned on dropping the embargo on Cuba and normalizing relations with that nation. Otherwise, Kennedy told Salinger, "The Soviet Union is going to completely run that place and we don't want them dominating that area." Salinger also reveals that Kennedy planned on opening relations with China, but couldn't do so until his second term, "because people would label him a Communist," says Salinger.
Costner, who was seven years old during the crisis and living with his family in Compton, California, remembers those tense days as "stirrings of things." He sits on the steps outside his movie-set trailer and talks of hearing about bomb shelters and a hoarding of food. "But my parents were very careful to not let on that there was imminent danger. I just remember there was definitely something in the air. And I remember getting under the desk in school and doing the drills. It was great for me," he says with a laugh, "because it wasn't math! I mean, my theory on it was just as good as anybody else's, because you knew bomb shelters weren't going to work, so for me it was extended recess."
As the crew busily prepares for the next scene, Thirteen Days director Roger Donaldson--who previously worked with Costner on the hit 1987 spy thriller No Way Out--confers briefly with Costner and Bruce Greenwood on the last shot of the day. Greenwood, who researched voluminously for his portrayal of JFK, admits he watched "hundreds of hours of tapes," read a stack of books "that literally comes up to my belt," and also had a breakfast meeting with Salinger to help prepare for the role.
"To be able to sit in the same room and talk with someone who had actually been there made it very personal for me," Greenwood says. "All I knew about what happened was through history books, but it was a profound jump to be placed in this room with a guy who was in the middle of it all--it really helped place me there." Greenwood feels that what will surprise audiences most about Thirteen Days, due out December 22, is how close the world actually came to annihilation. "They'll be simply astounded that it was only through the efforts of a few good men that Armageddon was sidestepped," says Greenwood, who starred as the villainous husband opposite Ashley Judd and Tommy Lee Jones in last year's surprise hit Double Jeopardy. "The pressure that was on [John] Kennedy to respond militarily and politically was incalculable. And that strength of character and courage to avoid making a decision that would have changed the world forever is something people maybe don't quite realize."
Costner figures that the resolve the Kennedy brothers showed in refusing to bow to military pressure stemmed directly from the fiasco of the Bay of Pigs invasion in 1961, which had been initiated by President Eisenhower a year before JFK took office. "When the Kennedys came into office, they were really being pushed around by [long-time CIA director] Allen Dulles and these military guys saying, 'Hey, this plan has been in the works for two years, whereas you just won the election, so don't fuck this up for us!' And Kennedy, being new and inexperienced and giving a certain amount of respect to them thought, 'Oh, OK.' Yet none of those guys were standing around to accept the blame. And there were the Kennedys, like on a beach with a giant tsunami coming to sweep them up. But to their credit, I think the Bay of Pigs is what saved us all in the end because they didn't repeat the weakness, or maybe what people perceived as weakness."
Costner, Greenwood and Culp are summoned back on set for what will be one of the final shots of the film--a walk back from the Rose Garden along the portico. The setting sun of a gorgeous autumn afternoon is duplicated on the soundstage and it illuminates the enormous White House set with a brilliant orange hue. As the three actors pass the white neoclassical pillars of the cloister and walk out of the frame, Donaldson keeps the camera stationary on the long shadows being cast by these men on the outside wall of the Oval Office. It's a truly poignant moment, one that will have enormous visual impact in the finished film. After some minor adjustments in the blocking, a second take is filmed and deemed the better of the two.
Filming is wrapped for the weekend, but Costner stays to view the scene on a playback monitor. A contented smile spreads across his face. "You know, the Kennedys really were golden during this and the world definitely needed them," says an obviously wistful Costner, watching the flickering video images of the last scene. "I think if you believe in God at all, you believe God had his hand on this country and on them. This is our tipping of the hat to some true heroism that occurred through sheer intellect and the power of these young men's character."
| fwe2-CC-MAIN-2013-20-41970000 |
Alpha-beta pruning is a search algorithm which seeks to reduce the number of nodes that are evaluated in the search tree by the minimax algorithm. It is an adversarial search algorithm commonly used for machine playing of two-player games (tic-tac-toe, chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. Alpha-beta pruning is a sound optimization in that it does not change the result of the algorithm it optimizes.
Allen Newell and Herbert Simon, who used what John McCarthy calls an "approximation" in 1958, wrote that alpha-beta "appears to have been reinvented a number of times". Arthur Samuel had an early version, and Richards, Hart, Levine and/or Edwards found alpha-beta independently in the United States. McCarthy proposed similar ideas during the Dartmouth Conference in 1956 and suggested it to a group of his students, including Alan Kotok at MIT, in 1961. Alexander Brudno independently discovered the alpha-beta algorithm, publishing his results in 1963. Donald Knuth and Ronald W. Moore refined the algorithm in 1975, and it continued to be advanced.
The benefit of alpha-beta pruning lies in the fact that branches of the search tree can be eliminated. The search time can in this way be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to the branch and bound class of algorithms. The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near-optimal order (best choice for the side to move ordered first at each node).
With an (average or constant) branching factor of b, and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b*b*...*b) = O(b^d) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of positions searched is about O(b*1*b*1*...*b) for odd depth and O(b*1*b*1*...*1) for even depth, or O(b^(d/2)) = O(√(b^d)). In the latter case, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation. The explanation of b*1*b*1*... is that all the first player's moves must be studied to find the best one, but for each, only the best second player's move is needed to refute all but the first (and best) first player move – alpha-beta ensures no other second player moves need be considered. If b = 40 (as in chess), and the search depth is 12 plies, the ratio between optimal and pessimal sorting is a factor of nearly 40^6, or about 4 billion times.
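As a rough sanity check of those figures, the short Python snippet below (an illustration added here, not part of the original analysis) compares b^d with the approximate best case of about b^(d/2) for the chess-like values quoted above, ignoring constant factors:

b, d = 40, 12                     # branching factor and search depth in plies

pessimal = b ** d                 # worst-case leaf count, same as plain minimax
optimal = b ** (d // 2)           # rough best-case leaf count, about b^(d/2)

print(f"pessimal ordering: about {pessimal:.2e} leaf positions")
print(f"optimal ordering:  about {optimal:.2e} leaf positions")
print(f"ratio: about {pessimal / optimal:.2e}")   # 40**6 is roughly 4.1e9, i.e. ~4 billion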
Normally during alpha-beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try and find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening.
The algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of respectively. Initially alpha is negative infinity and beta is positive infinity. As the recursion progresses the "window" becomes smaller. When beta becomes less than alpha, it means that the current position cannot be the result of best play by both players and hence need not be explored further.
function alphabeta(node, depth, α, β)
    (* β represents the previous player's best choice - doesn't want it if α would worsen it *)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    foreach child of node
        α := max(α, -alphabeta(child, depth-1, -β, -α))
        (* use symmetry, -β becomes subsequently pruned α *)
        if β ≤ α
            break    (* beta cut-off *)
    return α
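For readers who prefer runnable code, the following Python sketch mirrors the pseudocode above in its negamax form. The children() and evaluate() helpers are hypothetical stand-ins for a real game's move generator and heuristic evaluation function, and the toy tree at the end is purely illustrative:

import math

def children(node):
    # Hypothetical move generator: a node is either a list of child nodes or a leaf.
    return node if isinstance(node, list) else []

def evaluate(node):
    # Hypothetical heuristic: a leaf carries its own score, from the side to move.
    return node

def alphabeta(node, depth, alpha, beta):
    # Negamax alpha-beta, following the pseudocode above.
    kids = children(node)
    if depth == 0 or not kids:                    # terminal node or depth limit reached
        return evaluate(node)
    for child in kids:
        # Negate and swap the window: the child is scored from the opponent's viewpoint.
        alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha))
        if beta <= alpha:                         # beta cut-off
            break
    return alpha

# Toy two-ply tree: the side to move at the root can guarantee a score of 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, 2, -math.inf, math.inf))    # prints 3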
Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search parts of the tree that are likely to force alpha-beta cutoffs early. For example, in chess, moves that take pieces may be examined before moves that do not, or moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta-cutoff at the same level in the tree search is always examined first. This idea can be generalized into a set of refutation tables.
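A minimal Python sketch of such ordering, building on the earlier example; the is_capture and history_score attributes and the killer_moves table are hypothetical names chosen for illustration:

def order_moves(moves, depth, killer_moves):
    # Put likely cut-off producers first: the killer move recorded for this depth,
    # then capturing moves, then moves that scored well in earlier, shallower searches.
    def key(move):
        return (move == killer_moves.get(depth), move.is_capture, move.history_score)
    return sorted(moves, key=key, reverse=True)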
Alpha-beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as aspiration search. In the extreme case, the search is performed with alpha and beta equal; a technique known as zero-window search, null-window search, or scout search. This is particularly useful for win/loss searches near the end of a game where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failed high (high edge of window was too low) or low (lower edge of window was too high). This gives information about what window values might be useful in a re-search of the position.
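Assuming integer-valued scores and the alphabeta routine sketched earlier, a zero-window probe reduces to a one-line wrapper; it cannot return an exact score, only whether the true score reaches a given bound:

def null_window_test(node, depth, bound):
    # Search with an empty window (alpha = bound - 1, beta = bound): the result only
    # tells us whether the side to move can score at least `bound` (a fail-high).
    return alphabeta(node, depth, bound - 1, bound) >= bound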
More advanced algorithms that are even faster while still being able to compute the exact minimax value are known, such as NegaScout (Principal Variation Search) and MTD(f).
Since the minimax algorithm and its variants are inherently depth-first, a strategy such as iterative deepening is usually used in conjunction with alpha-beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints that can help produce cutoffs for higher-depth searches much earlier than would otherwise be possible.
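A simple driver in that style might look like the sketch below, again reusing the hypothetical alphabeta and children helpers from the earlier example; for brevity it only checks the time budget between depths and does not carry move-ordering hints forward:

import math
import time

def iterative_deepening(root, max_depth, time_limit):
    # Search depths 1, 2, ..., max_depth, always keeping the best move from the
    # deepest search that completed, so an interruption still yields a usable move.
    deadline = time.monotonic() + time_limit
    best_move, best_score = None, -math.inf
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        score, move = -math.inf, None
        for child in children(root):
            value = -alphabeta(child, depth - 1, -math.inf, math.inf)
            if value > score:
                score, move = value, child
        best_move, best_score = move, score
    return best_move, best_score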
Algorithms like SSS*, on the other hand, use the best-first strategy. This can potentially make them more time-efficient, but typically at a heavy cost in space-efficiency. | fwe2-CC-MAIN-2013-20-41971000 |
The array of official privileges that made the SUV possible all had as a necessary precondition its classification as a light truck. This was not unreasonable, given that SUVs began by being mounted on the chassis of pick-up trucks. American Motors, maker of the Jeep, lobbied successfully for this decision in the early 1970s, at a time when any other choice risked putting that company out of business.
Light trucks had the political advantage that they were driven by farmers and small businessmen, often self-employed. Also, because they were a tiny fraction of the automotive market, regulators and environmentalists discounted their importance. Many favors followed.
There were tax breaks. In 1978 Congress enacted a gas guzzler tax applying to the purchase price of cars that exceeded mileage standards—sports cars, principally—but not to light trucks. After 1984, depreciation rules in the tax code enabled purchasers of light trucks, but not luxury cars, to write the cost off if they used the vehicles for business purposes. The change was meant to close a loophole that had enabled real estate agents, for example, to write off the cost of a Cadillac or Lincoln sedan. In 1990 Congress exempted light trucks from a 10 percent luxury tax that it imposed on cars costing more than $30,000.
There were looser standards or no standards for fuel mileage, safety, and emissions. The car mileage standard reached 27.5 miles per gallon in the mid-1980s, whereas the light truck standard leveled off at 20.5 miles per gallon. Safety rules regarding headrests, steel beams inside doors, stopping distances, bumper heights, and the durability of tires were applied to cars but not light trucks. The Clean Air Act of 1990 limited car emissions to 0.4 gram of nitrogen oxides per mile but allowed 1.1 gram for pick-ups.
With different political origins, there were also tariff protections. The light trucks of foreign manufacturers are subject to a tariff of 25 percent.
Initially, officials made these decisions without anticipating their effects on the automotive industry. The heavy tariff on light trucks grew out of a dispute with Western Europe over American exports of frozen chickens in the early 1960s. A staff economist who worked on automotive taxes in Congress told Bradsher, "Nobody anticipated the move to make...sport utilities into limousines. There were no such things as the Lincoln Continentals of sport utility vehicles...."
Another innocent decision-maker was the redoubtable Joan Claybrook, head of the National Highway Traffic Safety Administration (NHTSA) during the Carter Administration, who increased the gross vehicle weight limit for light trucks, beyond which fuel economy standards would not apply, to 8,500 pounds. At the time, this seemed like a high number. Claybrook raised the exemption limit from 6,000 pounds because manufacturers had begun exceeding the lower number in order to escape regulation. Predictably, in time they exceeded the higher number as well, and today they manufacture SUVs with weights of 8,550 to 8,600 pounds, thus escaping all fuel-economy regulations and some emission standards as well.
In time, Washington's protections became knowing and purposeful. Why not? SUVs were the salvation of the domestic automobile industry, whose demise had been chronicled by Brock Yates and David Halberstam in the 1980s. Unable to compete with Japan in the manufacture of heavily regulated cars, the American industry would rise once more—high, mighty, and artfully gaming the government's rules—on the axles of the SUV.
There were two breakthroughs. One came in the mid-1980s with the popularity of the Jeep Cherokee, a mid-sized SUV with four doors and two rows of seats, which made it suitable for families. Mid-size SUVs went from one-tenth of one percent of the auto market in 1980 to 3.55 percent in 1989. Whereupon Chrysler bought American Motors for $1.5 billion in order to get the Jeep division and Ford and General Motors made plans to build their own mid-sized SUVs. Ford's Explorer and GM's Chevy Blazer were instantly popular when they debuted in 1990.
The breakthrough in the luxury class came after the lenient air quality standards of 1990 enabled manufacturers to build SUVs with bigger engines. By 1996 they commanded half of the luxury market, and new models continued to appear, such as the Lincoln Navigator in 1997 and the Cadillac Escalade in 1998. The triumph of the SUV, now leather-seated for a Manhattan soiree, was complete. SUV sales in 2001 passed those of mid-sized cars, a category that includes the Honda Accord, Toyota Camry, Ford Taurus, and Volkswagen Passat.
No wonder that politicians of both parties combined in 2001 to defeat higher mileage standards for light trucks. The vote in the House was 269 to 160, and in the Senate, 62 to 38. Michigan's 75-year-old John Dingell, the longest-serving member of the House, put an avuncular arm around Bradsher's shoulder at a political deck party at Dingell's house, and confided that he looked forward to more such votes. "I see things with great objectivity," he said, "and a strong auto industry is in the interests of this country."
Bradsher does not absolve the industry from responsibility for making an unsafe product, prone to killing its occupants in rollovers and spearing objects (and people) with its steel truck frame in collisions. Yet this book is not the anti-industry polemic that reviews such as Gregg Easterbrook's in The New Republic ("Axle of Evil," Jan. 20, 2003) would lead one to expect. Blurbs to the contrary notwithstanding, Keith Bradsher is not Ralph Nader. He locates the industry's political power more in the UAW and in dealerships than in the corporate heads, who are not especially big campaign contributors. He gives William Clay Ford credit for wanting to swim against the corporate tide and sell social responsibility before capsizing in the vicious family fight with Firestone over who was to blame for rollovers of the Explorer.
Bradsher is also even-handed enough to ask where the environmental movement was while SUVs were overtaking cars, and to answer that its leaders were driving SUVs and giving priority to saving whales. Implicitly, without being immodest about his own superior coverage of the industry, he places some responsibility on the automotive press, which the industry woos with junkets to the Himalayas. There, reporters who love to drive can test the ability of SUVs to climb peaks higher than any to be found in the suburbs of Los Angeles, Miami, or Manhattan, the more usual venue of the vehicle type.
Nor does Bradsher spare the purchasers, who are impossible to portray as hapless victims of an exploitative industry when they are among the country's wealthiest and best-educated adults. He says they fear crime and have fantasies about life in the rough, off-road. Yet it is not evident that consumers have demanded the high ride they get with SUVs. Under the government's regulations, an SUV can qualify as a light truck only if the manufacturer vouches that it is "capable of off-highway operation." This requires considerable ground clearance. The high ride may have more to do with the industry's perfectly predictable exploitation of the government's rules than Manhattanites' wanting to pretend that they live in the mountains.
Bradsher ends pessimistically, arguing that the menace of SUVs will grow as they spread to other countries where there are more pedestrians and cyclists, with fewer curbs and sidewalks, while in this country used vehicles will trickle down to drivers less skilled, responsible, and educated than the original purchasers. Bradsher sees few ways to minimize the harms of existing SUVs other than to raise gasoline taxes, raise liability insurers' rates so as to penalize models with poor safety records, and prohibit the use of grille guards in cities, where the chances of fatal accidents are greatest.
To govern future production, he urges NHTSA to set standards for stability, crash compatibility (to safeguard occupants of other vehicles), and headlight heights. He would close tax loopholes and tighten emission standards, but fuel economy he concedes to be a very complicated matter, hard to regulate without perverse consequences. Before doing anything else, he would have Congress eliminate the distinction between cars and light trucks on the theory that this would encourage manufacturers to make more cars in order to keep their fleet averages within regulatory standards.
Bradsher takes no comfort in the rise of so-called crossover utility vehicles, which are built on car platforms with unitized bodies and car powertrains modified for all-wheel drive. Established by Toyota with the RAV4 and Lexus RX300, this tamer type quickly became popular after 1996, especially with women. It now includes domestic versions—the Ford Escape, Pontiac Aztek, and Buick Rendezvous. Growth is expected to continue as new models appear, including this year the Volvo XC90, which is being heavily advertised for safety features ("the first SUV in the world with roll stability control") and freedom from guilt ("the first SUV in the world to transform ozone into oxygen").
Bradsher objects that consumers are using crossovers to replace cars rather than bigger SUVs and that manufacturers use them to game mileage standards by bringing down fleet averages. Yet the fact is that since crossovers were introduced, sales of the larger SUVs have leveled off. At the beginning of 2003, sellers for the first time were offering huge rebates on the biggest models—as much as $8,500 on the Ford Expedition and $6,600 on the Dodge Durango. Meanwhile, crossovers have become the fastest growing segment of the automotive market, with unit sales up by 78 percent in 2000, 87 percent in 2001, and 23 percent in 2002.
The mileage of crossovers almost equals that of cars. To meet regulatory requirements, their manufacturers still claim that they are capable of off-road travel, but the engineering evidence for this is modest, as in skid plates that protect their engines and transmissions from rock damage or tires that can grip in mud. They could make their way up a rutted mountain road to an owner's second home, but not haul a horse trailer or boat. They are easier to handle and park than big SUVs, and have a quieter ride. At least for the occupants of other vehicles, they are presumably safer than the higher-riding SUVs built on truck frames that puncture their crash objects with steel.
One wonders what American highways would look like if car-based utility vehicles had been manufactured at the beginning of the SUV boom instead of ten years later. Did the menacing version prevail initially because it appealed to the "darkest shadows of human nature," in Bradsher's lugubrious phrase, or because the industry's economics and the government's regulations perversely interacted to propel it to predominance? Indubitably popular, it may not in the long run prove to be more so than a scaled-back, smoother riding, more sensible alternative.
Perhaps it is not claiming too much to interpret crossovers as consumers' modestly qualified contribution to the current backlash against SUVs. If the big ones do suffer a prolonged decline, at least relatively, it is hard to predict the extent to which Bradsher's book will be credited. Easterbrook compares Bradsher to Nader and Ida Tarbell but then complains that no one is paying attention to him. With less information but more punch, critics ranging from Arianna Huffington to the Evangelical Environmental Network ("What would Jesus drive?") have dramatized the issue. The Bush Administration's head of NHTSA, Jeffrey Runge, said he would not let one of his children drive an SUV with a poor safety record if it was the last one on earth. While the backlash rages on billboards and TV—with help from higher gas prices and the publicity given to rollovers—the author is half a world away, now serving as the Times's bureau chief in Hong Kong.
Bradsher's well-researched book deserves a wide audience among connoisseurs of public policy. From paints to pesticides, cellphones to shower heads, today's consumer products are often profoundly shaped by government's decisions, mostly in ways that citizens do not perceive and often with effects that the office-holders do not anticipate. "We cannot regulate ourselves out of this mess," Runge remarked in exasperation at SUVs. Perhaps not. But to a considerable extent, we regulated ourselves into it.
The U.S. Department of Energy (DOE) is ramping up its CSP research, development, and deployment efforts, leveraging both industry partners and the national laboratories. DOE’s goals include increasing the use of CSP in the United States, making CSP competitive in the intermediate power market by 2015. DOE’s goals also include developing advanced technologies that will reduce systems and storage costs, enabling CSP to be competitive in the baseload power market by 2020.
DOE plans to achieve these goals through cost-shared contracts with industry, advanced research at its national laboratories, and collaboration with other government agencies to remove barriers to deploying the technology.
About Concentrating Solar Power
Concentrating Solar Power is a technology to convert solar radiation into electricity. CSP technologies use mirrors or lenses to concentrate light energy from the sun. This light energy is converted to heat to create steam to drive a turbine that generates electrical power.
In Northeast Ohio, coyotes trot across runways at Burke Lakefront Airport within sight of Cleveland's skyscrapers, and dodge traffic on busy highways like U.S. 422 near Solon.
In Chicago a couple of years ago, one sauntered into a sandwich shop in the downtown Loop and plopped down in the beverage cooler.
In New York, a frisky 35-pound male led cops, reporters and TV news helicopters on an hours-long romp around Central Park before being shot with a tranquilizer dart.
Once the denizens of southwestern deserts and Great Plains prairies, mobile and highly adaptable coyotes have been on a relentless eastward march for much of the 20th century, aided by suburban land-clearing and the elimination of their chief competitor, the gray wolf.
Beginning in the 1990s, their presence escalated in Cleveland and other big Midwestern and Eastern metropolises, not just in fringe parklands, but in neighborhoods and the urban core.
City-dwellers who spot them in parks and back yards are startled to find that a formidable tracker and hunter at the top of the food chain – what ecologists call an apex predator – is in their midst.
"People are walking within a few feet or yards from coyotes every day and they don't know it," said Ohio State University biologist Stan Gehrt, whose decade-long-study of the Chicago area's estimated 2,000 coyotes prompted him to dub them "ghosts of the city." "There are a whole lot more out there than what we see."
Much remains unknown about these elusive creatures. They're one of the largest animals to dramatically expand, rather than shrink, their range in response to humans, though researchers still haven't figured out if they've settled in urban realms in spite of us, or because of us.
"For an animal that's lived around humans for this long, we don't know much about it," said Cleveland State University biologist Robert Krebs.
But new research, including a DNA study by Krebs and several CSU colleagues, is gradually revealing more about the habits of urban coyotes.
The investigations are unearthing some surprises about how the animals got here, what they're doing now, how much of a threat they pose (hint: not a lot, although that may change) and their role in an ongoing evolutionary experiment that could play out in and around Ohio during the next few decades.
The urban coyotes among us now are the descendants of an expanding Western coyote population that spilled from the grasslands of Iowa, Missouri and Indiana into Ohio in the early 1900s. The animals' eastward progress across the state was slow; they didn't spread to western Pennsylvania until 1947.
A separate wave of Western coyotes took another expansion route and followed a different genetic strategy that gave them a big advantage over the slow-moving Buckeye group, according to a study led by New York State Museum biologist Roland Kays.
The coyotes skirted northeastward, moving above and around the Great Lakes. In the Canadian woods, they encountered remnant populations of wolves, which hadn't been eradicated like their American counterparts.
Although natural rivals, a few coyotes and wolves cross-bred, DNA testing shows. "It may have been both sides trying to make the best of things," Kays said.
The resulting Northeastern coyote-wolf hybrids have big bodies, broad skulls and strong jaw muscles, making them better suited to take down deer than Ohio's mainly rodent- and roadkill-eating coyotes.
Kays' research shows the hybrid coyote-wolves spread across Ontario and southward into upstate New York five times faster than the purebred Western coyotes that colonized Ohio and the lower Midwest.
Like advancing armies, the two expansion fronts, one composed of smaller non-hybrid coyotes and the other of larger coy-wolves, are just starting to encounter each other in western New York and western Pennsylvania.
What happens next – further genetic mixing, one side out-competing the other and taking over its territory, or some kind of peaceful but separate co-existence – isn't clear. "That's playing out right now," Kays said.
Whatever their future, Ohio's purebred coyotes have become an enduring, if still mostly stealthy, fixture on the urban landscape.
When Lake Erie freezes, U.S. Department of Agriculture wildlife biologist Randy Outward sees coyotes using the icepack to move east and west along the downtown Cleveland shoreline. "It's like a highway for them," he said.
Suburban police departments, especially those near the Cleveland Metroparks and the Cuyahoga Valley National Park, get complaints from residents awakened by coyote howling, or alarmed to see coyotes jogging through their yards. The parks also hear from people demanding they do something about "their" coyotes.
"A lot of folks think we're a refuge for them. We're not," said Rick Tyler, the Metroparks' senior natural resource manager. "Their home ranges use a lot of our parks, but some cross our boundaries freely. They're well-entrenched in municipalities around them. They're doing pretty well in the suburbs."
The federal and regional park districts keep loose track of the coyotes within their boundaries using "howling surveys," remote trail cameras triggered by the animals' movement, and contact reports from park visitors.
The annual howling surveys, where wildlife specialists and volunteers play amplified recordings of coyote calls and count the number of responses, are an inexact measure. But they indicate that between 100 and 150 coyotes are present in the 33,000-acre national park.
The population climbed during the 1990s but seems to have leveled off, said Lisa Petit, the national park's chief of science and resources management.
"Anecdotally, I'd say there is greater pressure on the [coyote] population," Petit said. That's possibly due to activities outside the park, such as the trapping of nuisance coyotes in neighborhoods and the culling of deer whose carrion otherwise would be a coyote food source.
In the Metroparks, the Brecksville and Bedford reservations both appear to have resident coyote family groups – a dominant "alpha" male and female, their pups and several subordinate animals, Tyler said.
Other coyote family groups seem to be based outside the Rocky River, Mill Stream Run, West Creek, and North and South Chagrin reservations, but include parts of those parks in their "home range" – the area within which they hunt for food, take shelter and raise pups.
Collars equipped with radio transmitters and GPS locators would enable researchers to precisely track the coyotes' movements, a major aid in managing them. But funding isn't currently available for the devices, which cost several thousand dollars apiece.
So CSU graduate student Beth Judy is using a year's worth of Metroparks howling survey results and computer mapping software to try to figure out the animals' habitat and range. She's still crunching data but already knows "they're not just staying in the parks."
Though mobile, Greater Cleveland's coyotes aren't mixing and freely inter-breeding as one big group. Instead, DNA collected from coyote droppings, or "scat," shows three clusters of animals, each genetically distinct from the others. Something is isolating the coyote groups, preventing a more wholesale blending of genes.
The locations of the genetic clusters – one east of the Cuyahoga River, one west of the river but still in the national park, and the third in the Rocky River watershed – suggest that big north-south multi-lane roads, not rivers, are to blame, acting as physical barriers.
"There's no problem with a coyote crossing the Cuyahoga [River]," said Krebs, the CSU biologist who led the analysis. "They don't have the east-west issue except for the highways. [Interstates] 71 and 77 form far stronger boundaries for their movement than the natural features."
The Cleveland-area coyote DNA samples also show a surprising amount of genetic differences from individual to individual. To Kays, that indicates the Ohio population arose from a large, diverse original wave of coyote pioneers arriving from the west, not just a few stalwart explorers.
Krebs thinks the genetic diversity shows that new coyote immigrants are continuously arriving in the area. Greater Cleveland may be what biologists call a "sink habitat," he said – an area where animals can survive, but don't reproduce well enough to sustain a population without replenishment from nearby "source habitats" where coyotes thrive.
In that regard, Cleveland is like a kitchen sink beneath a running faucet. The steady pipeline of rural coyotes that resupply the urban population means that trapping or killing is only a temporary solution to nuisance complaints.
"Solitary animals float around the landscape and have huge home ranges," Gehrt said. "They're looking for gaps and will fill them quickly. If a city wants to hire a trapper, he's got a permanent job."
While wildlife officials acknowledge that removing aggressive coyotes is necessary, their overall strategy is to teach people how to co-exist with the animals.
They aim to prevent coyotes from becoming dependent on people, which should reduce the chance of conflicts. That means keeping cats indoors, not leaving small dogs unattended outside, and securing potential food sources such as garbage and pet food.
It also means accepting that urban coyotes are here to stay, even as scientists continue to puzzle over whether the animals are here in spite of us or because of us.
Some evidence indicates they gravitate to urban sites. When they're captured and relocated to the country, they inevitably try to return. And yet in the city, coyotes avoid humans as much as possible, even altering their behavior and restricting their activity to nighttime to minimize contact.
"They are a walking paradox," Gehrt said. "Every time you say they're a certain way, they do the exact opposite. All these things work against each other. But they work." | fwe2-CC-MAIN-2013-20-41982000 |
Short-acting beta 2-agonist bronchodilators (SABAs) are also called quick-relief, reliever, or rescue medicines. These medicines are used as needed to treat asthma attacks. You and your child should learn to recognize the symptoms of an asthma attack so your child can take this medicine as soon as symptoms start.
This medicine is not used on a regular, daily basis to prevent asthma symptoms. Your child may need a different type of medicine called a controller to keep from having asthma attacks. Controller medicines are taken on a regular schedule to prevent asthma symptoms.
Asthma symptoms are caused by 2 different problems in the airways.
Asthma symptoms often start after your child is exposed to a trigger. Asthma triggers can include pollen, animals, mold, colds, exercise, cold air, and air pollutants. It’s important to know what things trigger your child's asthma symptoms. Help your child avoid the things that trigger an asthma attack. Your child should keep reliever medicine with him at all times in case he has an asthma attack.
SABAs work fast to relax the muscles of the airways and to keep them from getting too tight. When the airway muscles are more relaxed and less tight, your child will have fewer symptoms and be able to breathe better.
The medicine can be taken in different ways. For example:
If you have any questions, ask your healthcare provider or pharmacist for more information. Be sure to keep all appointments for provider visits or tests.
Cascading Importance: Wolves, Yellowstone, and the World Beyond. A talk with William Ripple. Jonathan Batchelor Winter 2013.
Large Predators and Ecological Health WAMC Northeast Public Radio August 23.
Top Predators Protect Forests The Wildlife Professional Summer 2012.
Cougars Encourage Lizards in Zion Year of the Lizard News July 2012.
Predators and Plants Science Update April 26.
Herbivores take toll on ecosystem The Register Guard April 10.
Loss of predators affecting ecosystem health OSU Press Release April 9.
Wolves to the Rescue Defenders of Wildlife Defenders Magazine Winter 2012.
Wolves help Yellowstone, researchers say Local 10, CNN January 5, 2012.
How Wolves Are Saving Trees in Yellowstone Good Environment January 4, 2012.
Study says that with more wolves and fewer elk, trees rebounding in portions of Yellowstone The Washington Post January 3, 2012.
Yellowstone transformed 15 years after the return of wolves OSU Press Release Dec 21, 2011.
Lopped Off Science News November 2011.
The Crucial Role of Predators: A New Perspective on Ecology Yale Environment 360 September 15, 2011.
For Want of a Wolf, the Lynx Was Lost? Science Magazine September 9, 2011.
Red wolf comeback in N.C. helps other animals thrive The Charlotte Observer August 13, 2011.
The case for large predators The Oregonian July 23, 2011.
Study tracks effects of declining predator numbers The Register-Guard July 17, 2011.
Loss of top predators causes chaos, including fires and disease The Vancouver Sun July 15, 2011.
Loss of large predators disrupting multiple plant, animal and human ecosystems OSU Press Release July 14, 2011.
Loss of Top Predators Has Far-Reaching Effects PBS Newshour July 14, 2011.
Oregon State researchers: Predators Important To Ecosystems OPB Earthfix July 14, 2011.
Using Wolves and Other Predators to Restore Western Ecosystems Eugene Natural History Society November 2010.
Sharks and Wolves: Predator, Prey Interactions Similar on Land and in Oceans US News Nov. 15, 2010.
New Theory for Megafaunal Extinction American Archaeology Fall 2010.
New theory on what killed off the woolly mammoths Science Fair, USA Today July 2, 2010.
Study probes role of key predators in ecosystem disruption Corvallis Gazette-Times July 1, 2010.
Ripple Marks: The Story Behind the Story Oceanography June, 2010.
Destination Science 2010: The reintroduction of wolves has helped bring a severely damaged ecosystem back from the brink Discover Magazine April, 2010.
Mess O' Predators The Discovery Files January 20, 2010.
Top predators' decline disrupts ecosystems, says study The Epoch Times October 14-20, 2009.
Ripple receives Spirit of Defenders Award for Science The Barometer October 7, 2009.
Wolves, jaguars are out, coyotes, foxes are in: New global study The Arizona Daily Star - Blogging in the desert October 2, 2009.
Decline in big predators wreaking havoc on ecosystems, OSU researchers say The Oregonian October 1, 2009.
Where Tasty Morsels Fear to Tread The New York Times: The Wild Side September 29, 2009.
Wolves to the Rescue in Scotland ScienceNOW Daily News (Science Magazine) July 22, 2009.
Can wolves restore an ecosystem? Seattle Times January 25, 2009.
Wolf Loss and Ecosystem Disruption at Olympic National Park Island Geoscience Fall 2008.
The Silence of the Wild William Stolzenburg essay, Powell's Books 2008.
Century without the wolf The Oregonian July 30, 2008.
Monitoring cougar in Yosemite Valley Difficult San Mateo County Times June 22, 2008.
Lack of predators harms wild lands San Mateo County Times June 21, 2008.
Cougar decline results in critical changes to Yosemite ecosystem Land Letter - E&E Publishing Service May 8, 2008.
Yosemite: Protected but Not Preserved. Science Magazine May 2, 2008.
How humans, vanishing cougars changed Yosemite San Francisco Chronicle May 2, 2008.
Wolves and Elk Shape Aspen Forests CurrentResults.com 2007.
Return of the Wolves. Weekly Reader December 2007.
Oregon State is No. 1 in conservation biology. The Oregonian via OregonLive.com September 6, 2007.
Yellowstone's Wolves Save Its Aspens. The New York Times August 5, 2007.
Presence Of Wolves Allows Aspen Recovery In Yellowstone. Science Daily (ScienceDaily.com) July 31, 2007.
Aspens Return to Yellowstone, With Help From Some Wolves. www.sciencemag.org July 27, 2007.
Yellowstone trees get help from wolves. MSNBC.com July 27, 2007.
It All Falls Down: A plummeting cougar population alters the ecosystem at Zion National Park. Smithsonian Magazine/Smithsonian.com December, 2006.
Cougar Predation Key To Ecosystem Health. ScienceDaily.com / University of Toronto October 25, 2006.
The Ecology of Fear. emagazine.com March 2006.
Hunting Habits of Yellowstone Wolves Change Ecological Balance in Park. The New York Times Oct. 18, 2005.
Episode 3 "Predators", Strange Days on Planet Earth. National Geographic April 2005.
Ecological changes linked to wolves. The Seattle Times Jan. 12, 2005.
Mystery in Yellowstone: wolves, wapiti, and the case of the disappearing aspen. Notable Notes, Oregon State University 2004.
A Top Predator Roars Back. On Earth Summer 2004.
Research Shows Wolves Play Key Role in Ecosystems. ABC News Dec. 15, 2004.
Who's Afraid of the Big Bad Wolf? The Yellowstone Wolves Controversy. Journal of Young Investigators Nov. 2004.
Lessons from the Wolf. Scientific American Jun. 2004.
Wolves linked to vegetation improvements. Wyoming Tribune-Eagle Mar. 18, 2004.
Endangered Wolves Make a Comeback. National Public Radio Feb. 20, 2004.
Wolves' Leftovers Are Yellowstone's Gain, Study Says. National Geographic News Dec. 4, 2003.
Wolves enhance biodiversity in Yellowstone, report says. Oregonian Oct. 29, 2003.
Wolves linked to tree recovery. Billings Gazette Oct 29, 2003.
A top dog takes over. National Wildlife Federation Oct./Nov. 2003.
OSU student maps L&C wildlife observations. Corvallis Gazette-Times Mar. 28, 2003.
Aspens wither without wolves. Herald and News Nov. 19, 2000.
Observatory: Fates of wolf and aspen. New York Times Sep. 26, 2000.
Quiet Decline: Fewer wolves and wildfires may have led to aspen's decline. ABC News Sep. 21, 2000.
Superstorm Sandy dramatically affected the East Coast. In fact, with an estimated $50 billion in damage and economic losses, Sandy could go down as the second-costliest hurricane in U.S. history. This infographic from InsuranceQuotes.com shows some of the financial horrors of hurricanes.
But are Americans prepared for the financial, physical, emotional and mental toll of another Sandy? A survey commissioned by the National Geographic Channel shows they aren’t. While 58 percent of Americans think the U.S. will be hit by another “significant” hurricane, 56 percent say they’re not prepared for a major disaster like a hurricane. Fortunately, 61 percent of Americans say Sandy has made them think more about being ready for a potential disaster.
“Americans should prepare now more than ever before, as a major disaster, manmade or natural, can hit at any time,” says David Kobler, a consultant for the National Geographic Channel.
Open to district residents only. Limit of 12 people, age 16 and up.
This class is for the true beginner. You'll receive an introduction to the fundamental rudiments of general music, guitar music and guitar tablature. Learn to play by practicing finger exercises and note reading. Also covered are the three most common "open" major chords and their progression, fingerings, strum patterns, guitar assessment, construction, preventative and periodic maintenance, strings and changing them, guitar technique and tuning. Students should bring a guitar to class (acoustic guitars preferred; however, only very small amplifiers, 5 - 10 watts maximum, if bringing electric).
Monday, 04 March, 2013
Conservative Dictionary Project (R)
- True conservative meaning - Disparaging people or a group of people because of the color of their skin, such as when liberals claim minorities cannot succeed without affirmative action
- False liberal redefinition - The idea that society is the reason minorities cannot get ahead
- True conservative meaning - A reminder to God and Man of God's promise to never flood the entire world again.
- False liberal redefinition - A symbol used by the homosexuals to promote their agenda; alternatively, a product of refraction with no spiritual significance
- True conservative meaning - A liberal government system designed to bypass the free market and distribute goods and services based on bureaucratic whims.
- False liberal redefinition - A way of sharing limited resources.
- Republican form of government
- True conservative meaning - a form of government in which the powers of sovereignty are vested in the people and are exercised by the people, either directly or through representatives chosen by the people, and in which individuals retain sovereign prerogative over their person, labor, and property
- False liberal redefinition - mobocracy
- True conservative meaning - a just claim or title, whether legal, prescriptive, or moral
- False liberal redefinition - privileges extended by government at its own pleasure, subject to revocation for the alleged collective good
- Total Population: 113,724,226 (2011)
- Life Expectancy: 76 years (2011)
- Per capita income (ppp): $13,800 US (2010)
- Mestizo 60%; Amerindian 30%; Caucasian 9%; other 1%
- Major export products: manufactured goods, oil and oil products, silver, fruits, vegetables, coffee, cotton
- Monetary unit: 1 USD ~ 11 pesos
There is a popular saying in Mexico, “So far from God and so close to the United States.” Thanks to U.S. advice and friendly pressure, Mexico's "economic restructuring" has resulted in a classic economic portrait of our times. At the same time that it has benefited the financial elite, it has squeezed a once thriving middle class and has had a devastating impact on Mexico's poor.
Since the mid-1980s, when Mexico suffered an economic crisis due to the petroleum price crisis (very similar to the current situation precipitated by the coffee price crisis), international lenders have been pushing Mexico towards neo-liberal politics, free-trade practices and economic austerity measures. Under former president and Harvard alumnus Carlos Salinas de Gortari these pressures became institutionalized practices in Mexico's move towards "modernized" economic policies, culminating in NAFTA.
But from the moment of its signing, the treaty that was campaigned as Mexico's "golden key" to transforming itself into a "First World" nation has been riddled with conflict and contradictions. The most notorious, of course, was the Zapatista Uprising.
The first of January 1994, after ten years of organizing, the Zapatista insurrection shook Mexico out of its stupor and the world out of its enchantment with free trade. The date of their public debut was chosen to coincide precisely with the enactment of NAFTA and Mexico's "Entrance into the First World". Likewise, the Zapatista declarations identified neo-liberal politics as the main target and source of their extreme poverty and marginalization. "Entrance into the First World?" the Indigenous people questioned, "Entrance for whom?"
In 1528 Renaissance era "free trade" enthusiasts conquered Chiapas in the search for easy profits. Despite famed human rights defender Fray Bartolomé de Las Casas's influence as bishop of the diocese of Ciudad Real, not even the Indians branded with the word "free" on their arms could escape being made slaves. Four hundred and sixty years later free trade proponents in Mexico tried to brainwash Chiapan Indians with promises of a better life.
But the indigenous peoples' historical memory is strong… and the impact of the 1989 liberalization of the coffee market has only confirmed their fears.
In their overwhelming majority, the modern-day Zapatistas are descendants of the Mayan people who first resisted the European colonization some 500 years prior. In the 10 years of the Zapatista military and political campaigns since the 1994 uprising, they have undertaken a mostly peaceful and inspiring struggle to defend their culture and fundamental rights, and to construct practical mechanisms for change.
"I can tell you, we are very tired of false promises. When we rose up in arms, we declared to the nation and to the world the political, social, economic and cultural causes of our struggle," explained Commandante David of the Indigenous Clandestine Committee. "We rose up in arms because our people are dying of hunger. We don't want more promises; we want to see action."
Chiapas is one of the most marginalized states in all of Mexico, infamous for being one of the states richest in natural resources, yet with one of the poorest populations in all of Mexico. The state is characterized by having one of the highest rural populations, the least developed health infrastructure, the lowest levels of income and education, and the highest malnutrition rates in Mexico. In addition, its inhabitants have one of the lowest life expectancy rates (67 years) and the highest infant mortality rates in the nation (averaging 55 per 1,000 in the state, but with considerably higher rates in Indigenous communities).
The Zapatista movement has provoked creative initiatives to transform the entrenched politics of exploitation in Chiapas and at large. The creation of “Autonomous Zones” – replete with parallel governments, independent schools, health clinics and economic projects – is one example.
Producer cooperatives are common in Chiapas, and coffee is one of the main products farmers can organize around. Chiapas is the largest coffee producer in Mexico and considered the largest organic coffee producer in the world. The first organic farm to be certified in the world is in Chiapas. But it is the cooperative style coming out of the Zapatista movement and the concept of Indigenous Autonomy that has captured our imagination.
Mitchell Springs: Water under the rock
Photo Courtesy of June Head
Mitchell Springs may be referred to as the first town site for the City of Cortez. The first in this series on Mitchell Springs in 1882 may be found in the January issue of Looking Back published by the Cortez Journal.
What prompted the Mitchell family to establish a trading post? W.L. Glenn, the Veterans Service Officer in Cortez did some research in the county records on Mitchell Springs and this is his report: "The springs are found under a rock - it was a large pit 8-by-10 foot deep where water collected. The springs had good drinking water but flow was not great. Across the creek west another spring was flowing." The original springs known as Mitchell Springs was located on the east side of the creek and was fenced off by a cedar pole fence. It was accessible by going down into the creek not too far off of County Road H and then up the creek to the springs. It was a favorite picnic area for the young people when they rode their horses in the spring's area and then let themselves down over the rock ledge by rope and saddle.
As previously mentioned, the town of Toltec was probably deserted about 1888 when the post office moved to the new town located on that "hill that no one wanted." Water was scarce in early Cortez. It was brought up from Mitchell Springs by wagon and tank. One source said it sold for 25 cents a bucket - another source said 50 cents a barrel. Many of the farmers and ranchers south of town hauled their water from Mitchell Springs and it was said they paid $1 per barrel for the water. The people in the new town and the surrounding area needed this water to survive.
At a local history seminar held in 1978, Nettie Ince Talcott Woodard mentioned she came to Cortez as a baby in 1909. This was her description of Mitchell Springs. People would go down to the springs if they were lucky enough to have a horse and a barrel to bring it up, and if not, you carried it from Market and Main Streets. They had a cistern there and there were men who would take big tanks and go down to Mitchell Springs and bring up water and put it in the cistern - then the townspeople would go up there and get a bucket of water. She said there was no charge for this water. She didn't know who the men were who hauled the water for the town. She said later the town got water by a three-mile-long wooden flume that went to a ditch that was near the old City Park on North Market. After the ditch came in almost everyone then had a cistern. They put up a flagpole at Main and Market where the original cistern was located and during World War I, everybody would go up and gather around the flagpole and sing songs and have a good time. People living in the new town of Cortez probably didn't do much washing of clothing. One of the first early day residents of Cortez said they wanted to have some trees planted - so they took their dishwater and wash water and watered the trees. Another later said they wanted trees so badly they planted them in buckets and put them out in front of the houses to grow in hopes that someday they would have water to use for trees, etc.
I mentioned families in that area that depended on the springs for their use. In 1915, two of the sons of Frank Greenlee were using burros to pull little sleds with wooden water barrels to haul the water. Many other families living in this area depended on the springs for their water. When the Irrigation Company (MVIC) came in, the ditches were used to fill cisterns to store the water.
In 1939 Charlie Blackmer obtained the 160 acres west of Mitchell Springs plus another 160 acres at the present site of Mitchell Springs. His sons, Joe, Frank and Fred said the springs on both sides of the canyon were used. It was their understanding the springs where the water was hauled from to Cortez was on the west side of the creek. People living in the valley used to come to the east side to get water. On this side, initials had been carved into the rock. There were roads on both sides of the wash and it was called the "cross-over road." There might still be evidence of the old wagon road in the rocks. In the area of the springs on the east side, the Blackmer boys found a lot of old army brass. The army probably camped at the site when the Mitchells had the trading post. It was reported the Mitchells were going to build a flour mill on this side where the road turns into the "Best Logs" as there was evidence of an old foundation there for years. Perhaps the water on this side caused them not to continue with the building. It has been mentioned there was a total of five springs in the area. Residents of that area mention "Poison Springs" where old car bodies were placed in the springs to prevent livestock from drinking the water. It is located on the lower southern edge of McElmo Wash just off of Road H. In past years this was the site of car accidents - some fatal - when they "didn't make the curve" on this road.
When I visited Mitchell Springs in 2000, the springs on both sides of the canyon were still flowing.
In the future, Montezuma County Historical Society would like to mark the location of the "first town site" by a sign and parking area. It is doubtful that the springs would be open to the public because of their location.
June Head is the historian for the Montezuma County Historical Society, and can be contacted for comments, corrections or questions at 565-3880. All interested persons are invited to join the Historical Society.
Tips to Facilitate Workshops Effectively
Facilitators play a very important role in the creation of a respectful, positive learning environment during a workshop. Here you will find some tips to facilitate workshops effectively.
- Make sure everybody has a chance to participate. For example, through small group activities or direct questions to different participants. Help the group to avoid long discussions between two people who may isolate the rest of the participants. Promote the importance of sharing the space and listening to different voices and opinions.
- Be prepared to make adjustments to the agenda – sometimes you have to cross out activities, but the most important thing is to achieve the general goals of the workshop.
- Do everything possible to have all the logistics ready beforehand so you can then focus on the workshop’s agenda.
- Pay attention to the group’s energy and motivation – Plan activities where everyone is able to participate and to stay active and engaged.
- Provide space for the participants to be able to share their own experiences and knowledge. Remember that each one of us has a lot to learn and a lot to teach.
- Relax and have fun! Be a part of the process – You are learning, too, so you don’t have to know it all or do everything perfectly.
- Be prepared for difficult questions. Familiarize yourself with the topic and know the content of the workshop, but remember you don’t have to know all the answers! You can ask other participants what they know about the topic, or you can find out the answers later and share them with the participants after the workshop.
- Focus on giving general information – Avoid answering questions about specific cases. Usually, this can change the direction of the conversation and might be considered as providing legal advice without a license to do so.
- Your work as facilitator is to help the group learn together, not necessarily to present all the information and be the “expert” in the topic.
- Try to be as clear as possible – especially when you are giving the exercises’ instructions. Work as a team with the other facilitators during the whole workshop.
Have a great time teaching your little ones about letters using our printable file folder alphabet games! This file folder game is all about the letter A and alligators. Put all the pieces in place to build the alligator. And if you laminate the file folder game there is a practice line at the bottom that can be wiped clean if you use erasable or washable markers and/or crayons.
Studies have shown that repetition is key for early learners, so what's better than having a game that can be played over and over? If you get all of our printable alphabet file folder games you'll have a stash of fun learning tools not only for beginners, but to help preschoolers and kindergartners build a solid base of alphabet identification and sounds!
All of our products are for home, church, or small classroom use only.
Check out our website for more great products! http://greenjellowithcarrots.com or email us if you have any questions [email protected]
We breathe printables! Whether it's for church, home, preschool, or for fun we have tons and tons and tons AND TONS of printables! We have file folder games, card games, early learning worksheets and games, LDS clipart, digital scrapbook kits, Primary talks, YW handouts, and more!
Advanced Operating Systems
Advanced Systems Teaching (ASysT) Lab
The ASysT lab is organised around 64-bit computers ("U4600") based on a 100 MHz MIPS R4700 processor. The U4600 was developed by Kevin Elphinstone (former UNSW PhD student) and Dave Johnson. It is especially designed to allow experimentation with operating systems code. Presently, these machines run the L4 microkernel.
Technical details:
The nodes are based on a locally designed and manufactured ATX form factor motherboard. The motherboard features:
The nodes are hosted on UNIX computers, presently PCs running Solaris. These contain a development environment which allows you to compile code, link it with the L4 microkernel, and download it to the U4600 via ethernet. The hosts also interface to the serial port on the U4600 for console I/O.
How to use:
Make sure that ~cs9242/bin is in your PATH, and that the environment variable ARCH is set to pc.i86.linux. The former is normally achieved by using the newclass command, the latter is set up automatically by the default shell initialisation files. Also make sure that you are using GNU make (this is also ensured by the default initialisation files).
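For reference, a minimal sketch of that environment check, typed at a shell prompt on the host. The path, the ARCH value, and the GNU make requirement come from this section; the exact commands below are illustrative assumptions, since the class initialisation files normally do all of this for you:

    # Sketch only: confirm the cs9242 build environment on the host.
    export PATH=~cs9242/bin:"$PATH"    # make sure ~cs9242/bin is on the search path
    export ARCH=pc.i86.linux           # architecture the build system expects
    make --version                     # the supplied Makefiles require GNU make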
Use the Makefiles supplied with the examples (like the one in ~cs9242/public_html/src/hello_world/). Typing make in the example directory builds a boot image that can be downloaded to the U4600.
Note: make must be done locally on the host machine connected to your U4600, or the boot image will end up in the wrong boot directory.
You also need to run a terminal emulator on the host to be able to talk to the U4600. Use the command
Turn on the U4600; you should get another greeting message and a prompt.
If you do not get the prompt, some program is actually executing on the U4600. Press the INTERRUPT button (the smaller of the two buttons on the box) for about 1/2 second, and you should get a "KDBG>" prompt. Type
If you type
Note that you are running an operating system, which is not expected to terminate. To stop execution, hit the INTERRUPT key (reset button) on the U4600. This will get you into the L4 kernel debugger, which is unlikely to be of much help for you (but if you want to try, it is documented in Appendix C of the L4 Reference Manual). However, you can then type
Alternatively you can hit Control-D when in the kernel debugger. This will exit the debugger and resume the execution of the interrupted code.
A failed assertion (assert(0)) has the same effect as hitting the interrupt key. Alternatively, you can send a BREAK from the terminal emulator (Misc->Break).
If you INTERRUPT the U4600 but it doesn't get into the kernel debugger, you may have found a bug in our L4 microkernel. Please report it to us.
Last modified: 27 Jul 2005.
Most cuckoo clocks today are made in the “traditional style” to hang on a wall in your home or office. In the long history of clock making and time keeping, cuckoo clocks play a large role in the appreciation of art in clocks. The traditional style of the cuckoo clock is a wooden case decorated with carved leaves and animals and an automation of a bird that appears through a small door while the clock is striking. A cuckoo clock is typically pendulum driven, striking the hour and half hour, using bellows and pipes that imitate the cuckoo call. Today’s cuckoo clocks are almost always driven by weights. The weights are made of cast iron in a pine cone shape.
As early as 1650, the call of the cuckoo bird in a clock was being heard in parts of East Germany and a region of the Czech Republic. It took nearly a century for the cuckoo clock to find its way to the Black Forest. The Black Forest cuckoo clock, as we know it, comes from the region in southwest Germany, where a tradition of clock making started late in the 17th century. The cuckoo clock is a favorite souvenir of travelers in Germany, where there are several different firms making the whole clock or parts of it. The people who make cuckoo clocks are dedicated craftsmen whose products are works of art. Black Forest cuckoo clocks and German cuckoo clocks command big prices and are highly sought after in antique stores, flea markets and retail shops. They are valuable because of their elaborate hand carvings and unique artistry.
Tomorrow, we’ll look at “Striking the Hour”
A paper published in the journal 'Blood' entitled "HIV-1 incorporates ABO histo-blood group antigens that sensitise virions to complement-mediated inactivation" suggests that transmission of HIV-1 is modified by both ABO blood group and the immune system enzyme complement. The premise is based on research showing how the ABO antigen (blood group marker) of the infected person is incorporated into the HIV virus that is replicated in their cells. Because the virus is coated in the person's blood group antigen, it then acts in the same way a red blood cell would when someone with an incompatible blood group becomes exposed to it, and the part of the immune system that would normally cause an incompatible blood transfusion reaction is activated against the virus, helping to protect the recipient against infection. This would mean that it would be harder for an individual of blood group O (the 'universal donor') to contract HIV infection from people of any other blood group apart from blood group O, as the recipient will have both anti-A and anti-B antibodies in their blood. Conversely those with blood group AB (the 'universal recipient') who have no opposing blood group antibodies would contract HIV infection more easily from people of any blood group.
This paper follows previous research on how complement is activated by anti-B IgM (the immune complex involved in incompatible transfusion reactions where the donor is blood group B or AB and the recipient is blood group A or O) and other factors, in blood from HIV-negative donors. In the research by Saarloos et al., however, complement was more easily activated against HIV by antibodies to HIV itself as a result of HIV infection than by IgM.
Later research suggests that the immune system of some people with AIDS (PWA) who are blood group A or AB may form anti-A IgA, IgG and IgM (antibodies against their own blood group).
The HIV virus made in cells of an HIV-infected person will show their blood group antigen only when the originating cell expresses ABO antigens or is a lymphocyte (white blood cell). As ABH non-secretors have fewer cells expressing their blood group, it follows that they may produce more HIV viruses without blood group antigens than would ABH secretors. This could mean that it is as easy to become infected with HIV-1 from non-secretors of any blood group as it is from secretors of transfusion-compatible blood groups.
ABH non-secretors would be at some disadvantage in protection against HIV infection transmitted via mucous membranes, as they secrete lower levels of immune-protective substances.
HIV positive individuals and PWA should always take steps to avoid transmission of the HIV virus, whatever their blood group or secretor status. Neil and colleagues have however demonstrated a key concept in the relationship between blood groups and immunity, which is mirrored in numerous other blood group-disease connections. It also gives new meaning to the idea of universality in terms of blood group transfusion with relation to infection susceptibility.
1. Neil SJ, McKnight A, Gustafsson K, Weiss RA
HIV-1 incorporates ABO histo-blood group antigens that sensitise virions to complement-mediated inactivation.
2. Saarloos MN, Lint TF, Spear GT
Efficacy of HIV-specific and 'antibody-independent' mechanisms for complement activation by HIV-infected cells.
Clin Exp Immunol. 1995 Feb;99(2):189-95.
3. Friedli F, Rieben R, Wegmuller E, et al.
Normal levels of allo- but increased levels of potentially autoreactive antibodies against ABO histo-blood group antigens in AIDS patients.
Clin Immunol Immunopathol. 1996 Jul;80(1):96-100.
4. D'Adamo PJ.
Eat Right 4 Your Type Complete Blood Type Encyclopedia. p.320.
Pub. Penguin, 2002.
Solar power is taking off around the world. Europe is planning to deploy various types of solar power to the Sahara to provide for the European Union's energy needs. Meanwhile, here in the U.S., California is expanding its solar efforts as well.
However, amid the growing adoption of solar technology, one criticism persists: solar power is inefficient and expensive. To some extent this is true. The current generation of photovoltaic solar panels -- the type of solar power perhaps most associated with the field -- is only around 20 percent efficient, and thus costs remain relatively high, like many forms of alternative energy.
A new breakthrough from U.S. Department of Energy's National Renewable Energy Laboratory (NREL) is looking to solve those problems. It pushes solar cells into uncharted territory with a record 40.8 percent efficiency. The new work shatters all previous records for photovoltaic device efficiencies.
The researchers first used a special type of cell, an inverted metamorphic triple-junction solar cell. The custom cell was designed, fabricated, and independently measured at NREL. The next step was to expose the solar cell to concentrated light of 326 suns, yielding the record-breaking efficiency. A sun is a common measure in the solar power industry which represents the amount of light that hits the Earth on average.
The new cell targets a variety of markets. One potential market is the satellite solar panel business. Satellites naturally absorb more intense sunlight, thanks to no atmospheric interference. Another possible application is deployment in commercial concentrated PV systems. Concentrated PV is a burgeoning field, with several companies currently contracted worldwide to build the first utility grade plants.
The new record was welcome news, but little surprise at NREL -- they held the previous record as well. In order to beat their old design, one key was to replace the germanium wafer at the bottom junction with a composite of gallium indium phosphide and gallium indium arsenide. The mixture splits the spectrum into three parts, each of which gets absorbed by one of the junctions. Both the middle and bottom junction become metamorphic in the new design. This means their crystal lattices are misaligned, trapping light in the junction and absorbing more of it. This yields an optimal efficiency.
One key advantage is the new solar cell can be conveniently processed by growth on a gallium arsenide wafer. It is also both thin and light. The NREL believes this cell will be cheaper than current commercial models, while delivering far more power.
Some of the credit for the work goes to NREL's Mark Wanlass, who invented the cell's predecessor. The new cell was redesigned by a team led by John Geisz.
NREL is operated for the DOE by Midwest Research Institute and Battelle.
Some researchers believe that the solar cycle influences global climate changes. They attribute recent warming trends to cyclic variation. Skeptics, though, argue that there's little hard evidence of a solar hand in recent climate changes.
Now, a new research report from a surprising source may help to lay this skepticism to rest. A study from NASA’s Goddard Space Flight Center in Greenbelt, Maryland looking at climate data over the past century has concluded that solar variation has made a significant impact on the Earth's climate. The report concludes that evidence for climate changes based on solar radiation can be traced back as far as the Industrial Revolution.
Past research has shown that the sun goes through eleven year cycles. At the cycle's peak, solar activity occurring near sunspots is particularly intense, basking the Earth in solar heat. According to Robert Cahalan, a climatologist at the Goddard Space Flight Center, "Right now, we are in between major ice ages, in a period that has been called the Holocene."
Thomas Woods, solar scientist at the University of Colorado in Boulder concludes, "The fluctuations in the solar cycle impacts Earth's global temperature by about 0.1 degree Celsius, slightly hotter during solar maximum and cooler during solar minimum. The sun is currently at its minimum, and the next solar maximum is expected in 2012."
According to the study, during periods of solar quiet, 1,361 watts per square meter of solar energy reaches Earth's outermost atmosphere. Periods of more intense activity brought 1.4 watts per square meter (0.1 percent) more energy.
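As a quick arithmetic check of those figures, 1.4 W/m² divided by 1,361 W/m² is about 0.001; in other words, incoming solar energy at the top of the atmosphere rises by roughly one part in a thousand between the quiet and active phases of the cycle, matching the 0.1 percent quoted above.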
While the NASA study acknowledged the sun's influence on warming and cooling patterns, it then went badly off the tracks. Ignoring its own evidence, it returned to an argument that man had replaced the sun as the cause of current warming patterns. Like many studies, this conclusion was based less on hard data and more on questionable correlations and inaccurate modeling techniques.
The incontrovertible fact here is that even NASA's own study acknowledges that solar variation has caused climate change in the past. And even the study's members, mostly ardent supporters of AGW theory, acknowledge that the sun may play a significant role in future climate changes.
It is very common to confuse cloud computing with virtualization. Since they are both relatively new, and since organizations are calling them the saving grace of new-age technology, I thought we might want to look into what exactly the two technologies are and how they differ from each other.
The cloud is essentially a highly scalable platform where you can store data and build and run applications that are accessed through the internet. It is a way to mobilize applications so that you can remotely access your organization's data from any device with internet access. Data center hosts or colocation hosts that have taken an interest in cloud technology provide software-as-a-service packages to their clients. The cloud makes it possible to keep your servers in a secure environment in any part of the world while your clients can still access and modify the data, provided they have the required security clearance. The cloud makes use of virtualized resources to fulfill its requirements; a cloud host provides hardware and hosting facilities depending on the usage requested by the client.
Virtualization, on the other hand, is a technique of creating a virtual pool of servers, operating systems, storage devices and network resources. It enables a single user to access multiple physical devices at the same time. With this, one operating system can control the operation of multiple computers or vice versa.
Building your own data center takes a lot of capital investment, and maintaining it is a nightmare you do not want to go through if your main aim is to focus on your business. Hiring a service is a better option. Unlike the cloud, in a data center you have to note that you will merely be storing your servers on someone else's property. So you are responsible for upgrading your servers as and when technology takes a giant leap. The drawback with data centers is the challenge you will face while scaling up as and when the need arises. Your data center host must have rack space to accommodate an extra server or two and must also be equipped to handle an increase in cooling and power needs. Of course, there is the problem of your resources going into standby mode when not in use, too. The cloud may be an ideal solution from an economic point of view. As we have mentioned before, you only pay for the services you are using, not for idle or standby services.
Virtualization is all about control - pure, unparalleled control over multiple devices from a single point of operation. With virtualization, for instance, you can run a very large application even though your system individually cannot support it. In other words, your system interacts with the other systems connected to the virtualization network, notes which system is available, and uses part of the available system's resources in addition to your own to run your application. It's as if your system has temporarily expanded its capacity to run your application successfully.
Through virtualization, you can install software only once and rest assured that everyone will have access to it. You don't need multiple licences to make the software available to all your employees. Since you are technically installing it only on one system, you are not violating any laws either. The same is true of storage: this technique avoids the need for data replication, thus saving storage space.
So you see, one technology has nothing to do with the other; and they, most certainly, are not the same thing. Virtualization, to an extent, makes the cloud operable.
Data Center Talk updates its resources every day. Visit us to learn about the latest technology and standards from the data center world.
Please leave your views and comments on DCT Forum.
Planning the search
Consider the main focus of your research, and the specific ideas that you're investigating. It can be helpful to focus your topic by asking yourself the following questions:
- What type of information do I need, and what don't I need?
- Do I want current or historical information?
- Am I interested in a particular geographic region?
- How much information will be enough?
Your search terms are derived from the key concepts and they need to be selected carefully to ensure a successful search. Create search terms that are:
- relevant to your topic
- broad enough to include other related materials OR
- narrow enough to find really specific and specialised information
Need more help?
- Check out the Smart Searcher module on Analysing a Topic
Potential buyers look over a home in Las Vegas, where one in five people moved locally during the recession. / Steve Marcus for USA TODAY
The Great Recession has upended the American tradition of moving to greener pastures. Instead of moving to a bigger home or for a higher-paying job, more Americans moved because they can't afford to stay where they are.
During the 2007-09 recession, 9% of Americans - about 4 million - moved locally, the highest level in a decade. And a growing number moved to cheaper housing or doubled up with family and friends, according to an analysis out Wednesday of Census data through 2010.
By contrast, moves across county and state lines declined.
People moved the most in metropolitan areas with the highest unemployment and highest foreclosure rates, particularly in hard-hit parts of the Sun Belt, shows research by US2010, a project funded by the Russell Sage Foundation and Brown University that examines changes in American society.
In Las Vegas, for example, one in five people moved locally during the recession.
"Typically, over the last couple of decades, when Americans moved, they moved to improve their lives," says Michael Stoll, author of the research and chairman of UCLA's public policy department. "This is the shock: For the first time, Americans are moving for downward economic mobility. Either they lost their house or can't afford where they're renting currently or needed to save money.
At the peak of the recession, falling fortunes was a prime reason for moving. Before the economy tanked, 41.3% moved locally to own a home or settle in a better neighborhood. During the recession, only 30.4% moved for those reasons. By contrast, more than 23% moved for cheaper housing during the recession, up from 20.8% before the bad times.
The theme song of The Jeffersons, a hit TV show in the 1970s about a black family that makes it big in the dry cleaning business and moves from Queens to Manhattan's East Side, is aptly titled Movin' On Up.
Yet this economic downturn has taken a disproportionate toll on the upward mobility of African Americans, the research shows.
"Most distressing is the evidence that black residents have been particularly affected by this trend - more likely to be pushed into a short-distance move by these economic conditions," Stoll says. "Blacks may have had less savings, fewer family members who could contribute, onerous debt from refinancing or subprime mortgages or greater expenses."
Unemployment and foreclosure rates are higher among blacks than whites.
"It's going to be years before (blacks) return to their pre-recession trajectory," says Roderick Harrison, a demographer at Howard University in Washington, D.C. "It's going to take a job, probably several months or years working at that job to maybe return to their own apartment or to get back into home ownership."
Between these Birmingham chapters came what may be the signature moment of the civil rights movement, the March on Washington for Jobs and Freedom on Aug. 28, 1963.
"By special train, plane, buses by the thousand, private automobiles and even in some cases on foot, the marchers poured into the capital," an AP story reported. An estimated 250,000 people, mostly black but many white, met at the foot of the Lincoln Memorial to hear King pronounce, "I have a dream..."
Civil rights advances of 1963 spilled into a broader sense of possibilities.
Many people had long hoped for relief from the specter of atomic war — what Kennedy called the "darkening prospect of mass destruction on earth" as he announced the Limited Nuclear Test Ban Treaty in July.
"Yesterday a shaft of light cut into the darkness," he said. "For the first time, an agreement has been reached on bringing the forces of nuclear destruction under control."
For years, people had staged "ban the bomb" street demonstrations — but almost unnoticed in 1963, they were joined by a few early protesters against U.S. involvement in the Vietnam war, where Kennedy had been sending American military "advisers."
In the continuum of popular culture, no single year is definitive. Still, by 1963, record buyers, radio stations, even jukebox operators were embracing a broadening range of entertainment. There was the "Motown sound" of black pop songs — singer-songwriter Smokey Robinson has spoken of "the barriers that we broke down with music" — and audiences would soon embrace the "British invasion."
"The '60s revolution in music and style began somewhere, maybe here," say the liner notes for a just-released Beatles collection, "First Recordings: 50th Anniversary Edition," which received a nomination for this year's Grammy awards.
The music never really went away (the Rolling Stones' recent tour was playfully called "Fifty and Counting"). Spivey's '60s class ends with a sing-along, and Varon at the New School marvels at how many of his students know the old lyrics.
A quieter revolution made 1963 "a lever," in the words of historian Stephanie Coontz, who also teaches '60s courses. In February of that year, writer Betty Friedan published "The Feminine Mystique."
At the time, magazines and TV constantly reinforced a view of the American woman and her assigned place: She would marry, raise children, and not work outside the home, which she would maintain with products and appliances designed to make her middle-class life efficient and ideal.
The trouble, Friedan recognized, was that for many this was not ideal, but suffocating, said Coontz, author of the 2011 book "A Strange Stirring: The Feminine Mystique and American Women at the Dawn of the 1960s."
When she reviews the sexism of those days with her students today — "head and master" laws in many states making wives legally subject to husbands, help-wanted ads seeking "pretty looking, cheerful gal" for office work, and the like — "jaws literally drop," she said.
For middle-class women who read Friedan's book, it was a revelation. They'd been told "they should not want anything more out of life — and were 'sick' when they did. These people Friedan literally rescued," Coontz said in an interview. "People I interviewed said ... they were considering suicide."
The book told them they were not alone and change might come.
Transformative change is a central theme of '60s courses; some even offer '60s-style civic outreach projects as substitutes for traditional research papers. Students learn how Kennedy pushed variations of this message in 1963.
--Source
Theravada means the 'doctrine of the elders'. The term Hinayana has also been used for this form of Buddhism, but it is a misnomer. This term has been used by the Mahayana Buddhists, who reckoned that they were followers of the 'greater vehicle'. The Mahayanists, to differentiate themselves from the Theravadins, called the latter Hinayana, the lesser vehicle. In the pre-Mahayana period there was truly a collateral sect called the Hinayana, but this sect is not the Theravada of today. This confusion was unfortunate, and therefore, it is better to avoid the term Hinayana altogether. Any attempt to label two different forms of Buddhism as 'greater' and 'lesser' is odious.
This is confusing to me. The term "Hinayana" was used before the break-up of the Sangha by Theravadan practitioners? What does it mean that the "Hinayana" of that time was different from the Theravada of today?
I've always thought that the term Hinayana was only pejorative. But I once saw Retro mention that there are appropriate uses for the word. Please play nice. I'm not trying to create divisions or open the door for ridiculing Mahayana. I put this in the "Discovering Theravada" section in hopes of just getting information and clarification.
Ht. 50-70 Spread 50
HABIT: Inconspicuous flowers in spring. Clusters of red berries on female trees in late summer. Fast growing shade tree with open structure, yellow, red, and orange fall color sometimes all at once. Compound leaves with 10-16 paired leaflets. Light, smooth bark when young. Branching structure is poor when young but quickly fills out.
CULTURE: Easy to grow in any well-drained soil, drought tolerant.
USES: Shade tree, fall color.
PROBLEMS: Tip growth sometimes burns in early summer from too much water. The female trees tend to yellow and get weak looking in the late summer as the fruit ripens. A very large percentage of these trees are planted too deep in the ground and have circling and girdling roots. The root flares should be exposed with the Air Spade and the choking roots removed.
NOTES: Incorrectly called Chinese pistache. One of the best fast-growing trees. Native to China but acts like a native Texan. Pistacia texana, the evergreen Texas pistache, is native to South Texas. It has some freeze problems in North Texas and looks more like a big shrub than a tree. Normal height is 15-20 but it can grow to 30 or more. The pistachio that produces the delicious nut is Pistacia vera, a desert plant that can't take much water at all.
This information comes from the Dirt Doctor's newest book, Texas Gardening - The Natural Way. CLICK to purchase.
February 20th, 2013
Birds' Breathtaking System
In a previous issue of Think & Believe (Vol.2, No. 5), we discussed the unique features relating to flight in birds, including positioning and control of feathers, size and structure of bones, and efficiency of the circulatory and digestive systems. In this article we consider the amazing respiratory system, which, according to Dr. Michael Denton “seem(s) to defy plausible evolutionary explanations.” (Evolution: A Theory in Crisis, p. 210)
Most vertebrates draw air into their lungs through a series of branching tubes which finally terminate in tiny air sacs. The air must enter and exit through the same tubes, leaving a certain amount of residual (“dead”) air in the lungs.
Birds have a totally different system, though. Special air sacs extend from the lungs into all major parts of the bird’s body. They do not function directly in gaseous exchange, but serve as “bellows” to maintain a constant flow through tiny air tubes where the exchange actually takes place. These tiny air tubes branch profusely, permeating the lungs, and then join together again. Special valves in the tubes ensure that air flows in only one direction through the lungs, providing the continual supply of fresh air needed for flight.
This is a very amazing system, which poses a serious challenge to evolution. According to Denton:
No lung in any other vertebrate species is known which in any way approaches the avian (bird) system. Moreover, it is identical in all essential details in birds as diverse as humming birds, ostriches and hawks.
Just how such an utterly different respiratory system could have evolved gradually from the standard vertebrate design is fantastically difficult to envisage, especially bearing in mind that the maintenance of respiratory function is absolutely vital to the life of an organism to the extent that the slightest malfunction leads to death within minutes. Just as the feathers cannot function as an organ of flight until the hooks and barbules are co-adapted to fit together perfectly, so the avian lung cannot function as an organ of respiration until the parabronchial system which permeates it and the air sac system which guarantees the parabronchi their air supply are both highly developed and able to function together in a perfectly integrated manner.
...The suspicion inevitably arises that perhaps no functional intermediate exists between the dead-end and continuous through-put types of lung. (Evolution: A Theory in Crisis, pp. 211, 212)
Suspicion indeed! This fantastically complex system defies evolutionary explanations and proclaims the wisdom and power of the ALMIGHTY CREATOR GOD!
By Dave and Mary Jo Nutting
Originally published in the March/April 1995 Think and Believe newsletter.
Please use the Discover Creation search engine - at the top of each page - to look for more detailed articles on things discussed in our "Creation Nuggets."
This Dawn FC (framing camera) image shows some of the undulating terrain in Vesta’s southern hemisphere. This undulating terrain consists of linear, curving hills and depressions, which are most distinct in the right of the image. Many narrow, linear grooves run in various directions across this undulating terrain. There are some small, less than 1 kilometer (0.6 mile) diameter, craters in the bottom of the image. These contain bright material and have bright material surrounding them. There are fewer craters in this image than in images from Vesta’s northern hemisphere; this is because Vesta’s northern hemisphere is generally more cratered than the southern hemisphere.
This image is located in Vesta’s Urbinia quadrangle and the center of the image is 63.0 degrees south latitude, 332.2 degrees east longitude. NASA’s Dawn spacecraft obtained this image with its framing camera on Oct. 25, 2011. This image was taken through the camera’s clear filter. The distance to the surface of Vesta is 700 kilometers (435 miles) and the image has a resolution of about 70 meters (230 feet) per pixel. This image was acquired during the HAMO (high-altitude mapping orbit) phase of the mission.
The Dawn mission to Vesta and Ceres is managed by NASA’s Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, for NASA’s Science Mission Directorate, Washington D.C. UCLA is responsible for overall Dawn mission science. The Dawn framing cameras have been developed and built under the leadership of the Max Planck Institute for Solar System Research, Katlenburg-Lindau, Germany, with significant contributions by DLR German Aerospace Center, Institute of Planetary Research, Berlin, and in coordination with the Institute of Computer and Communication Network Engineering, Braunschweig. The Framing Camera project is funded by the Max Planck Society, DLR, and NASA/JPL.
More information about Dawn is online at http://dawn.jpl.nasa.gov.
Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
Once Again: Java Vulnerability
In light of the recent set of vulnerabilities found within the Java SE 7 browser plugin, I've read stories and heard from people who are completely uninstalling Java from their computers. In my opinion, this is an over-reaction to an issue that affects only one thing: the Java plugin for the browser. This is used only to run Java Applets or Java WebStart to launch applications via the browser. Considering there are three other types of Java applications that are unaffected (Java Embedded applications, Java SE desktop applications, and Java EE web-based or enterprise applications), this is only a small portion of the Java world. On top of that, there honestly aren't many Java applets in use these days, so the need to use the Java plugin is minimal.
To be clear, these specific vulnerabilities don't affect real-world server-side deployments (Java EE), or even Java SE desktop applications such as Eclipse or Netbeans, JavaFX, Swing, and so on. There really is no need to uninstall the JDK or JRE. Users need only disable the Java plugin in their browser.
One point I've been trying to make to friends and colleagues, beginning with the previous rash of vulnerabilities (see my previous blog), is that this is only an issue if the user browses to a malicious web site. Java or no Java, pointing your browser to a malicious web site is dangerous and leaves you vulnerable either way. You could raise the point that even a legitimate site can get hacked, and a Java zero-day attack launched from it. However, I would add that if a site got hacked, you're still open to vulnerability with or without the Java plugin enabled.
Oracle's Java SE 7 update 11, released to address this issue, included a description of the issue and resolution. In summary, the change included a control panel setting to block unsigned Java applets from running automatically. I've heard that only one of the two vulnerabilities discovered this week has really been patched, and I've also just read that an even newer vulnerability has been found. If this is true, it could spark a big change for the Java browser plugin design.
Either way, this doesn't mean that Java is an insecure language or platform, or that web sites built on Java EE are any less secure than other platforms. Unfortunately, perception often beats reality, and Java is getting a big black eye from this one. Hopefully Oracle can do more than just release updates to patch the vulnerabilities. They need to launch a campaign that explains the differences, as well as take steps to stop these vulnerabilities more effectively.
U.S. safety investigators called on Tuesday for a nationwide ban on texting and cellphone use while driving, a prohibition that would include certain applications of hands-free technology becoming more common in new cars.
The U.S. National Transportation Safety Board (NTSB) recommendation covers portable devices only but still goes beyond measures proposed or imposed to date by regulators and states, most of which already ban texting while behind the wheel.
“When it comes to using electronic devices, it may seem like it’s a quick call or a quick text or a tweet, but accidents happen in the blink of an eye,” says NTSB chairman Deborah Hersman.
More than 3,000 people were killed in distracted driving crashes in the United States in 2010, according to Transportation Department figures.
Most motorists participating in a Transportation Department survey released last week acknowledged few situations in which they would not use a cellphone or text while behind the wheel although they supported measures to curb the practice.
The five-member board's recommendation to states for a ban, except in an emergency, stemmed from an investigation of a Missouri chain-reaction crash that killed two people last year, an accident blamed on a driver who was texting.
The panel’s action follows nearly 10 years of investigating transportation accidents linked in some way to distraction and is not binding. But the safety board has long been effective at articulating U.S. transportation safety priorities and its views can be influential in legislative or regulatory decision making.
Congress has shown no interest in banning cellphone use or texting while driving. So far 35 states and the District of Columbia ban texting while driving, but fewer than a dozen prohibit using a cellphone.
The Transportation Department has waged an aggressive public campaign on the issue under Secretary Ray LaHood that has included limited bans.
“There’s no call or text message that’s so important that it can’t wait,” LaHood says.
LaHood has raised concerns about distracted driving and hands-free technology with automobile companies but has not prompted federal action or asked industry to stop putting it into new vehicles.
Cellphones and communication technology are ubiquitous, and sweeping bans such as the one proposed by the NTSB are considered difficult to enforce, experts have said. This is one reason why federal and state restrictions so far have focused on the most obvious distraction — texting — or targeted individual groups, like truckers or federal workers.
The auto industry has invested heavily in hands-free communications technology, such as Bluetooth, that is now available in most 2012 models sold in the United States as standard or optional equipment.
“It actually is a big decision maker for some consumers,” says Jesse Toprak, a vice-president of TrueCar.com, who notes that Ford, in particular, has been aggressive in using it to attract younger buyers who may not otherwise have considered one of their cars.
Ford referred inquiries to the industry’s trade group in Washington, the Alliance of Automobile Manufacturers (AAM), which said it was reviewing the NTSB recommendation.
“What we do know is that digital technology has created a connected culture in the United States and it’s forever changed our society. Consumers always expect to have access to technology, so managing technology is the solution. Features that are integrated into the vehicle, and are designed by automakers are engineered to be used in the driving environment. That means it’s designed to be used in a way that helps drivers keep their eyes on the road and hands on the wheel,” the AAM says.
It further says texting while driving is "incompatible with safety."
The following was written by Richard Dawkins to his daughter. While it is aimed more at the discussion between religion and science, it contains much useful insight into how we look at and evaluate health information.
I would add one more comment: blind trust of authorities who sound as though they come armed with solid evidence is no better. We really must learn not only how to evaluate evidence but also those who deliver it.
PS, I have no comment in regard to his view/comments on religions or mine.
To my dearest daughter,
Now that you are ten, I want to write to you about something that is important to me. Have you ever wondered how we know the things that we know? How do we know, for instance, that the stars, which look like tiny pinpricks in the sky, are really huge balls of fire like the Sun and very far away? And how do we know that the Earth is a smaller ball whirling round one of those stars, the Sun?
The answer to these questions is ‘evidence’.
Sometimes evidence means actually seeing (or hearing, feeling, smelling) that something is true. Astronauts have traveled far enough from the Earth to see with their own eyes that it is round. Sometimes our eyes need help. The ‘evening star’ looks like a bright twinkle in the sky but with a telescope you can see that it is a beautiful ball – the planet we call Venus. Something that you learn by direct seeing (or hearing or feeling) is called an observation.
Often evidence isn’t just observation on its own, but observation always lies at the back of it. If there’s been a murder, often nobody (except the murderer and the dead person!) actually observed it. But detectives can gather together lots of other observations which may all point towards a particular suspect. If a person’s fingerprints match those found on a dagger, this is evidence that he touched it. It doesn’t prove that he did the murder, but it can help when it’s joined up with lots of other evidence. Sometimes a detective can think about a whole lot of observations and suddenly realize that they all fall into place and make sense if so-and-so did the murder.
Scientists (the specialists in discovering what is true about the world and the universe) often work like detectives. They make a guess (called a hypothesis) about what might be true. They then say to themselves: if that were really true, we ought to see so-and-so. This is called a prediction. For example, if the world is really round, we can predict that a traveler, going on and on in the same direction, should eventually find himself back where he started. When a doctor says that you have measles he doesn't take one look at you and see measles. His first look gives him a hypothesis that you may have measles. Then he says to himself: if she really has measles, I ought to see so-and-so. Then he runs through his list of predictions and tests them with his eyes (have you got spots?), his hands (is your forehead hot?), and his ears (does your chest wheeze in a measly way?). Only then does he make his decision and say, 'I diagnose that the child has measles.' Sometimes doctors need to do other tests like blood tests or X-rays, which help their eyes, hands and ears to make observations.
The way scientists use evidence to learn about the world is much cleverer and more complicated than I can say in a short letter. But now I want to move on from evidence, which is a good reason for believing something, and warn you against three bad reasons for believing anything. They are called ‘tradition’, ‘authority’, and ‘revelation’.
First, tradition. A few months ago, I went on television to have a discussion with about 50 children. These children were invited because they’d been brought up in lots of different religions. Some had been brought up as Christians, others as Jews, Muslims, Hindus, Sikhs. The man with the microphone went from child to child, asking them what they believed. What they said shows up exactly what I mean by ‘tradition’. Their beliefs turned out to have no connection with evidence. They just trotted out the beliefs of their parents and grandparents, which, in turn, were not based upon evidence either. They said things like, ‘We Hindus believe so and so.’ ‘We Muslims believe such and such.’ ‘We Christians believe something else.’ Of course, since they all believed different things, they couldn’t all be right. The man with the microphone seemed to think this quite proper, and he didn’t even try to get them to argue out their differences with each other. But that isn’t the point I want to make. I simply want to ask where their beliefs came from. They came from tradition. Tradition means beliefs handed down from grandparent to parent to child, and so on. Or from books handed down through the centuries. Traditional beliefs often start from almost nothing; perhaps somebody just makes them up originally, like the stories about Thor and Zeus. But after they’ve been handed down over some centuries, the mere fact that they are so old makes them seem special. People believe things simply because people have believed the same thing over centuries. That’s tradition. The trouble with tradition is that, no matter how long ago a story was made up, it is still exactly as true or untrue as the original story was. If you make up a story that isn’t true, handing it down over any number of centuries doesn’t make it any truer!
Most people in England have been baptized into the Church of England, but this is only one of many branches of the Christian religion. There are other branches such as the Russian Orthodox, the Roman Catholic and the Methodist churches. They all believe different things. The Jewish religion and the Muslim religion are a bit more different still; and there are different kinds of Jews and of Muslims. People who believe even slightly different things from each other often go to war over their disagreements. So you might think that they must have some pretty good reasons – evidence – for believing what they believe. But actually their different beliefs are entirely due to different traditions. Let’s talk about one particular tradition. Roman Catholics believe that Mary, the mother of Jesus, was so special that she didn’t die but was lifted bodily into Heaven. Other Christian traditions disagree, saying that Mary did die like anybody else. These other religions don’t talk about her much and, unlike Roman Catholics, they don’t call her the ‘Queen of Heaven’. The tradition that Mary’s body was lifted into Heaven is not a very old one. The Bible says nothing about how or when she died; in fact the poor woman is scarcely mentioned in the Bible at all. The belief that her body was lifted into Heaven wasn’t invented until about six centuries after Jesus’s time. At first it was just made up, in the same way as any story like Snow White was made up. But, over the centuries, it grew into a tradition and people started to take it seriously simply because the story had been handed down over so many generations. The older the tradition became, the more people took it seriously. It finally was written down as an official Roman Catholic belief only very recently, in 1950. But the story was no more true in 1950 than it was when it was first invented 600 years after Mary’s death.
I’ll come back to tradition at the end of my letter, and look at it in another way. But first I must deal with the two other bad reasons for believing in anything: authority and revelation. Authority, as a reason for believing something, means believing it because you are told to believe it by somebody important. In the Roman Catholic Church, the Pope is the most important person, and people believe he must be right just because he is the Pope. In one branch of the Muslim religion, the important people are old men with beards called Ayatollahs. Lots of young Muslims are prepared to commit murder, purely because the Ayatollahs in a faraway country tell them to.
When I say that it was only in 1950 that Roman Catholics were finally told that they had to believe that Mary’s body shot off to Heaven, what I mean is that in 1950 the Pope told people that they had to believe it. That was it. The Pope said it was true, so it had to be true! Now, probably some of the things that Pope said in his life were true and some were not true. There is no good reason why, just because he was the Pope, you should believe everything he said, any more than you believe everything that lots of other people say. The present Pope has ordered his followers not to limit the number of babies they have. If people follow his authority as slavishly as he would wish, the results could be terrible famines, diseases and wars, caused by overcrowding.
Of course, even in science, sometimes we haven’t seen the evidence ourselves and we have to take somebody else’s word for it. I haven’t with my own eyes, seen the evidence that light travels at a speed of 186,000 miles per second. Instead, I believe books that tell me the speed of light. This looks like ‘authority’. But actually it is much better than authority because the people who wrote the books have seen the evidence and anyone is free to look carefully at the evidence whenever they want. That is very comforting. But not even the priests claim that there is any evidence for their story about Mary’s body zooming off to Heaven.
The third kind of bad reason for believing anything is called ‘revelation’. If you had asked the Pope in 1950 how he knew that Mary’s body disappeared into Heaven, he would probably have said that it had been ‘revealed’ to him. He shut himself in his room and prayed for guidance. He thought and thought, all by himself, and he became more and more sure inside himself. When religious people just have a feeling inside themselves that something must be true, even though there is no evidence that it is true, they call their feeling ‘revelation’. It isn’t only popes who claim to have revelations. Lots of religious people do. It is one of their main reasons for believing the things that they do believe. But is it a good reason? Suppose I told you that your dog was dead. You’d be very upset, and you’d probably say, ‘Are you sure? How do you know? How did it happen?’ Now suppose I answered: ‘I don’t actually know that Pepe is dead. I have no evidence. I just have this funny feeling deep inside me that he is dead.’ You’d be pretty cross with me for scaring you, because you’d know that an inside ‘feeling’ on its own is not a good reason for believing that a whippet is dead. You need evidence. We all have inside feelings from time to time, and sometimes they turn out to be right and sometimes they don’t. Anyway, different people have opposite feelings, so how are we to decide whose feeling is right? The only way to be sure that a dog is dead is to see him dead, or hear that his heart has stopped; or be told by somebody who has seen or heard some real evidence that he is dead.
People sometimes say that you must believe in feelings deep inside, otherwise you’d never be confident of things like ‘My wife loves me’.
But this is a bad argument. There can be plenty of evidence that somebody loves you. All through the day when you are with somebody who loves you, you see and hear lots of little tidbits of evidence, and they all add up. It isn’t purely inside feeling, like the feeling that priests call revelation. There are outside things to back up the inside feeling: looks in the eye, tender notes in the voice, little favors and kindnesses; this is all real evidence. Sometimes people have a strong inside feeling that somebody loves them when it is not based upon any evidence, and then they are likely to be completely wrong. There are people with a strong inside feeling that a famous film star loves them, when really the film star hasn’t even met them. People like that are ill in their minds. Inside feelings must be backed up by evidence, otherwise you just can’t trust them.
Inside feelings are valuable in science too, but only for giving you ideas that you later test by looking for evidence. A scientist can have a ‘hunch’ about an idea that just ‘feels’ right. In itself, this is not a good reason for believing something. But it can be a good reason for spending some time doing a particular experiment, or looking in a particular way for evidence. Scientists use inside feelings all the time to get ideas. But they are not worth anything until they are supported by evidence.
I promised that I’d come back to tradition, and look at it in another way. I want to try to explain why tradition is so important to us. All animals are built (by the process called evolution) to survive in the normal place in which their kind live. Lions are built to be good at surviving on the plains of Africa. Crayfish are built to be good at surviving in fresh water, while lobsters are built to be good at surviving in the salt sea. People are animals too, and we are built to be good at surviving in a world full of other people. Most of us don’t hunt for our own food like lions or lobsters, we buy it from other people who have bought it from yet other people. We ‘swim’ through a ‘sea of people’. Just as a fish needs gills to survive in water, people need brains that make them able to deal with other people. Just as the sea is full of salt water, the sea of people is full of difficult things to learn. Like language.
You speak English but your friend speaks German. You each speak the language that fits you to ‘swim about’ in your own separate ‘people sea’. Language is passed down by tradition. There is no other way. In England, Pepe is a dog. In Germany he is ein Hund. Neither of these words is more correct, or more truer than the other. Both are simply handed down. In order to be good at ‘swimming about in their people sea’, children have to learn the language of their own country, and lots of other things about their own people; and this means that they have to absorb, like blotting paper, an enormous amount of traditional information. (Remember that traditional information just means things that are handed down from grandparents to parents to children.) The child’s brain has to be a sucker for traditional information. And the child can’t be expected to sort out good and useful traditional information, like the words of a language, from bad or silly traditional information, like believing in witches and devils and ever-living virgins.
It’s a pity, but it can’t help being the case, that because children have to be suckers for traditional information, they are likely to believe anything the grown-ups tell them, whether true or false, right or wrong. Lots of what grown-ups tell them is true and based on evidence or at least sensible. But if some of it is false, silly or even wicked, there is nothing to stop the children believing that too. Now, when the children grow up, what do they do? Well, of course, they tell it to the next generation of children. So, once something gets itself strongly believed (even if its completely untrue and there never was any reason to believe it in the first place) it can go on forever.
Could this be what happened with religions? Belief that there is a god or gods, belief in Heaven, belief that Mary never died, belief that Jesus never had a human father, belief that prayers are answered, belief that wine turns into blood – not one of these beliefs is backed up by any good evidence. Yet millions of people believe them. Perhaps this is because they were told to believe them when they were young enough to believe anything.
Millions of other people believe quite different things, because they were told different things when they were children. Muslim children are told different things from Christian children, and both grow up utterly convinced that they are right and the others are wrong. Even within Christians, Roman Catholics believe different things from Church of England people or Episcopalians, Shakers or Quakers, Mormons or Holy Rollers, and all are utterly convinced that they are right and the others are wrong. They believe different things for exactly the same kind of reason as you speak English and someone speaks German.
Both languages are, in their own country, the right language to speak. But it can’t be true that different religions are right in their own countries, because different religions claim that opposite things are true. Mary can’t be alive in the Catholic Republic but dead in Protestant Northern Ireland.
What can we do about all this? It is not easy for you to do anything, because you are only ten. But you could try this. Next time somebody tells you something that sounds important, think to yourself: 'Is this the kind of thing that people probably know because of evidence? Or is it the kind of thing that people only believe because of tradition, authority or revelation?' And, next time somebody tells you that something is true, why not say to them: 'What kind of evidence is there for that?' And if they can't give you a good answer, I hope you'll think very carefully before you believe a word they say.
Because dopamine is involved in the rewarding effects of drugs of abuse, it was hypothesized in this study that normal variation in the number of dopamine receptors in a person's brain could influence their response to drug exposure. To test this, human subjects were given the stimulant methylphenidate (Ritalin), their brains were imaged using PET, and they were asked whether they liked or disliked the drug's effects. Those subjects who had high levels of dopamine receptors found the experience unpleasant, while those with lower levels of dopamine receptors found it more pleasurable. This suggests that individual differences in a marker of dopamine function can influence a person's susceptibility to continued drug abuse.
As a result of scientific research, we know that addiction is a disease that affects both brain and behavior.
The Physiology of the Human Heart
The constant beating of the heart is controlled by the conducting system of the heart, which is a series of specialized nerve tissues that fire through the heart and coordinate the actions of the heart beat:
Sinoatrial (SA) node: This pacemaker initiates the impulse. It’s located anterolaterally just under the epicardium where the superior vena cava enters the right atrium. The impulse from the sinoatrial node spreads through the myocardium of the right and left atria, and it’s also quickly transmitted to the atrioventricular node.
Atrioventricular (AV) node: This node is located in the posterior and inferior portion of the interatrial septum, close to the opening of the coronary sinus in the right atrium. From there the signal is transmitted to the ventricles by a bundle of nerves called the atrioventricular bundle.
Atrioventricular bundle: This bundle of nerves runs from the atrioventricular node to the ventricles along the interventricular septum. It divides into left and right bundle branches that run deep to the endocardium to become the subendocardial branches (also called the Purkinje fibers):
Subendocardial branches of the right bundle stimulate the interventricular septum, the papillary muscle, and the wall of the right ventricle.
Subendocardial branches of the left bundle stimulate the interventricular septum, the papillary muscle, and wall of the left ventricle.
The heart is innervated by the autonomic nerves from superficial and deep cardiac plexuses. The deep cardiac plexus is located on the bifurcation of the trachea, and the superficial cardiac plexus is located on the base of the heart below the arch of the aorta.
The autonomic nervous system is made up of a two-neuron chain (using the presynaptic neuron and the postsynaptic neuron) from the central nervous system to the heart. The presynaptic sympathetic fibers branch off the first five or six thoracic segments of the spinal cord. They enter the sympathetic trunks and synapse with postsynaptic neurons located in the cervical and upper thoracic ganglia. Fibers of the postsynaptic neurons join the cardiac plexus and terminate on the SA node, AV node, cardiac muscle fibers, and coronary arteries.
Sympathetic stimulation increases heart rate, force of contraction, and dilation of coronary arteries. Parasympathetic innervation to the heart is provided by the vagus nerve (CN X). The presynaptic parasympathetic fibers of the vagus nerve join the postsynaptic sympathetic fibers in the cardiac plexus. The postsynaptic parasympathetic neurons are located in intrinsic ganglia (within the wall of the heart) and terminate on the SA node, AV node, and coronary arteries. Parasympathetic stimulation has the opposite effect of sympathetic stimulation.
The cardiac cycle is the sequence of events of each heart beat:
Diastole: During this process, the ventricles fill with blood from the atria. The atrioventricular valves are open, and the pulmonary and aortic valves are closed.
Systole: In this process, the ventricles empty into the aorta and pulmonary arteries. The atrioventricular valves are closed, and the pulmonary and aortic valves are open.
The major blood vessels of the thorax include arteries that branch off the aorta and veins that drain into the vena cava. Following are the parts of the aorta and its branches:
Ascending aorta: This part of the aorta leaves the left ventricle and ascends up to the sternal angle. It has spaces between the walls of the vessel and the cusps of the aortic valve called aortic sinuses.
Arch of the aorta: Continuing from the ascending aorta, this part arches posteriorly to the left of the trachea and esophagus, above the left primary bronchus.
Thoracic aorta: The thoracic aorta continues from the arch and descends in the posterior mediastinum and left of the vertebral column.
Bronchial arteries: These arteries branch off the anterior part of the aorta or a posterior intercostal artery.
Esophageal arteries: Starting at the anterior part of the thoracic aorta, these arteries run to the esophagus.
Superior phrenic arteries: These arteries start at the anterior part of the thoracic aorta and run to the diaphragm.
Following are the parts of the vena cava and its tributaries:
Right and left brachiocephalic veins: These veins unite to form the superior vena cava near the brachiocephalic trunk (at the level of the 1st costal cartilage).
Superior vena cava: This large vein runs inferiorly to enter the right atrium.
Inferior vena cava: This vein is formed by the union of the common iliac veins. It enters the heart at the lowest part of the right atrium.
Azygos vein: This vein arises from the right ascending lumbar vein and passes through the posterior mediastinum to drain into the superior vena cava.
Hemiazygos vein: This vein starts at the left ascending lumbar vein and crosses the vertebral column around the level of the 8th thoracic vertebra to join the azygos vein.
Accessory hemiazygos vein: This vein is formed by the union of the left fourth to the eighth posterior intercostal veins and joins the azygos vein at the level of the 7th thoracic vertebra.
A clear blue ocean, pristine white beaches and lush green forests – that's what many holidaymakers want on a vacation. But an intact ecosystem isn't a given in many tourist destinations. Due to climate change, extreme weather and natural disasters like tsunamis have become more frequent, causing major damage to many popular resorts. This week on Global Ideas, reporter Kerstin Schweizer shows us how tourism is now trying to become greener in a bid to preserve the environment. Seven years after a tsunami destroyed parts of the Pangandaran region on the Indonesian island of Java, there are now initiatives to promote sustainability there and restore the eco-system – a plus for Java's agriculture and tourism sectors. The global logistics industry, too, has a big impact on the environment. In our background article, reporter Franziska Badenschier takes a look at how the transport of goods around the world has affected the climate, and how climate change in turn has impacted logistics as well. Researchers are now looking for ways to make shipping cleaner and greener.
ISO sensitivity is the ability of a sensor to provide a defined response for a given level of lighting. Photographers use this information to determine the nominal exposure conditions. If the actual ISO sensitivity of a digital camera’s sensor is lower than the sensitivity set by the user, the image is underexposed; if the sensitivity is greater, the image is overexposed.
To be easily understood by photographers, the ISO sensitivity of digital cameras has been defined such that it is similar to the ISO sensitivity of photographic film cameras, thus lower sensitivities require longer exposure for the same luminance to produce the same result. However, just as very sensitive films are known to be very grainy, parallels can be drawn for digital cameras, since high sensitivities are related to high gain and noise amplification.
While it is a common practice for camera vendors to emphasize high ISO settings on their cameras, it must be said that high ISO does not mean good image quality. Any serious photographer knows that the lowest ISO should be used to shoot a scene with a longer exposure time. Only when conditions do not allow it (as in photojournalism, low-light conditions, or sport photography) should lower exposure time and high ISO be used (typically to limit motion blur).
With film photography, changing the ISO meant having to change the film (which was very annoying). The intrinsic sensitivity of a digital camera is determined by the silicon structure of the sensor itself and cannot be changed, but the ISO of the camera can be artificially increased to arbitrary values by applying a gain to the signal. The price to pay is a proportionate increase of noise and eventually, a decrease of SNR for a given output value (see Essential characteristics of noise). The only trick is that the gain is applied before analog/digital conversion so as to avoid quantization effects.
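To make the ordering concrete, here is a toy numerical sketch in Python. The bit depth, gain value, and signal levels are arbitrary assumptions for illustration, not actual camera specifications; the point is only that applying gain before quantization preserves fine tonal steps in a dim signal, while applying it after does not.

```python
import numpy as np

def quantize(x, bits=12, full_scale=1.0):
    """Model an ideal ADC: round the signal to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, full_scale) / full_scale * levels) / levels * full_scale

dim_scene = np.linspace(0.0, 0.0005, 5)   # a very dim signal near the bottom ADC codes
gain = 400.0                              # a high-ISO analog gain (arbitrary value)

gain_before_adc = quantize(dim_scene * gain)   # amplify, then digitize
gain_after_adc = quantize(dim_scene) * gain    # digitize, then amplify

print(gain_before_adc)  # five distinct values: gradation is preserved
print(gain_after_adc)   # neighboring values collapse onto the same coarse codes
```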
So the fact that a camera attains ISO 10000 is no guarantee of image quality; the noise level at this ISO has to be reported as well.
ISO sensitivity (also known as ISO speed) is a numerical value calculated from the exposure provided at the focal plane of a digital camera to produce specific camera output signal characteristics.
ISO Standard 12232 defines two ways to measure ISO sensitivity. The first relates sensitivity to the exposure necessary to saturate the camera. The second, seldom used, compares the relative exposures to obtain different signal-to-noise ratios. The more common saturation-based method is described below.
The saturation focal plane exposure, Hsat, is defined as the exposure (illumination multiplied by exposure time, in lux·s) necessary to reach sensor saturation. ISO sensitivity is then defined by

S = 78 / Hsat.
When a focal plane exposure measurement is not possible, as for a camera with non-removable optics, it is possible to compute the focal plane exposure as

H = q · L · t / A²

where:
L is the scene luminance (cd/m²),
t is the exposure time (s),
A is the lens aperture (f-Number),
q is a lens factor given by q = (π/4) · T · v · cos⁴θ, with T as the transmission factor of the lens,
v as the vignetting factor,
and θ as the angle of the image point from the optical axis.
ISO 12232 considers a transmission factor T = 9/10, an angle θ = 10°, and a vignetting factor v = 98/100, which leads to q = 65/100.
DxOMark measures ISO sensitivity at the image center; thus θ = 0° and v = 1, considering the same transmission factor T = 9/10, which leads to q = 71/100.
ISO sensitivity is then defined by

S = 78 · A² / (q · Lsat · t),
with Lsat being the minimum luminance necessary to reach sensor saturation.
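As a rough worked example of the formulas above, here is a minimal Python sketch under the assumptions already stated (T = 9/10 and the constant 78 from the saturation-based definition). It is an illustration, not DxOMark's actual measurement code, and the scene values in the last line are invented for the example.

```python
import math

def q_factor(T=0.9, v=1.0, theta_deg=0.0):
    """Lens factor q = (pi/4) * T * v * cos^4(theta)."""
    theta = math.radians(theta_deg)
    return (math.pi / 4.0) * T * v * math.cos(theta) ** 4

def saturation_iso(L_sat, t, f_number, q=None):
    """Saturation-based ISO sensitivity: S = 78 * A^2 / (q * L_sat * t)."""
    if q is None:
        q = q_factor()
    H_sat = q * L_sat * t / f_number ** 2   # focal plane exposure at saturation (lux*s)
    return 78.0 / H_sat

# ISO 12232 conditions (theta = 10 deg, v = 98/100) give q of about 65/100
print(round(q_factor(v=0.98, theta_deg=10.0), 2))
# DxOMark's center-of-image conditions (theta = 0, v = 1) give q of about 71/100
print(round(q_factor(), 2))
# Invented example: if 2200 cd/m^2 saturates the sensor at 1/500 s and f/2.0,
# the saturation-based ISO comes out to roughly 100.
print(round(saturation_iso(L_sat=2200.0, t=1 / 500, f_number=2.0)))
```

The two printed q values reproduce the 65/100 and 71/100 figures quoted above.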
As tests show, the ISO settings reported by camera manufacturers can differ significantly from the measured ISO in RAW. This difference stems from design choices, in particular the choice to keep some "headroom" to avoid saturation at the higher exposures, making it possible to recover from blown highlights.
Multivitamins Cut Cancer Risk in Men, Study Finds
(NEW YORK) -- It's a decision that millions of Americans face every morning: to take, or not to take, that multivitamin. Now, a new study of almost 15,000 men over 50 suggests popping that daily supplement could cut cancer rates by 8 percent.
The study is good news for some Americans, who spend billions of dollars each year on the assumption that taking a daily multivitamin will help prevent disease.
"Despite the lack of definitive trial data regarding the benefits of multivitamins in the prevention of chronic disease, including cancer, many men and women take them for precisely this reason," said Dr. Michael Gaziano, professor of medicine at Harvard Medical School and lead author of the study published Wednesday in the Journal of the American Medical Association. "Our study shows a modest but significant benefit in cancer prevention."
It's unclear whether the results apply to women or men under 50.
Previous large studies, including a 180,000-patient study started in 1992 and the Women's Health Initiative study of 160,000 women published in 2009, found that multivitamins had little to no effect on the risk of cancer. In fact, a 2010 Swedish study of 35,000 women found that those who reported using multivitamins had an increased risk of breast cancer. So what changed?
First, the new study randomly assigned men to two groups, one of which took a daily Centrum Silver® while the other took a placebo pill. Previous multivitamin studies have been observational, meaning that the participants weren't compared with someone taking a placebo.
Second, it followed the men, who were 65 years old on average, over 11 years -- a longer follow-up than previous studies and sufficient time for cancer to develop.
And finally, the trial used a multivitamin, which is designed to fill nutritional gaps in a person's diet. Other trials have tested a single vitamin such as calcium or vitamin A, E or D in large doses, which is very different from how people normally get the vitamins and minerals they need from food.
"The reduction in total cancer risk in [the study] argues that the broader combination of low-dose vitamins and minerals contained in the [Centrum Silver®] multivitamin, rather than an emphasis on previously tested high-dose vitamins and mineral trials, may be paramount for cancer prevention," said Gaziano.
"Clearly the notion of megadoses of isolated nutrients has been proven wrong again and again," said Dr. David Katz, director of the Yale Prevention Research Center, who was not involved in the study. "Maybe the active ingredient in broccoli is broccoli."
So if a multivitamin prevents cancer because it provides a mix of nutrients similar to food, why not just eat more fruits and vegetables? Diets high in fruits and vegetables have been shown in observational studies to reduce the incidence of cancer and other chronic diseases. But only 1.5 percent of the public gets the recommended daily allowance of fruits and vegetables, according to Katz.
Katz compared the results of this study to a prior study from Europe that showed people who never smoke, have a body mass index or BMI lower than 30, get regular exercise and adhere to a healthy diet, can reduce their risk of chronic disease by almost 80 percent.
"Clearly however, taking a multivitamin is easy; changing dietary patterns is hard," he said.
The Centrum Silver® used in the study was provided by the manufacturer Pfizer, but Pfizer did not fund the study.
Copyright 2012 ABC News Radio
Cut Fat, Sugar and Salt
Want to Eat Smarter?
Do you ever feel as though a sweet tooth or craving for salty foods is holding you back from your health goals? The good news is that with a few simple changes to your eating and cooking habits, you can still eat right with these occasional treats.
Start building a smarter plate by choosing fruits and vegetables, whole grains, lean protein and low-fat dairy — foods that are packed with the nutrients you need without all the added sugars and solid fats. In addition, you can reduce your risk of high blood pressure, heart disease and stroke simply by eating less sodium.
Unsure where to start? Here are tips for building a smarter plate:
Eat Fewer Foods High in Solid Fats
- Opt for extra-lean ground beef, turkey and chicken. Cut back on processed meats such as hot dogs, salami and bacon.
- Grill, broil, bake or steam foods instead of frying.
- Cook with healthy oils like olive, canola and sunflower oils in place of hydrogenated or partially hydrogenated oils or butter.
- Select low-fat or fat-free milk, yogurt and cheese.
Choose Foods and Drinks with Little or No Added Sugars
- Switch to water, low-fat or fat-free milk or 100-percent fruit juice in moderate amounts.
- For additional taste, add lemons, limes or cucumbers to water or drink carbonated water.
- Eat fresh fruit for dessert instead of cakes, cookies or pastries.
- Buy foods with little-to-no added sugars, like unsweetened applesauce or unsweetened whole-grain cereals.
Cut Back on Sodium
- Instead of salt, use herbs and spices to season foods.
- Do not add salt when cooking pasta, rice and vegetables.
- Read the Nutrition Facts Panel to compare the sodium content of high-sodium foods like pre-made foods, frozen meals, bread and canned soups and vegetables.
For more information on healthful changes you can make to your eating plan, consult a registered dietitian nutritionist in your area.
Reviewed April 2013
Range Creek on display
Book Cliff students visit exhibit
The students learn about the tanning process from Ephraim Dickson.
Emery County was put on the map when news of the archaeological discoveries at Range Creek leaked out last year. The Utah Museum of Natural History is bringing the first Range Creek exhibit to the John Wesley Powell River History Museum in Green River for one year. The kickoff event was held on March 10. Students from Book Cliff Elementary attended the event for some hands-on fun with museum presenters.
The students learned about nature's grocery store and explored the use of plants in the Fremont world. The students learned how to grind corn and the chores involved with gathering enough food each day to exist. Ephraim Dickson, director of Education for the Utah Museum of Natural History discussed hunting practices and tools used among the Fremont. Students learned about tanning hides and scraping them with rock scrapers.
Julie Hansen talks about Fremont pottery with the school children.
Julie Hansen, cultural anthropologist, explained how the Fremont used pottery in their everyday lives and how they made pots by collecting clay and mixing it with sand or crushed rocks to make it stronger. Sometimes they painted the pottery. The pots were baked in the sun. The clay was coiled around and around like a snake and formed into a pot. The pot was smoothed with a rock inside and out. The students saw pots made in the Fremont era and baskets.
Dickson explained how the trash piles the Indians left behind tell a lot about the way they lived. Chris Lyon, geologist, showed the children some rock art and figurines. She told them the use of the figurines isn't known at this time. The children speculated on the use of the figurines with answers like, "maybe they were used for snuggling," or "maybe they represent people who have died," or "maybe they protected the corn." No one knows for sure and one guess is as good as another as the children thought about the ancients and their way of life.
Sharon Hughes' first grade class will write stories about the Indians and the things they have learned about them.
This session will focus on introducing the use of negotiated projects and the Australian Curriculum in a State School Prep class. Real-life examples of how Anne has incorporated negotiated projects and the Australian Curriculum at the beginning of the Prep year will be discussed. Anne aims to demonstrate how they can be used to fulfil requirements outlined in both the Early Years Curriculum Guidelines and the Australian Curriculum (C2C).
Key ideas will include:
• Anne's ongoing work with maintaining a play-based pedagogy in a State School Prep class.
• Focused teaching that complements Projects & the Australian Curriculum expectations.
• Examples of how Anne has begun to implement Projects and the content of C2C and the Australian Curriculum.
Anne Pearson is currently a Prep teacher at the new Mango Hill SS. Prior to teaching at Mango Hill SS, Anne was a demonstration teacher for QUT at Kelvin Grove SC. Anne has experience in Prep and Preschool settings. Anne has previously presented on Playing with the Australian Curriculum at the 2011 ECTA conference, and is a passionate advocate of appropriate Early Years pedagogy as recommended by QSA and Education Queensland. Anne is constantly working on ways to best support her students and support the new Australian Curriculum. Opportunities for discussion will be provided in the presentation.
Relevant QCT Standards: 2, 9, 10
Colouring books and education cards for kids
Friday, 23 September 2011
PhD student and community nursing coordinator Judith Blake has designed a colouring book for kids that explains all about sun safety. It is beautifully designed, with loads of kid-friendly pictures ready for colouring. We deeply appreciate the time and effort that Judith donated on behalf of ECU melanoma research.
The book is great for schools; only $5 per book.
We also have information cards for sale that detail the difference between a mole and a freckle – also great for school kids and a valuable fundraiser for the ECU melanoma research group. These were designed by the ECU fundraising and marketing team.
Please contact Mel Ziman at [email protected] for more details.
[Part 1 briefly reviews the differences between analogue and digital synthesis, and discusses voltage control - "one of the major innovations in the development of the synthesizer." Part 2 begins a look at subtractive synthesis with a discussion of VCOs, waveforms, harmonic content, and filters. Part 3 discusses envelopes - the overall 'shape' of the volume of a sound, plotted against time. Part 4 looks at amplifiers as well as other modifiers, including LFOs, envelope followers, waveshapers, and modulation. Part 5 shows how a subtractive analogue synthesizer can be a learning tool for exploring some of the principles of audio and acoustics. Part 6 considers other methods of analogue synthesis. Part 7 deals with the topology of the modules that make up a typical synthesizer and then looks at categorizing types of synthesizers.]
3.7 Early versus modern implementations
Electronics is always changing. Components, circuits, design techniques, standards and production processes may become obsolete over time. This means that the design and construction of electronic equipment will continuously change as these new criteria are met.
The continuing trend seems to be for smaller packaging, lower power, higher performance and lower cost but at the price of increasing complexity, embedded software, difficulty of repair and rapid obsolescence. Over the last 25 years, the basic technology has changed from valves and transistors towards microprocessors and custom ICs.
3.7.1 Tuning and stability
The analogue synthesizers of the late 1960s and early 1970s are infamous for their tuning problems. But then so are many acoustic instruments!
In fact, it was only the very earliest synthesizers that had major tuning problems. The first Moog VCOs were relatively simple circuits built at the limits of the available knowledge and technology – no one had ever built analogue synthesizers before. The designs were thus refined prototypes which had not been subjected to the rigorous trials of extended serious musical use.
It is worth noting that the process of converting laboratory prototypes into rugged, 'road-worthy' equipment is still very difficult; and at the time, valve amplifiers and electromechanical devices such as tape echo machines were the dominant technology. Modular synthesizers were the first 'all-electronic' devices to become musical instruments that actually left the laboratory.
The oscillators in early synthesizers were affected by temperature changes because they used diodes or transistors to generate the required exponential control law, and these change their characteristics with temperature (diodes or transistors can be used as temperature sensors!). Once the problem was identified, it was quickly realized that there was a need for temperature compensation. Special temperature compensation resistors called 'Q81s' were frequently used – they have a negative temperature coefficient which exactly matches the positive temperature coefficient of the transistor.
Eventually circuit designers devised methods of providing temperature compensation, which did not require esoteric resistors, usually based around differential pairs of matched transistors. Developments of these principles into custom synthesizer chips have effectively removed the need for additional temperature compensation.
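To see why the exponential converter is so temperature-sensitive, consider an idealized model (a rough sketch, not a description of any particular circuit): the transistor's exponential law scales with absolute temperature through the kT/q term, so a converter calibrated for 1 V/octave at one temperature renders V · Tcal/Tactual octaves at another. The Python snippet below puts numbers on that; the 10 °C drift and the 4 V (four-octave) control voltage are arbitrary example values.

```python
def pitch_error_cents(cv_volts, cal_temp_c=25.0, actual_temp_c=35.0):
    """
    Idealized, uncompensated exponential converter calibrated for 1 V/octave
    at cal_temp_c. Because the transistor law scales with absolute temperature
    (the kT/q term), the interval actually produced at another temperature is
    cv_volts * T_cal / T_actual octaves.
    """
    t_cal = cal_temp_c + 273.15      # calibration temperature in kelvin
    t_act = actual_temp_c + 273.15   # operating temperature in kelvin
    rendered_octaves = cv_volts * t_cal / t_act
    return 1200.0 * (rendered_octaves - cv_volts)

# Four octaves above the reference pitch, 10 degrees C above calibration:
print(round(pitch_error_cents(4.0), 1))   # about -156 cents, over 1.5 semitones flat
```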
Unfortunately, the tuning problems had created a characteristic sound, which is one reason why the 'beating oscillator' sounds heard on vintage analogue synthesizers are emulated in fully digital instruments that have excellent temperature stability.
Tuning problems fall into four categories:
- overall tuning
- high-frequency tracking
Because of the differences in the response of components to temperature, the tuning of an analogue synthesizer can change as it warms up to the operating temperature. This can be compensated manually by adjusting the frequency CV or automatically using an 'auto-tune' circuit (see later). Some synthesizers used temperature-controlled chips to try and provide elevated but constant temperature conditions for the most critical components: usually the transistors or diodes in the exponential converter circuits. These 'ovens' have been largely replaced in modern designs by careful compensation for temperature changes.
Tuning polyphonic synthesizers requires patience and an understanding of the way that key assignment works (see Section 6.5.3). The tuner needs to know which VCO is making the sound (sometimes indicated by a light emitting diode (LED) or by a custom circuit add-on), as well as how to cycle through the remaining VCOs – often by holding one note down with a weight or a little wedge and then pressing and holding additional notes.
Description from Flora of China
Herbs annual or perennial. Stems diffuse or procumbent, much branched. Leaves alternate or opposite, sessile or shortly petiolate; leaf blade oblong, elliptic, or subcordate; stipules small, membranous, caducous. Inflorescence a small cyme or glomerule, sometimes reduced to a solitary flower, leaf-opposed or terminal; bracts small, membranous. Flowers 4- or 5-merous. Pedicel green, short or nearly absent, small. Sepals not aristate at apex, persistent. Petals very small or absent. Stamens as many as and shorter than sepals. Ovary obovoid, 1-locular with 1 to several ovules; style very short, apex 2-fid. Fruit a utricle, a membranous-walled achene enclosed within persistent sepals, irregularly dehiscent or indehiscent, usually 1-seeded. Seeds brown, ovoid or flat-orbicular; testa shiny.
About 45 species: Africa, Europe, and Mediterranean region to C Asia; three species in China.
(Authors: Lu Dequan; Michael G. Gilbert)
Michael A LoGuidice Sr, DO
Mark Persin, DO
Scott H Plantz, MD, FAAEM
Francisco Talavera, PharmD, PhD
Ron Fuerst, MD
Guillain-Barre Syndrome Overview
Guillain-Barre syndrome is a nerve disorder. It is an acute and rapidly progressive inflammation of nerves that causes loss of sensation and muscle weakness.
This syndrome causes the destruction, removal, or loss of the myelin sheath of a nerve. Myelin is the substance of the cell membrane that coils to form the myelin sheath. The myelin sheath serves as an electrical insulator to nerve fibers.
It is also known as a polyneuropathy, which is a disease that involves several nerves.
Fetal Alcohol Syndrome
Fetal alcohol syndrome (FAS) is the term for severe birth defects caused by heavy alcohol use (5 or more drinks on at least one occasion) during pregnancy.
Children with FAS may have abnormal facial features, poor growth (low height and weight), a small head, and problems with learning, memory, attention, and behavior.
A child with FAS may also have birth defects that involve the eyes, ears, heart, urinary tract, or bones.
eMedicineHealth Medical Reference from Healthwise
To learn more visit Healthwise.org
© 1995-2012 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
Definition of Hepatobiliary
Hepatobiliary: Having to do with the liver plus the gallbladder, bile ducts, or bile. For example, MRI (magnetic resonance imaging) can be applied to the hepatobiliary system.
Hepatobiliary makes sense since "hepato-" refers to the liver and "-biliary" refers to the gallbladder, bile ducts, or bile.
Source: MedTerms™ Medical Dictionary
Last Editorial Review: 6/14/2012
Definition of Wart, venereal
Wart, venereal: The same as a genital wart, a wart that is confined primarily to the moist skin of the genitals. These warts are due to viruses belonging to the family of human papilloma viruses (HPVs) which are transmitted through sexual contact. The virus can also be transmitted from mother to baby during childbirth.
Most people infected with HPV have no symptoms, but these viruses increase a woman's risk for cancer of the cervix. HPV infection is the most common sexually transmitted disease in the United States. It is also the leading cause of abnormal Pap smears and pre-cancerous changes of the cervix in women.
There is no cure for venereal (genital) wart virus infection. Once contracted, the virus stays with a person for life.
Source: MedTerms™ Medical Dictionary
Definition of Genu
Genu: The Latin word for the knee. When the knee is referred to in medicine, it is just called the knee. However, the word "genu" is also used in medicine as in: genu recurvatum (hyperextension of the knee), genu valgum (knock knee) and genu varum (bowleg).
The knee (or genu, if you are into Latin) is a joint which has three parts. The thigh bone (femur) meets the large shin bone (tibia) forming the main knee joint. This joint has an inner (medial) and an outer (lateral) compartment. The kneecap (patella) joins the femur to form a third joint, called the patellofemoral joint.
The knee joint is surrounded by a joint capsule with ligaments strapping the inside and outside of the joint (collateral ligaments) as well as crossing within the joint (cruciate ligaments). These ligaments provide stability and strength to the knee joint.
The large muscles of the thigh move the knee. In the front of the thigh the quadriceps muscles extend the knee joint. In the back of the thigh, the hamstring muscles flex the knee. The knee also rotates slightly under guidance of specific muscles of the thigh.
The knee functions to allow movement of the leg and is critical to normal walking. The knee flexes normally to a maximum of 135 degrees and extends to 0 degrees. The bursae, or fluid-filled sacs, serve as gliding surfaces for the tendons to reduce the force of friction as these tendons move. The knee is a weight-bearing joint. Each meniscus serves to evenly load the surface during weight-bearing and also aids in dispersing joint fluid for joint lubrication.
Source: MedTerms™ Medical Dictionary
Last Editorial Review: 4/27/2011 5:27:15 PM
Apr 11, 2012 / ENERGY GLOBE Award
Project presentation - "Solar houses for Siberia"
Low energy houses which do not require additional energy for heating are almost state of the art here in Austria. But what is the situation in the icy regions of the Russian Far East, like Vladivostok, with record lows of minus 68 degrees, chilling winds and soil that is always frozen?
People in Russia usually heat their homes with millions of tons of oil, gas, coal, and firewood, but still houses get no warmer than 14 degrees. On average, every Russian uses 50 % more energy than any citizen of the EU. Saving energy is an unknown concept.
An innovative architectural design developed by the Far Eastern Federal University of Vladivostok is tackling these challenges and brings a new quality of life into people's living rooms: the "Eco House" for everyone has overall heat insulation made of a special material, not only for the roof and walls but also, and especially, for the foundations.
So the house stays warm even in the iciest winter, and no longer sinks into the ground, which normally happens when too much heat softens the permafrost soil. With its windbreak architectural form, and thanks to solar collectors and heat storage systems, a comfortable 22 degrees can now be reached, and the house needs only very little energy from external sources even in very cold conditions.
Below are the definitions from online dictionaries. It looks like an excerpt is a formal quote, am I right?
excerpt (n.) A passage or segment taken from a longer work, such as a literary or musical composition, a document, or a film.
quote (v.; quoted, quoting, quotes) v.tr. To repeat or copy the words of (another), usually with acknowledgment of the source.
(n.) Informal. A quotation.
Thanks in advance.
Excerpts are generally longer quotes, and as your definition says, come from written or scripted pieces; quotes needn't.
Definitions, Specifications, and Other Guidance
Postconsumer fiber means:
- Paper, paperboard, and fibrous wastes from retail stores, office buildings, homes, and so forth, after they have passed through their end-usage as a consumer item, including: used corrugated boxes; old newspapers; old magazines; mixed waste paper; tabulating cards; and used cordage; and
- All paper, paperboard, and fibrous wastes that enter and are collected from municipal solid waste.
- Postconsumer fiber does not include fiber derived from printers' over-runs, converters' scrap, and over-issue publications.
Recovered fiber means:
Postconsumer fiber such as:
- Paper, paperboard, and fibrous materials from retail stores, office buildings, homes, and so forth, after they have passed through their end-usage as a consumer item, including: used corrugated boxes; old newspapers; old magazines; mixed waste paper; tabulating cards; and used cordage; and
- All paper, paperboard, and fibrous materials that enter and are collected from municipal solid waste, and
Manufacturing wastes such as:
- Dry paper and paperboard waste generated after completion of the papermaking process (that is, those manufacturing operations up to and including the cutting and trimming of the paper machine reel into smaller rolls or rough sheets) including: envelope cuttings, bindery trimmings, and other paper and paperboard waste resulting from printing, cutting, forming, and other converting operations; bag, box, and carton manufacturing wastes; and butt rolls, mill wrappers, and rejected unused stock; and
- Repulped finished paper and paperboard from obsolete inventories of paper and paperboard manufacturers, merchants, wholesalers, dealers, printers, converters, or others.
Mill broke means any paper waste generated in a paper mill prior to completion of the papermaking process. It is usually returned directly to the pulping process. Mill broke is excluded from the definition of "recovered fiber." Also see "measurement" section below.
EPA recommends that procuring agencies review specifications provisions pertaining to performance and aesthetics and revise provisions that can impede use of postconsumer and recovered fiber, unless such provisions are related to reasonable performance standards. Agencies should determine whether performance provisions are unnecessarily stringent for a particular end use. Agencies also should revise aesthetics provisions-such as brightness, dirt count, or shade matching-if appropriate, consistent with the agencies' performance requirements, in order to allow for a higher use of postconsumer and recovered fiber.
EPA recommends that procuring agencies document determinations that paper products containing postconsumer and recovered fiber will not meet the agencies' reasonable performance standards. Any determination should be based on technical performance information related to a specific item, not a grade of paper or type of product.
EPA recommends that procuring agencies watch for changes in the use of postconsumer and recovered fiber in paper and paper products. When a paper or a paper product containing postconsumer and recovered fiber is produced in types and grades not previously available, at a competitive price, procuring agencies should either revise specifications to allow the use of such type or grade, or develop new specifications for such type or grade, consistent with the agencies' performance requirements.
EPA recommends that procuring agencies express their minimum content standards as a percentage of the fiber weight of the paper or paper product. EPA further recommends that procuring agencies specify that mill broke cannot be counted toward postconsumer or recovered fiber content, except that procuring agencies should permit mills to count mill broke generated in a papermaking process using postconsumer and/or recovered fiber as feedstock toward "postconsumer fiber" or "recovered fiber" content, to the extent that the feedstock contained these materials. In other words, if a mill uses less than 100% postconsumer or recovered fiber, only a proportional amount of broke can be counted towards postconsumer or recovered fiber content.
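A quick illustration of the proportionality rule described above, as a hedged sketch (the tonnages and the 40% recovered-fiber furnish are invented numbers, and the function name is ours, not EPA's):

```python
def countable_broke(broke_tons, recovered_share_of_feedstock):
    """
    Mill broke may be counted toward postconsumer/recovered fiber content only
    in proportion to the recovered fiber share of the feedstock it came from.
    """
    if not 0.0 <= recovered_share_of_feedstock <= 1.0:
        raise ValueError("share must be between 0 and 1")
    return broke_tons * recovered_share_of_feedstock

# A mill generating 10 tons of broke from a furnish that was 40% recovered fiber
# could count only 4 tons of that broke toward recovered fiber content.
print(countable_broke(10.0, 0.40))   # 4.0
```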
EPA recommends that procuring agencies consider the effect of a procurement of a paper product containing recovered and postconsumer fiber on their paper collection programs by assessing the impact of their decision on their overall contribution to the solid waste stream.
Cushing's Disease, or Cushing's Syndrome, is often thought of as a disease that only afflicts older horses, however, it has been known to occur in horses as young as eight years old.
Symptoms of Cushing's Disease
Horses with Cushing's Disease can be easily recognized by their coarse, wavy coat that often fails to shed out in the summer. A gelding at the barn I used to board at suffered from Cushing's Disease, and even in the heat of a Houston summer, he had a thick coat of wavy hair.
Other symptoms are excessive thirst, combined with excessive urination. A normal horse will drink in the region of 5 - 8 gallons per day, whereas a horse suffering from Cushing's Disease will drink as much as 20 gallons per day. Affected horses often have a pot-bellied appearance, combined with a loss of muscle on the topline. In addition, horses with Cushing's Disease are often more susceptible to other diseases because their immune system has been compromised.
What Causes Cushing's Disease?
Cushing's Disease is caused by a tumor of the pituitary gland, the small gland at the base of the brain which regulates the rest of the horse's endocrine system. As the tumor grows, it puts pressure on the nearby hypothalamus, which is what regulates the body temperature. This is believed to be the primary cause of the distinctive coarse, wavy hair coat. As cells in the pituitary gland become overactive, they secrete excess quantities of a peptide called pro-opiomelanocortin (POMC, for short), causing the entire endocrine system to go out of balance.
Diagnosing Cushing's Disease
Even though the clinical symptoms are often very obvious, a number of tests have been developed over the years to positively diagnose Cushing's Disease in horses. These include the dexamethasone suppression test (DST) and ACTH (adrenocorticotropic hormone) stimulation. In addition, a test which combines the DST with thyrotropin-releasing hormone (TRH) stimulation was developed by a team at the University of Tennessee to eliminate the overlap, seen in a number of cases, between the values of normal horses and those with pituitary tumors.
Treating Cushing's Disease
The good news is that once Cushing's Disease has been diagnosed, treatment is simple, if long term, and in many cases allows the horse to return to normal health.
Bromocriptine mesylate, a dopamine agonist, is one of the drugs used to treat Cushing's Disease. It mimics dopamine to inhibit overproduction of activating peptides, and it has been shown to mildly decrease plasma ACTH and cortisol levels. There are problems with absorption which limit its practical use, however, and a number of side effects have been reported.
A more successful drug in the treatment of Cushing's Disease is cyproheptadine, a serotonin blocker. This is available in tablet form, which is easily absorbed into the horse's system, making it a much more practical treatment.
The simplest way to monitor the horse's improvement is to watch the water intake over a 24 hour period. The drug levels are slowly increased till the water consumption returns to normal. Once the horse has shown maintained improvement for a month, the dosage of the drug is decreased until a maintenance dosage is reached.
It is important to note that while these drugs treat the symptoms, they do not treat the pituitary tumor itself. Horses with mild Cushing's Disease may be returned to good health for a number of years, but eventually the tumor will compromise the horse's life and euthanasia becomes the kindest option.
Akha in the Language CloudPrint
This graph shows the place of this language within the cloud of all living languages. Each language in the world is represented by a small dot that is placed on the grid in relation to its population (on the vertical axis) and its level of development or endangerment (on the horizontal axis), with the largest and strongest languages in the upper left and the smallest and weakest languages (down to extinction) in the lower right. The population value is the estimated number of first language (L1) speakers; it is plotted on a logarithmic scale (where 10⁰ = 1; 10² = 100; 10⁴ = 10,000; 10⁶ = 1,000,000; 10⁸ = 100,000,000). The value for the development versus endangerment dimension is the estimated level on the EGIDS scale. (See the pages on Development and Endangerment for a fuller explanation.)
The language in focus is represented by a large, colored dot. When the population is unknown, a color-coded question mark appears at the bottom of the grid. When there are no known L1 speakers, an X appears at the bottom of the grid. The color coding matches the color scheme used in the summary profile graphs on the navigation maps for the site. In this scheme, the EGIDS levels are grouped as follows:
- Purple = Institutional (EGIDS 0-4) — The language has been developed to the point that it is used and sustained by institutions beyond the home and community.
- Blue = Developing (EGIDS 5) — The language is in vigorous use, with literature in a standardized form being used by some though this is not yet widespread or sustainable.
- Green = Vigorous (EGIDS 6a) — The language is unstandardized and in vigorous use among all generations.
- Yellow = In trouble (EGIDS 6b-7) — Intergenerational transmission is in the process of being broken, but the child-bearing generation can still use the language so it is possible that revitalization efforts could restore transmission of the language in the home.
- Red = Dying (EGIDS 8a-9) — The only fluent users (if any) are older than child-bearing age, so it is too late to restore natural intergenerational transmission through the home; a mechanism outside the home would need to be developed.
- Black = Extinct (EGIDS 10) — The language has fallen completely out of use and no one retains a sense of ethnic identity associated with the language.
The EGIDS level indicated by the large, colored dot may be higher than the EGIDS level reported in the main entry for the language. This is because a separate EGIDS estimate is made for every country in which a language is used. Our method for calculating the EGIDS level for the language as a whole is not to take an average of all countries, but to report the highest level (that is, most safe) for any country. The logic here is that if the EGIDS level of a language is taken as a predictor of its likely longevity, then its longevity will be determined by where it is the strongest.
Each dot in the cloud is gray at the level of 20% black. As dots are superimposed on each other, the spot gets darker. Thus a spot of total black indicates that at least 5 languages are at the same spot in the cloud. The population scale is continuous; thus the placement in the vertical axis corresponds exactly to population. The EGIDS scale, however, is discrete. Rather than placing all of the dots for a given EGIDS level exactly on the grid line for that level, the dots are “jittered” (that is, the horizontal placement is random within a band around the grid line for the level).
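As a rough sketch of the placement rule described above (this is our own illustration, not Ethnologue's actual plotting code), each dot's coordinates could be computed as follows; the jitter width, the numeric handling of EGIDS sub-levels such as 6a, and the example population are assumptions:

```python
import math
import random

def cloud_position(l1_population, egids_level, jitter=0.3):
    """
    Horizontal axis: the EGIDS level plus a small random offset ("jitter") so
    dots sharing a level do not all land on the same grid line.
    Vertical axis: log10 of the estimated L1 population, or None when the
    population is unknown (such dots go to a marker row at the bottom).
    """
    x = egids_level + random.uniform(-jitter, jitter)
    y = math.log10(l1_population) if l1_population and l1_population > 0 else None
    return x, y

# Example: a vigorous language (EGIDS 6a, coded here simply as 6) with 500,000 L1 speakers
print(cloud_position(500_000, 6))
```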
An organized population-based breast cancer screening program in Norway and an approach to screening that relies on physician- and self-referrals in Vermont are equally sensitive for detecting cancer, researchers report in the July 29 online issue of the Journal of the National Cancer Institute. But the recall rate for abnormal mammograms was lower in Norway.
Breast cancer screening in the United States is usually initiated in response to a physician's recommendation (known as "opportunistic screening"), and women are advised to have annual screening mammograms. By contrast, breast cancer screening programs in Norway and in some other European countries regularly send letters to all women in a specific age range inviting them to have a screening mammogram. The Norway program aims for women to be screened every two years. The differences between the two approaches make it relatively difficult to compare their effectiveness, and few studies have aimed to do so previously.
In the current study, Berta Geller, Ed.D., of the University of Vermont in Burlington, Solveig Hofvind, Ph.D., of the Cancer Registry of Norway, and colleagues compared the screening approaches by looking at the percentage of women who were recalled for a re-evaluation, the screening detection rate of breast cancer, and the rate of interval cancers in 45,050 women in Vermont and 194,430 women in Norway from 1997 to 2003. Women included in the study were aged 50 to 69 years at the time of screening.
The age-adjusted screening detection rate of cancers was similar between the two populations (2.77 per 1,000 woman-years in Vermont versus 2.57 in Norway), however, more than three times as many women were recalled in Vermont than in Norway (9.8 percent versus 2.7 percent, respectively). The rate of interval cancers was higher in Vermont than in Norway (1.24 per 1,000 woman-years versus 0.86), and 55.9 percent of the interval cancers were 15 mm or smaller in Vermont compared with 38.2 percent of the interval cancers in Norway. When all cancers detected during regular screening and between screening mammograms were combined, there were no substantial differences in the prognostic features of invasive cancers detected in the two populations.
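For readers unfamiliar with the unit, a detection rate "per 1,000 woman-years" is simply the number of cancers divided by the accumulated years of follow-up across all women, scaled by 1,000. The snippet below uses invented round numbers chosen only to illustrate the unit; they are not the study's raw counts.

```python
def rate_per_1000_woman_years(cancer_count, woman_years):
    """Detection rate expressed per 1,000 woman-years of follow-up."""
    return 1000.0 * cancer_count / woman_years

# e.g. 500 screen-detected cancers observed over 180,000 woman-years
print(round(rate_per_1000_woman_years(500, 180_000), 2))   # 2.78
```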
The researchers conclude that although most of the women in Vermont were screened twice as often as the women in Norway, the overall rate of cancer detection was similar. Given the shorter interval between screens, Geller and colleagues were surprised to find a higher interval cancer rate in the Vermont women and hypothesize that "Vermont women and/or their health care providers may more readily pursue evaluation of symptoms and clinical findings than their Norwegian counterparts."
"Our results demonstrate that despite its longer screening interŽval, the organized population-based screening program in Norway achieved similar outcomes as the opportunistic screening in Vermont," the authors write.
Contact: Jennifer Nachbur, [email protected], (802) 656-7875
Citation: Hofvind S, Vacek PM, Skelly J, Weaver DL, Geller BM. Comparing Screening Mammography for Early Breast Cancer Detection in Vermont and Norway. J Natl Cancer Inst 2008; 100:1082-1091
The Journal of the National Cancer Institute is published by Oxford University Press and is not affiliated with the National Cancer Institute. Attribution to the Journal of the National Cancer Institute is requested in all news coverage. Visit the Journal online at http://jnci.oxfordjournals.org/.
An image sensor is a device that converts a visual image to an electric signal; it is used mostly in digital cameras and other imaging devices. It is an array of charge-coupled device (CCD) or active-pixel sensor (APS) elements, such as CMOS sensors. Each pixel responds to light by accumulating charge - the more light, the more charge. In this way the optical signal is transformed into an electric signal, which is then converted into digital information. Compared to CCDs, CMOS sensors are faster, smaller, and cheaper because they are more highly integrated, which also makes them more power efficient.
Ewings who settled the area in and around Robinson Township, Allegheny County, Pennsylvania, just before and after the Revolutionary War. For a summary descendancy chart for these settlers, click here.
- Moses Ewing (1725-1798): Moses was an older brother of Squire James Ewing. He almost certainly joined his younger brother, James, in migrating to Southwestern Pennsylvania about 1773. Like his brother, he homesteaded land in Collier Township. While alive, he rented part of his approximately 400 acre homestead; having no sons, he needed help in working the land. At his death, Moses left his land to his brother James. [Ancestors, Descendants]
- James Ewing, Squire (1733-1825): James Ewing went West about the time (1773) that Westmoreland County was created from Bedford County. He was accompanied by his wife, Mary McKown, and first son, William. He was also probably accompanied by his brother Moses. It is probable that he first settled land along Montours Run, near its mouth with the Ohio River, in the area that became known as Ewing's Mill. His 1814 will refers to land on Montours Run adjoining David Smith and William Holland, and a second parcel held by patent on which are both a Grist Mill and Saw Mill, lying (together, my whole claim on the waters of Montours Run). He subsequently homesteaded some 680 acres in the area near Walker's Mill in Collier Township, with the assistance of several slaves who helped clear his land and erect improvements. Later, he purchased an additional 350-or-so acres from Robert Boyd, this land lying between his original homestead and the land of Isaac Walker. He apparently also owned some land in North Fayette Township, to the west of the Walkers, that he transferred to his cousin Samuel Ewing (1741-1820). [Ancestors, Descendants]
- Samuel Ewing (1751-1805): This Samuel was a first cousin of Squire James. He and Jean (Neal) Ewing moved from Perry County to Southwestern Pennsylvania prior to the birth of their son John in 1798. At the time of his death in 1805, Samuel's family lived in Moon Township; this is probably where he originally settled. It appears that after Samuel's death, his family relocated to Beaver County to live near Samuel's brother James. It is most likely that this James settled in Beaver County prior to his brother's death. Alternatively, however, James may have moved to Beaver County at the time of his brother's death to support his sister-in-law, Samuel's wife Jean and her young, minor children. [Ancestors, Descendants]
- Alexander Ewing (1752-1798): Alexander was a nephew of Squire James Ewing. Prior to moving to Allegheny County, he lived in Adams County, Pennsylvania. He was a teamster and hauled goods back and forth between Eastern Pennsylvania and the Pittsburgh area. He moved to the Allegheny County area about 1779 and settled in North Fayette Township on land to the west of land owned by Isaac and Gabriel Walker (and which he possibly purchased from the Walker's). He was accompanied by his wife and his first two sons, John and Thomas. [Ancestors, Descendants]
- Samuel Ewing (1752-1820): Samuel Ewing, a first cousin of Squire James, and his wife Mary Oldham started out their married life in Cecil County, Maryland. They may have moved to Allegheny County in steps, stopping in Redstone, near Uniontown, Fayette County, Pennsylvania. They were living in North Fayette Township by the time of the 1800 Census but not, as well as can be determined, at the time of the previous 1790 Census. Their son Amos had married by the time of the 1800 Census and is listed separately. Samuel and his son probably settled on land near the current town of Oakland in North Fayette Township that he obtained from his cousin James Ewing (1733-1825). [Ancestors, Descendants]
- Moses Ewing (1762-1845): This Moses was a first cousin once removed of Squire James Ewing. Like his second cousin, Alexander Ewing (1740-1798), Moses became attracted to Southwestern Pennsylvania while a teamster hauling freight between Pittsburgh and Eastern Pennsylvania. He migrated to the area in 1792 and eventually settled, some fifteen years later, 180 acres in Robinson Township. [Ancestors, Descendants]
In-Home Behavior & Parent Training Programs
In-Home Behavior and Parent Training Programs to help correct and manage maladaptive behavior(s) while helping to develop a healthier family system
"We went to see our psychiatrist today today, and I told her about Alex's work with my son Charles as his in-home behaviorist, and she said "wow! you get great services!" I told her it was FACT! No other agency would be doing this kind of great work with him."
Thanks again, Suzanne
A variety of behaviors can manifest in childhood. While some habits and phases extinguish on their own, others can become increasingly maladaptive and difficult to manage. When this happens, it can derail child development and negatively impact the entire family system. Such behaviors often include tantrums and aggression, as well as self-stimulatory and self-injurious behaviors, all of which can be exacerbated by deficits in communication and social skills, anxiety, transitions, and general life changes. When unaddressed, maladaptive behaviors begin to affect the whole family, resulting in negative sibling interactions and making everyday activities, like going to the store or having dinner out in a restaurant, difficult or even impossible for families to do.
FACT’s In-Home Behavior and Parent Training Programs begin with an assessment that examines the function, severity, and frequency of maladaptive behaviors. This assessment is comprehensive, incorporating direct and indirect measures, and focuses on ten different areas of skill: Communication, Community Use, Functional Academics, Home Living, Heath and Safety, Leisure, Self-Care, Self-Direction, Social, and Work.
Following this assessment, a recommendation is made that includes the type of service and recommended hours, as well as possible funding sources, and specific interventions targeted to address the identified behaviors.
If the decision to begin an in-home program is made, a trained behaviorist will come to the home to work with all family members, and any other adults that interact with the child (nanny, babysitter, grandparents, etc.), on how to correct and manage maladaptive behaviors, as well as how to successfully teach the child appropriate replacement behaviors. Additionally, a licensed clinician will supervise and oversee all behavior programs, and will come to the home on a monthly basis to work with the family.
What techniques and interventions are used to help correct and replace maladaptive behaviors?
At FACT, we look at the whole family system and how the system is impacting, creating, and maintaining maladaptive behavior(s) for everyone in the system. To do this many interventions and approaches are used. The approaches used include:
- ABA- Applied Behavior Analysis
This approach looks at the functions of behavior, what may be reinforcing the behavior(s), and ways to replace the maladaptive behavior and reinforce other behaviors. Some of the interventions used may include: operant reinforcement systems, token economies, response cost, checklists, visual/written/pictorial schedules, classical conditioning, shaping, chaining, data collection, and more.
- Cognitive Behavioral Therapy
This approach looks at how one thinks (cognition), feels (emotion), and acts (behavior), and how the interaction defines the choices we make. Interventions include: keeping a diary of thoughts and behaviors, questioning and testing assumptions and patterns of thoughts, systemically facing and experiencing avoided activities, testing out new behaviors, and learning and using relaxation techniques.
- Social Skills/Communication Training
This approach uses modeling, structured role play and rehearsal, conflict resolution, and active listening to help shape, teach, grow, and replace behavior(s) that are harming the child's social skills. Reinforcement for appropriate communication, along with visual or pictorial communication aids, is used to help grow the communication styles the child uses.
- And more...
Based on the family structure and the issues present, other approaches may be used, including narrative therapy, strategic therapy, family systems, and others.
Let an In-Home Behavior and Parent Training Program help bring a healthier, more fulfilling and productive lifestyle for your family today.
Philip the Good
Philip the Good, 1396–1467, duke of Burgundy (1419–67); son of Duke John the Fearless. After his father was murdered (1419) at a meeting with the dauphin (later King Charles VII of France), Philip formed an alliance with King Henry V of England. Under the Treaty of Troyes (1420; see Troyes, Treaty of) Philip recognized Henry V as heir to the French throne; the dauphin was disinherited. Philip aided the efforts of Henry and his successor to establish English rule in France. Finally, in return for important concessions, Philip ended the English alliance and made peace with Charles VII in the Treaty of Arras (1435; see Arras, Treaty of). Despite the truce, Philip's relations with Charles were not always amicable. He temporarily supported (1440) the rebellious nobles in the Praguerie and gave asylum to the dauphin (later King Louis XI), who was constantly in revolt against his father. During Philip's reign the territory of his duchy was more than doubled. Through inheritance, treaty, conquest, and purchase he acquired Hainaut, Holland, Zeeland, Friesland, Brabant, Limburg, Namur, Luxembourg, Liège, Cambrai, and numerous other cities and feudal dependencies. Uprisings in Bruges (1436) and in Ghent (1450–53) were suppressed. In 1463, Philip was forced to return some of his holdings to Louis XI. His vow (1454) to go on crusade was never fulfilled. Philip's court was the most splendid in the Western Europe of his time. He was succeeded by his ambitious son, Charles the Bold, who took control of the government from Philip in 1465.
See biography by R. Vaughan (1970); J. L. A. Calmette, The Golden Age of Burgundy (1949, tr. 1962).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Soloveitchik, Joseph (sŏˌləvāˈchĭk), 1903–93, Jewish Talmudist and philosopher. Born into a rabbinic family in Poland, he was educated according to his grandfather's analytical method of Talmud study and also earned a Ph.D. at the Univ. of Berlin in 1931. In 1932 he came to the United States where he became rabbi in Boston. In 1941 he succeeded his father as professor of Talmud at Yeshiva Univ., New York. In essays and especially in oral discourse, Soloveitchik stressed the need for halakah as a means of gaining mastery over one's own nature, as well as for drawing closer to God. As a teacher, and as chairman of the Halakah Commission of the Rabbinical Council of America, he exerted a large influence over mainstream Orthodox Jewry in America.
See his Halakhic Man (1944, tr. 1983) and The Halakhic Mind (1984).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Mesabi (məsäbˈē), range of low hills, NE Minn., once famous for its extensive iron ore deposits. The ores were found in a belt c.110 mi (180 km) long and from 1 to 3 mi (1.6–4.8 km) wide between Babbitt and Grand Rapids, occurring in horizontal layers (up to 500 ft/152 m thick) near the surface and mined by the open pit method. Reserves of high-grade hematite iron are now exhausted, and lower-grade taconite deposits are being worked. The taconite contains mostly chert and magnetite (an iron-bearing mineral) and must undergo a costly and complex beneficiation process before being shipped in the form of pellets containing c.60% iron. Most of the ore found is shipped by rail to Duluth, Minn., and other ports on Lake Superior. The Mesabi iron ore deposits were first discovered in 1887 by Leonidas Merritt and his brothers, who organized the Mountain Iron Company in 1890 to mine the ore; John D. Rockefeller gained control of the company in the Panic of 1893.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
China’s Low-Carbon Economy and its Target
The impact of climate change is global, and it demands urgent action. According to the Stern Review, “Business-As-Usual” (BAU) will cost five to ten percent of global GDP in 2050. However, if measures to reduce greenhouse gas emissions are taken now, the macroeconomic costs will range from a one percent gain to a 5.5 percent decrease of global GDP, and amount to less than an eighth of a percentage point off annual GDP growth, the Intergovernmental Panel on Climate Change (IPCC) predicts in its Fourth Assessment Report.
McKinsey Global Institute put these figures in perspective: if one were to view this spending as a form of insurance against potential damage due to climate change, it might be relevant to compare it to global spending on insurance, which reached 3.3 percent of GDP in 2005. The treatment of crucial assumptions on global energy consumption scenarios and the use of top-down economic modeling (for example, estimating the effect that measures such as a carbon tax would have on a choice of energy sources) is still a controversial matter. However, the key message is clear: choices about the scale and timing of climate change mitigation must balance the economic costs of more rapid emission reductions against the medium and long term risks of delay.
Hence the term “low-carbon economy,” embraced by China’s top political think tank, the National Development and Reform Commission (NDRC). It is hard to find an NDRC equivalent outside China, since it plays the role of not only a policy advisor to the government, but also an executor of its policymaking. In the field of greenhouse emissions reductions in China, NDRC plays a pivotal role.
One of the most important measures the NDRC is using to steer the nation’s low-carbon development is the so-called Five-Year Plan, in which a compulsory, quantifiable and verifiable target of energy consumption is stipulated. In January 2011, NDRC director Zhang Ping announced that China has “basically fulfilled” its energy consumption reduction per GDP unit target as mandated in the 11th Five-Year Plan (2006-2010), a 20 percent reduction in 2010 from 2005 levels. The question of whether the 20 percent target is ambitious enough or not has two sides of a coin, and this coin embodies the Chinese characteristics of a low carbon economy.
On one hand, this target is pegged to GDP, which means a significant GDP growth would still permit a significant growth of energy consumption. Given that China’s real GDP growth, adjusted for inflation, during 2005-2010 is above 60 percent, one might wonder how much energy consumption is really reduced. On the other hand, this reduction target is not an easy task, especially since China is still predominately fuelled by coal, and the transition to a less energy-intensive one will take decades. What was probably unforeseen by NDRC was the outbreak of an economic crisis that pushed the central government to adopt a basket of stimulus programs aiming to boost the nation’s economy and global competiveness — with a large part of the benefit accruing to high-energy industries such as steel and cement.
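A back-of-the-envelope calculation makes the point concrete. Because the target is energy per unit of GDP, absolute energy use is the intensity multiplied by GDP; using the article's round figures (a 20 percent intensity cut against roughly 60 percent real GDP growth), absolute consumption could still rise by about 28 percent over the period:

```python
def absolute_energy_change(gdp_growth, intensity_reduction):
    """Change in absolute energy use when intensity (energy per unit of GDP) falls
    while GDP grows: (1 + growth) * (1 - reduction) - 1."""
    return (1 + gdp_growth) * (1 - intensity_reduction) - 1

# ~60% real GDP growth over 2005-2010 combined with the 20% intensity target:
print(f"{absolute_energy_change(0.60, 0.20):+.0%}")   # +28%
```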
In Chinese, “basically” is understood as “more or less,” without referring to an accurate figure. But this is the first time in Chinese history that such a clear target on energy consumption has ever been announced, considering the complexity of China energy supply as well as the country’s lack of data transparency.
Furthermore, the target was issued to provincial and city governors as a top-down target, a metric on which their political performances will be evaluated. So the motivation to achieve the 20 percent through real actions is strong, as much as the motivation to present a figure as close as possible or marginally higher. In February, voices of feedback were heard from all over China in the auspicious pre-Spring-Festival atmosphere, with regional media reporting that most provinces and cities have “basically met” the target, with the sole exception of Xinjiang, which fulfilled a 10.2 percent reduction.
To verify the target’s fulfillment, an Energy Saving Evaluation Group —consisting of experts from NDRC, the China Statistics Bureau and China Energy Bureau — was sent to various cities after the Spring Festival, in order to conclude with a performance score. To date, the final result is yet to be published.
But what about actions so far? The NDRC Director once criticized the simplistic action of cutting off or restricting electricity supply adopted in some Chinese regions as “improper.” Let’s switch to those proper actions, categorized by a) policy levers, b) public awareness and behavior changes and c) technological development and applications.
It is only fair to say that the NDRC target is more than a demonstrated commitment from the very top political level: it is a creative lever to augment the nation’s enthusiasm for energy saving. Campaigns and posters carrying the slogans of “low-carbon life” can be found everywhere in China, and the Shanghai Expo was also positioned as a “low-carbon Expo.” Even the Beijing Olympic Games went “green.” Public awareness of the whole climate change issue is solidly established in China. Significant improvement in behavior change is observed, such as the electricity saving of individual households, though this may be induced more by the rise in electricity prices. It is hard to judge the technological side. However, the large and steadily increasing volume of Chinese imports and exports of environmental technologies is already solid proof that at least things are going in the right direction.
So what comes next? NDRC unveiled the twelfth Five-Year Plan (2011-2015) during this year’s “Two Conferences” (meetings of the National People's Congress and the Chinese People's Political Consultative Conference), in which a further 16 percent reduction target for energy consumption per GDP unit was announced as a compulsory task. A novelty in this blueprint is that China also commits to a 17 percent reduction target for greenhouse emissions per GDP unit. The baseline for comparison is 2010, a figure yet to be concluded.
Hawaii Vital Records (FamilySearch Historical Records)
From FamilySearch Wiki
This article describes a collection of historical records scheduled to become available at FamilySearch.org.
Collection Time Period
This collection includes records for the years 1904 to 1949.
The Hawaii vital records are records of births, marriages, and deaths that happened in the Hawaiian Islands. They include a lot of genealogical information that should help you in your search for your Hawaiian ancestors.
The key genealogical facts found in the birth records may include the following information:
- Child’s name
- Child’s sex
- Birth date
- Birth place
- Registration date
- Parents' names
- Parents' residence
- Father’s occupation
- Parents' birth places
The key genealogical facts found in the marriage records may include the following information:
- Full name of bride and groom
- Marriage date
- Marriage place
- Residence of bride and groom
- Age or birthdate of bride and groom
- Bride and groom’s occupation
- Birth place of bride and groom
- Parents of bride and groom
- The number of the marriage (first, second, etc.) for the bride and groom
The key genealogical facts found in the death records may include the following information:
- Name of deceased
- Death date
- Death place
- Marital status
- Cause of death
- Birth date and place
- Names of parents
- Surviving spouse
- Informant’s name
- Informant’s residence
How to Use the Record
To begin your search it is helpful to know the following:
- The place where the birth, marriage, or death occurred
- The approximate date the event occurred
- The name of the individual or individuals such as the names of the bride and groom, the infant, or the deceased
Compare the information in the record to what you already know about your ancestors to determine if this is the correct person. You may need to compare the information of more than one person to make this determination.
When you have located your ancestor’s record, carefully evaluate each piece of information given. These pieces of information may give you new biographical details and lead you to other records about your ancestors. Add this new information to your records of each family.
- Use the marriage date and place as the basis for compiling a new family group or for verifying existing information.
- Use the birth date or age along with the place of birth of each partner to find a couple's birth records and parents' names.
- Use the birth date or age along with the place of birth to find the family in census records.
- Use the residence and names of the parents to locate church and land records.
- Occupations and titles listed can lead you to other types of records such as employment, military, and church records.
- Use the parents' birth places to find former residences and to establish a migration pattern for the family.
- The name of the officiant may be a clue to the couple's religion or area of residence in the county.
- Use a marriage number to identify previous marriages.
- Continue to search the records to identify children, siblings, parents, and other relatives who may have been born, married, or died in the same county or nearby. This can help you identify other generations of your family or even the second marriage of a parent. Repeat this process for each new generation you identify.
- When looking for a person who had a common name, look at all the entries for the name before deciding which is correct.
Keep in mind:
- The information is usually reliable, but depends upon the reliability of the informant.
- Earlier records may not contain as much information as records created after the mid-1800s.
- There is also some variation in the information given from one record to another record.
If you are unable to find the ancestors you are looking for, try the following:
- Check for variant spellings of the surnames.
- Check for a different index. There are often indexes at the beginning of each volume.
- Search the indexes and records of nearby counties.
For a summary of this information see the wiki article: United States, How to Use the Records Summary.
Statewide registration of births and marriages began in 1842. Registration of deaths began in 1859. Few records exist until 1896, however, and registration was not generally complied with until 1929.
Why the Record Was Created
These records were created to keep track of the vital events happening in the lives of the citizens and to safeguard their legal interests.
These records are generally reliable but can vary depending on the knowledge of the informant.
Related Wiki Articles
Contributions to This Article
We welcome user additions to FamilySearch Historical Records wiki articles. Guidelines are available to help you make changes. Thank you for any contributions you may provide. If you would like to get more involved, join the WikiProject FamilySearch Records.
Citing FamilySearch Historical Collections
When you copy information from a record, you should list where you found the information. This will help you or others to find the record again. It is also good to keep track of records where you did not find information, including the names of the people you looked for in the records.
A suggested format for keeping track of records that you have searched is found in the Wiki Article: How to Cite FamilySearch Collections.
Examples of Source Citations for a Record in This Collection
- United States. Bureau of the Census. 12th census, 1900, digital images, From FamilySearch Internet (www.familysearch.org: September 29, 2006), Arizona Territory, Maricopa, Township 1, East Gila, Salt River Base and Meridian; sheet 9B, line 71
- Mexico, Distrito Federal, Catholic Church Records, 1886-1933, digital images, from FamilySearch Internet (www.familysearch.org: April 22, 2010), Baptism of Adolfo Fernandez Jimenez, 1 Feb. 1910, San Pedro Apóstol, Cuahimalpa, Distrito Federal, Mexico, film number 0227023
Sources of Information for This Collection
Hawaii. Vital Records, 1904-1949. Hawaii Department of Health, Office of Health Status Monitoring. Honolulu.
Trust In God But Tie Your Camel: and Other Arab Proverbs
Compiled by: Stephen J. McGrane
Publisher: Llumina Press
Publication Date: October 2009
Reviewed by: Ellen Feld
Review Date: November 23, 2009
With international tensions between the Western and Islamic worlds increasing on an almost daily basis, it is imperative that the two cultures understand each other. One of the best ways to truly comprehend how a people think is by studying their language, including both daily usage as well as proverbs. Enter Stephen J. McGrane, businessman and seasoned traveler to the Arab world. McGrane has compiled an extremely thorough volume of Arab proverbs that opens the eyes of Westerners to the views of Arabs. Broken down into categories from “fear” to “patience” to “marriage, love, and beauty,” to “fate and luck,” this book will teach the Westerner much about Arab culture.
Trust In God But Tie Your Camel is a very simply laid out book, with just three proverbs per page. You could easily read it in one sitting but I’d suggest taking your time, digesting the meaning of each proverb as you slowly make your way through the pages. Many of the proverbs have subtle meanings that could easily be lost with a cursory glance.
There is a Preface that notes the importance of learning about a culture through its proverbs, as well as an explanation of how the author arranged the text, noting that Western counterparts, if they exist, are included.
It is interesting to see the frequent Bible quotes that coincide with the various proverbs, as they truly give the reader insight into how various cultures share similar beliefs. For example, a proverb attributed to the prophet Mohammed states, “None of you will be considered believers if you do not love your neighbor as yourself.” Sound familiar? It should if you’ve read Leviticus 19:18 (the New King James version of the Bible), “You shall love your neighbor as yourself.”
As mentioned above, many Arab proverbs have a Western equivalent and McGrane points them out whenever possible. “A thousand curses do not tear a robe,” may not at first sound familiar until you read the Western version, “Sticks and stones may break my bones but words will never hurt me.” Of course, some proverbs have no direct link to the Western world and may leave the reader stumped. “The camel limped from its split lip,” offers no immediate clue as to its meaning, but McGrane provides a Western translation, “A bad workman blames his tools,” which elicits an immediate “oh, yes, I get it,” from the reader.
Quill says: Trust In God But Tie Your Camel should be required reading for anybody who wishes to understand how the Islamic world thinks.
For more information on Trust In God But Tie Your Camel: and Other Arab Proverbs, please visit the book's website at Llumina Press. | fwe2-CC-MAIN-2013-20-42232000 |
The Future of the Dollar
JANUARY 01, 1974 by HENRY HAZLITT
Mr. Hazlitt is the well-known economist, columnist, editor, lecturer and author of numerous books, including What You Should Know About Inflation which is available in paperback from the Foundation for Economic Education.
Before we consider the future of the American dollar it may be wise to cast a glance at the glories of its past and examine the main causes that have brought it to its present humiliating state.
The logical starting point in this examination is Bretton Woods. When the representatives of some forty nations met there in 1944, heretical monetary notions were floating in the air. Lord Keynes, who was there, was their chief spokesman. The most definite of these notions was that the gold standard was a barbarous relic, and neither could nor should be restored. It put every national economy in a strait jacket. It prevented full employment; it strangled economic growth; it tied the hands of national monetary managers. And all for no good reason except an outworn mystique. Besides, there wasn’t enough gold in the world to sustain convertibility.
But because some American Congressmen and some parliaments were thought to have a lingering prejudice in favor of gold, it seemed prudent to compromise, and to set up something that looked almost like a gold standard — a thinly gold-plated standard. So, through an International Monetary Fund (IMF), a sort of world central bank, every other currency was to be pegged at a fixed rate to the Almighty Dollar. Each nation, after fixing an official parity for its currency unit, pledged itself to maintain that parity by buying or selling dollars. The dollar alone was to be convertible into gold, at the fixed rate of $35 an ounce. But unlike as in the past, not everybody who held dollars was to be allowed to convert them on demand into gold; that privilege was reserved to national central banks or other official institutions.
Thus, everything seemed to be neatly taken care of. When every other currency was tied to the dollar at a fixed rate, they were all necessarily tied to each other at fixed rates. Only one currency was tied to the dreadful discipline of gold, and even that in a very limited way. Gold was "economized" as never before. It was now the servant, no longer the master.
In addition, the Bretton Woods agreements provided that if any nation or central bank got into trouble, it was entitled to automatic credit from the Fund, no questions asked.
Thus, not only released from a strict gold standard, but tempted to imprudence, individual nations felt free to expand their paper money and credit supply to meet their own so-called domestic "needs." The politicians and the monetary managers in practically every country were infected with a Keynesian or inflationary ideology. They rationalized budget deficits and continuous monetary and credit expansion as necessary to maintain "full employment" and "economic growth." As a consequence, there were soon wholesale devaluations. The IMF has published hundreds of thousands of statistics; but the single figure of how many devaluations there were between the opening of the Fund and August 15, 1971, when the dollar itself became officially inconvertible into gold, the IMF has never published.
There were certainly hundreds of devaluations. To my knowledge, practically every currency in the Fund, with the exception of the dollar, was devalued at least once. The record of the British pound was much better than that, say, of the French franc, but the pound itself, which had already been devalued from $4.86 to $4.03 when it entered the IMF, was devalued again from $4.03 to $2.80 in September 1949 (an action that touched off 25 more devaluations of other currencies within a single week), and devalued still again from $2.80 to $2.40 in November, 1967.
Devaluation, let us remember, is an act of national bankruptcy. It is a partial repudiation, a government welching on part of its domestic and foreign obligations. Yet, by repetition by all the best countries, devaluation acquired a sort of respectability. It became not a swindle, but a "monetary technique." Until the dollar went off gold in August 1971 and was devalued in December, we heard incessantly how "successful" the Bretton Woods system had proved.
During the early part of this period, however, the world suffered from what everybody called a "shortage of dollars." The London Economist, among others, even solemnly argued that there was now a permanent "shortage of dollars." Americans thought so too. Our monetary managers seemed completely unaware of the tremendous responsibility we had assumed when we allowed the dollar to become the standard and the anchor for all the other currencies of the world. Our money managers never dreamed that it was possible to create an excess of dollars. They issued and poured out dollars and sent them abroad in foreign aid. Total disbursements to foreign nations, in the fiscal years 1946 through 1971, came to $138 billion. The total net interest paid on what the United States borrowed to give away these funds amounted in the same period to $74 billion, bringing the grand total through the 26-year period to $213 billion.
This amount was sufficient in itself to account for the total of our Federal deficits in the 1946-1972 period. The $213 billion foreign aid total exceeds by $73 billion even the $140 billion increase in our gross national debt during the same years. Foreign aid was also sufficient in itself to account for all our balance-of-payments deficits up to 1970.
We created a good deal of this money through internal inflation. From January 1946 to August 8, 1973, the money supply, as measured by currency in the hands of the public plus demand bank deposits, increased from $102 billion to $264 billion, an increase of $162 billion, or of 159 per cent. In the same period the money supply as measured by currency plus both demand and time deposits increased from $132 billion to $549 billion, an increase of $417 billion, or 316 per cent.
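As a quick check, the percentage increases quoted above follow directly from the dollar figures given (this is only a verification of the stated numbers, not new data):

\[
\frac{264 - 102}{102} \approx 1.59 \;(159\%), \qquad \frac{549 - 132}{132} \approx 3.16 \;(316\%).
\]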
Because of what our monetary authorities believed was the necessity of keeping this enormous inflation going, they adopted one expedient after another. In 1963, blaming the deficit in our balance of payments on private American investment abroad, they put a penalty tax on purchases of foreign securities. In 1965 they removed the legal requirement to keep a gold reserve of 25 per cent against Federal Reserve notes. They resorted to a "two-tier" gold system. Next they invented Special Drawing Rights, or "paper gold." But all to no avail. On August 15, 1971, they officially abandoned gold convertibility. They devalued the dollar by about 8 per cent in December, 1971. They devalued it again, by 10 per cent more, on February 15, 1973.
Before we bring this dismal history any further down to date, let us pause to examine some of the chief fallacies prevailing among the world’s journalists, politicians, and monetary managers that have brought us to our present crisis.
Because we were sending so many of our dollars abroad, the real seriousness of our own inflation was hidden both from our officials and from the American public. We contended that foreign inflations were greater than our own, because their official price indexes were going up more than ours were. What we overlooked —what most Americans still overlook — is that we were exporting part of our inflation and that foreign countries were importing it.
This happened in two ways. One was through our foreign aid. We were shipping billions of dollars abroad. Part of these were being spent in the countries that received them, raising their price level but not ours. The other way in which we exported inflation was through the IMF system. Under that system, foreign central banks bought our dollars to use them as part of their reserves. But in addition, under the rules of the IMF system, central banks were obliged to buy dollars, whether they wanted them or not, to keep their own currencies from going above parity in the foreign exchange market. The result is that foreign central banks and official institutions today hold some 71 billion of our dollars.
These dollars will eventually come home to buy our goods or make investments here. When they do, their return will have an inflationary effect in the United States. Our domestic money supply will be increased even if our Federal Reserve authorities do nothing to increase it.
Balance of Payments
The meaning of the "deficit" in our balance of payments has been grossly misunderstood. It has not been in itself the real disease, but a symptom of that disease. The real question Americans should have asked themselves is not what consequences the deficits in the balance of payments caused, but what caused the deficits. I have just given part of the answer — our huge foreign aid over the last 27 years, and the obligation of foreign central banks under the Bretton Woods agreements to buy dollars. But the foreign central banks had to buy dollars because dollars had become overvalued at their official rate. They became overvalued because the U.S. was inflating faster than some other countries.
After the United States formally suspended gold payments, and after the dollar was twice devalued, foreign banks no longer felt an obligation to buy dollars. The dollar fell to its market rate, and as one consequence we again have a monthly excess of exports. The economists who had all along been demanding the restoration of free-market exchange rates were right. Now that the dollar is no longer even nominally convertible into gold there is no longer any excuse for governments to try to peg their paper currency units to each other at arbitrarily fixed rates. The IMF system ought to be abandoned. The International Monetary Fund itself ought to be liquidated. Paper currencies should be allowed to "float" — that is, people should be allowed to exchange them at their market rates.
But it is profoundly wrong to assume, as many economists and laymen unfortunately now do, that daily and hourly fluctuating market rates for currencies will be alone sufficient to solve the multitudinous problems of foreign commerce. On the contrary, these wildly fluctuating rates create a serious impediment to international trade, travel and investment. They force importers, exporters, travelers, bankers, and investors either to become unwilling speculators or to resort to bothersome and costly hedging operations. With 125 national currencies represented in the IMF, there are some 7,750 changing cross-rates to keep track of, and twice as many if you state each cross-rate both ways. With a gold standard gone, with the dollar standard gone, there is no longer a single accepted unit in which all of these rates can be stated.
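The figure of some 7,750 cross-rates follows from elementary combinatorics: each unordered pair of the 125 currencies defines one cross-rate. This simply restates the arithmetic implied in the paragraph above:

\[
\binom{125}{2} = \frac{125 \times 124}{2} = 7{,}750, \qquad 2 \times 7{,}750 = 15{,}500 \text{ if each rate is stated both ways.}
\]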
Some Gain — Some Loss
It is a great gain when currencies can be exchanged at their true market rates. Since this has happened the American trade balance has improved. In the second quarter of 1973, for example, there was again a surplus of exports. In July, 1973, American exports in dollar terms were the highest for any single month on record. But it is one thing to allow trade to improve by abandoning arbitrary pegs on foreign-exchange rates; it is quite another thing for a country to seek to increase its exports at the expense of its neighbors by deliberate devaluation. Yet this is what the United States government has very foolishly done.
In early August, 1973, Frederick B. Dent, the U. S. Secretary of Commerce, assured the American public that the devaluations of the dollar had provided the nation with a "bright opportunity." "Without question," he added, "the most important factor in the improving trade trend is the combination of the two devaluations." In fact, the U. S. Department of Commerce placed an advertisement in the issue of Time of July 2, and in other magazines, declaring that to the U. S. exporter the devalued dollar means "vastly improved prospects," that it would help him to capture "a bigger share of over-sea markets," and that it was up to him to "start putting the devalued dollar to work."
The basic fallacy in this euphoric picture is that it looks only at the short run consequences of devaluation and even at these only as they affect a small segment of the population.
It is true that the first effect of a devaluation, if it is confined to a single country, is to stimulate that country’s exports. Foreigners can buy that country’s products cheaper in terms of their own money. Thus, as the Department of Commerce’s ad correctly pointed out: "For instance, an American product for which a West German importer paid 1000 deutsche mark only 18 months ago would now cost him as little as 770 marks. Or about 23 per cent less than before." So the American exporter stands to sell more goods abroad at the same price in dollars, or the same volume of goods in higher prices in dollars, or something in between, depending on whether his product is competitive or a quasi-monopoly.
So far, so good. But U. S. exports amount to only 4½ per cent of the gross national product. Now let us enlarge our view. If the dollar is devalued, say, by a weighted average of 25 per cent in terms of other currencies, something else happens even on the first day after devaluation. The prices of all American imports go up by that percentage (or more precisely, by its converse). Every American consumer has to pay more, directly or indirectly, for meat, coffee, cocoa, sugar, metals, newsprint, petroleum, foreign cars, or whatever. Even the American exporter, as a consumer, has to pay more, and also more for his imported raw materials. So the immediate effect of a devaluation is to force the consumers of the devaluing nation to work harder to obtain a smaller consumption than otherwise of imported goods and services. Is it really a national gain for the American people to sell their own goods for less and buy foreign goods for more?
The belief that devaluation is a blessing, because it temporarily enables us to sell more and forces us to buy less, stems from the old mercantilist fallacy that looked at international trade only from the standpoint of sellers. It was one of the primary achievements of the classical economists to explode this fallacy. As John Stuart Mill said:
The only direct advantage of foreign commerce consists in the imports. A country obtains things which it either could not have produced at all, or which it must have produced at a greater expense of capital and labor than the cost of the thing which it exports to pay for them…
The vulgar theory disregards this benefit, and deems the advantage of commerce to reside in the exports: as if not what a country obtains, but what it parts with, by its foreign trade, was supposed to constitute the gain to it.
So far I have considered only the immediate effects of a devaluation. Now let us look at the longer effects. The devaluation or depreciation of a currency soon leads to a rise of the internal price level. The prices of imported goods, as I have just pointed out, have a corresponding rise immediately. The demand for exports rises, and therefore the prices of export goods rise. This rise of prices leads to increased borrowing by manufacturers and others to stock the same volume of raw materials and other inventories. This leads to an expansion of money and credit which soon makes other prices rise. (Often, of course, the causation is the other way round: an expansion of a country’s currency and a consequent rise of its internal price level will soon be reflected in a fall of its currency quotation in the foreign exchange market.) In brief, internal prices soon adjust to the foreign-exchange quotation of the currency, or vice versa.
We can see more clearly how this must take place if we look at a freely transportable international commodity like wheat, copper, or silver. Let us say, for example, that copper is 50 cents a pound in New York when the deutsche mark in the foreign exchange market is 25 cents. Then purchases, sales, and arbitrage transactions will have brought it about that the price of copper in Munich is four times as high in marks as in dollars plus costs of transportation.
Suppose the dollar is devalued or depreciated so that the mark now exchanges for 40 cents. Then, assuming that the price of copper in terms of marks does not change (and though I have been specifically mentioning marks, dollars, and copper I intend this as a hypothetical and not a realistic illustration), purchases, sales, and arbitrage transactions will now bring it about that the price of copper in New York will have to rise 60 per cent in terms of dollars. To bring this new adjustment about, more copper will flow from the U. S. to Germany. But after this temporary stimulus to American export, the new price adjustment will bring it about that, other things being equal, the relative amount of copper exported may be no different than before the devaluation.
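To spell out the arithmetic of this hypothetical illustration (the prices and exchange rates are the author's assumed figures, not market data): arbitrage ties the Munich price of 2 marks per pound to the New York price, which is indeed four times the 0.50 dollar figure; when the mark rises to 40 cents and the mark price is unchanged, the dollar price must rise to 80 cents, a 60 per cent increase.

\[
\frac{\$0.50 \text{ per lb.}}{\$0.25 \text{ per mark}} = 2 \text{ marks per lb.}, \qquad 2 \text{ marks} \times \$0.40 \text{ per mark} = \$0.80, \qquad \frac{0.80 - 0.50}{0.50} = 60\%.
\]

(Transportation costs are ignored, as in the text's own simplification.)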
A Brief Period of Transition
I have been speaking of international commodities, traded on the speculative exchanges, and easily and quickly transportable. In these commodities the international price adjustments will take place in a few days or weeks. The price adjustments of most other goods will, of course, take place more slowly. The main point to keep in mind is that there is a constant tendency for the internal purchasing power of a currency to adjust to its foreign-exchange value —and vice versa. In other words, there is a constant tendency for the internal prices in a country to adjust to the changing foreign-exchange value of its currency —and vice versa. Though our modern monetary managers and secretaries of commerce seem to know nothing about this, the purchasing power theory of the exchanges was first explained a century and a half ago by Ricardo.
In other words, the alleged foreign trade "advantages" of a devaluation last for merely a brief transitional period. Depending on specific conditions, that period may stretch over more than a year or less than twenty-four hours. It tends to become shorter and shorter for any given country as depreciation of its currency continues or devaluations are repeated. Internal currency depreciation usually lags behind external depreciation, but the lag tends to diminish.
Statistical studies have been made of the relationships of the internal and external purchasing power of a currency under extreme conditions — for instance, the German mark during the 1919-1923 inflation. (See The Economics of Inflation, by Constantino Bresciani-Turroni, 1937.) It would not be too hard for any competent statistician, with the help of a copy of International Financial Statistics, published monthly by the IMF, to put together revealing comparisons of foreign-exchange rates and internal prices for any country that publishes reasonably honest wholesale or consumers price indexes.
It is instructive to recall, incidentally, that at the height of the German hyperinflation, which eventually brought the mark to one-trillionth of its former value, monthly exports, measured in tonnages, fell to less than half of what they had previously been, while the tonnage of imports doubled or tripled.
In brief, the pursuit of a more "favorable" balance of payments, or a trade "advantage," through depreciation or devaluation of one’s own currency, is the pursuit of a will-o-the-wisp. Any gain of exports it brings to the devaluating nation is temporary and transient, and is paid for at an excessive cost — an internal price rise and all the economic distortions and social discontent and unrest this brings about.
The usual criticism of currency devaluation is that it will provoke reprisals; that other countries will try the same thing, and the world may be plunged into competitive devaluations and trade wars. This objection is, of course, both a valid and a major one. But what I have been trying to emphasize here is a point that few of our monetary managers have grasped — that even if there is no retaliation, devaluation as a deliberate policy pursued for the sake of a foreign trade gain is self-defeating and stupid. The two American devaluations, for example, were monumental blunders. If the world’s monetary managers can be brought to learn this one lesson, the economic and political gain will be immense.
What steps should be taken to halt the present world inflation and return the world to sound money? The immediate steps are simple and can be briefly stated.
The United States — and for that matter every country — should forthwith allow its citizens to buy, sell, and make contracts in gold. This would be immediately followed by free gold markets, which would daily measure the real depreciation in each paper currency. Gold would immediately become a de facto world currency, whether "monetized" or not. The metal itself would not necessarily change hands with each transaction, but gold would become the unit of account in which prices would be stated. Exporters would be insured against the depreciation of the currencies in which they were being paid.
The second (and preferably simultaneous) step can be stated more briefly still. Every nation should refrain from further increase in its paper money and bank credit supply.
For the United States a special measure would also be needed. A hundred billion dollars or more are held by foreign central banks and foreign citizens. Most of these are no longer wanted. They dangerously overhang the market, and constantly threaten to bring sudden and sharp declines in the dollar.
The U. S. government must do two things. It must follow monetary policies that will assure foreign dollar holders that they are not holding an asset that is likely to depreciate still further but, on the contrary, one that is likely to keep its value or even to appreciate a little. Secondly, the U. S. government should volunteer to fund the dollar overhang. It could do this by offering foreign central banks interest-bearing long-term obligations for their liquid dollar holdings — say, bonds that would be repayable and retirable, principal and interest, in equal installments over a period of twenty-five or thirty years. It should preferably negotiate with each country separately, and should guarantee its bonds by making principal and interest repayable, at the option of the central bank holding them, in either the face value of the dollars or in the currency of the country holding them, at the same ratio to the dollar as of the market rate on the day the agreement was reached. Thus, the Bank of Japan would be paid off, at its option on any payment date, either in dollars or in yen; the Bundesbank either in dollars or in marks; and so on.
Ricardo’s Recommendations of a Full Gold Standard
Of course, the world should eventually return to a full gold standard. A gold standard is needed now for the same reason that David Ricardo gave for it in 1817:
Though it (paper money) has no intrinsic value, yet, by limiting its quantity, its value in exchange is as great as an equal denomination of coin, or of bullion in that coin…. Experience, however, shows that neither a State nor a bank ever have had the unrestricted power of issuing paper money without abusing that power; in all States, therefore, the issue of paper money ought to be under some check and control; and none seems so proper for that purpose as that of subjecting the issuers of paper money to the obligation of paying their notes either in gold coin or bullion.
A return to gold will involve some difficult but not insuperable problems, which we shall not attempt to discuss in detail here. The main immediate requirement is that individual countries stop increasing their paper money supplies.
But my topic here is the future of the dollar — not what it ought to be, but what it is likely to be. And I am obliged to say that the outlook for the dollar — or, for that matter, of national currencies anywhere — is hardly bright. The world’s currencies will be what the world’s politicians and bureaucrats make them. And the world’s politicians and bureaucrats are still dominated everywhere by an inflationary ideology. Whatever they say publicly, whatever fair assurances they give, they still have a mania for inflation, domestic and international. They are convinced that inflation is necessary to maintain "full employment" and to continue "economic growth." They will probably continue to "fight" inflation only with false remedies, like "income policies" and price controls.
The International Monetary Fund is the central world factory of inflation. Nearly all the national bureaucrats in charge of it are determined to continue it. Having destroyed the remnants of the gold standard by printing too much paper money, they now propose to substitute Special Drawing Rights, or SDR’s, for gold — in other words, they propose to print more international paper money to serve as the "reserves" behind still more issues of national paper monies. The first international step toward sound money, to repeat, would be to abolish the IMF entirely.
In August, 1973, the present American Secretary of the Treasury, George P. Shultz, named fourteen men as members of a new advisory committee on reform of the international monetary system. These included three former Treasury secretaries, all of whom pursued the very monetary policies that brought the United States and the world to its present crisis. The whole list of men in this committee included only two professional economists. I don’t want to attack individuals, but to my knowledge not a single man appointed to the new panel believes in the gold standard, has ever advocated its restoration, or has ever spoken out in clear and unequivocal terms even against the chronic increase in paper money issues. But the climate of opinion is now such in the United States that I must confess I would find myself hard put to it to name as many as fourteen qualified Americans who could be counted on to recommend a sound international monetary reform.
The truth is that everybody is afraid of a return to sound money. Nobody in power wants to give up inflation altogether because he fears its abandonment would be followed by a recession. It’s true that if we stopped inflation forthwith we might have a recession, for much the same reasons as a heroin addict, deprived of his drug, might suffer agonizing withdrawal symptoms. But such a recession, even if it came, would be a very minor and transient evil compared with the catastrophe toward which the world is now plunging.
This article is from a paper delivered at a regional meeting of the Mont Pelerin Society in Guatemala, September 4, 1973. | fwe2-CC-MAIN-2013-20-42235000 |
Within Christianity there are a range of different churches which are better known as denominations. Each of these denominations has slightly different values and beliefs in regards to Christianity – although they all have the Holy Bible at the centre of their beliefs. Essentially they all have different interpretations of the Bible and as such teach different values at Church.
In this guide we shall look at some of the most popular and common type of Christian churches.
Roman Catholic Church
The Roman Catholic Church is by far the most prominent in the world, accounting for approximately one sixth of the world’s population, and it is the largest single Christian denomination in the USA.
The Roman Catholic Church is run by the Pope and it identifies its mission as spreading the gospel of Jesus Christ, administering the sacraments and exercising charity.
Within the Roman Catholic global community not only are there thousands of churches but a huge network of Catholic schools, hospitals, universities and missions.
The Catholic Church believes itself to be the original church that was founded by Jesus and his Apostles and that their bishops are successors of the apostles through apostolic succession.
The Roman Catholic Church dates back almost two thousand years and as such has played a vital role in the shaping of Western society.
While the Roman Catholic Church has the largest number of members worldwide, in recent years it has come under criticism for what many regard as outdated teachings on abortion, euthanasia, birth control and sexual ethics.
The Lutheran Church is a branch of Protestantism and is one of the major churches of Christianity in the USA. The church bases its teachings on those of the 16th-century German reformer Martin Luther.
Owing to Martin Luther’s views and disagreements over the doctrine of Justification, the Lutheran church split from the Roman Catholic Church, and the two are now very separate churches.
The Lutheran church maintains a very traditional view of Christianity and retains many of the liturgical practices and sacramental teachings of the early Church, prior to the reformation.
The central teaching of the Lutheran church is Justification. Lutherans believe that humans are saved from their sins by God’s grace alone, through faith alone. They are Trinitarians, meaning that they believe there are three persons making up the Trinity – the Father, the Son and the Holy Spirit.
The Restorationist church is based on the belief that a purer form of Christianity should be restored, using the early church as a model.
One of the major denominations of Restorationism is the Church of Jesus Christ of Latter-day Saints – otherwise known as the Mormons. They believe that Joseph Smith Jr. was the person chosen to restore the church to how it was in the early days of the religion as founded by Jesus, rather than to reform it into a modern church.
The Church of Jesus Christ of Latter-day Saints was first organized in 1830 and is one of the newest denominations of Christianity – yet it has grown rapidly since.
The final type of Christian church we shall look at today is the Orthodox Church. The Orthodox Church is the second largest Christian church communion in the world with approximately 225 million members worldwide.
The goal of the Orthodox church is to draw continually closer to God – right from the time of baptism as an infant. The process of becoming closer to God is called theosis – and each member of the church strives to become more “Christ-like” throughout their lifetime.
They believe in the Trinity and they believe that Jesus Christ was both God and Man and that he was born, lived and died.
An interesting belief of Orthodox Christians is that when a person dies the soul is temporarily separated from the body. They believe that after this separation it lingers for a while until it is sent either to paradise or the darkness of Hades – this follows the temporary Judgment. The Final Judgment is when the soul and the body will reunite. | fwe2-CC-MAIN-2013-20-42244000 |
The Natural Rights Republic
By Michael Zuckert
University of Notre Dame Press, 304 pp., $34.95
In The Natural Rights Republic, Michael Zuckert takes up a question that has long divided American historians and political philosophers: “Was the American founding inspired by classical republican, Christian, Whig historical, Scottish enlightenment, or modern liberal conceptions?” Zuckert unambiguously chooses the latter: America, he says, is the “natural rights republic”-not in the sense that liberalism was the only element present at the creation, but in the sense that it was the dominant one, and showed a power to “make peace with and indeed assimilate important aspects of classical antiquity and Christianity.” Nor is Zuckert’s argument merely descriptive-he is very much an advocate of the natural rights republic.
The first part of the book develops the author’s account of the natural rights philosophy of the American founding. He offers a painstaking analysis (textual and structural) of the Declaration of Independence, a detailed discussion and critique of other interpretations of the founding (e.g., Garry Wills’ and Morton White’s), and a close reading of Jefferson’s Notes on the State of Virginia. The Declaration is interpreted particularly in conjunction with Locke’s Second Treatise on Government and with contemporaneous expressions of the “American Mind,” especially the Virginia and Massachusetts Bills of Rights.
Zuckert’s analysis of the Declaration is generally convincing: he seems to me correct in his “structural” reading of the Declaration as a fundamentally “Lockean” or liberal document. At the same time, his tendency to minimize the importance of religion in the American political tradition as anything more than a useful prop of politics at times appears excessive, as in the quick move, in his analysis of Jefferson’s handiwork, from “Creator” to “nature”: “Jefferson himself in the Declaration traced . . . rights to the creator, that is, nature.” The very use of Jefferson as the touchstone for understanding natural rights philosophy magnifies the tendency to minimize religion. It tilts the board in favor of a certain understanding of American republicanism that would not have been acceptable to a majority of the people Jefferson was writing for when he penned the Declaration.
On the whole, however, Zuckert is very persuasive in making his case for the natural rights republic. His critiques of alternative views are particularly powerful. Zuckert confronts his opponents head-on, portraying them fairly, but then going effectively (if always politely) for the jugular.
Zuckert speaks of “Convergences” in the American political tradition. But for him this means not so much a convergence of equal strands as an assimilation by the natural rights tradition of other traditions: Old Whig constitutionalism, Puritan political theology, and the progressive realization of democracy understood as a variant of classical republicanism. America was indeed, Zuckert says, an amalgamation of these views, but “the natural rights philosophy remains America’s deepest and so far most abiding commitment, and the others could enter the amalgam only so far as they were compatible, or could be made so, with natural rights.”
Take, for example, the argument that traces American political thought to its Puritan roots. Zuckert surveys various versions of this position: strong continuity (the major principles of the founding can be found in the Puritans), minimal continuity (while accommodating themselves to a Christian citizenry, the leading founders held ideas incompatible with Puritan political thought), secularized continuity (there is continuity through a secularization of Christian concepts, e.g., covenant), and eclectic continuity (political and social theories of Puritanism are one of several major sources for the founding). Zuckert’s mode of argument here is essentially negative: his demolition of the strong, secularized, and eclectic continuity theses leaves only minimal continuity in place. The “Lockeanized Protestantism” of the eighteenth century represented a “substantial break with the reigning political theology of the previous century,” the God of supernature giving way to the God of nature. The Protestant impulse to deny magistrates the power to serve “the good of the soul,” which led to a dissociation of the political realm and the spiritual realm, thereby prepared the way for the liberal focus on rights as the central category of politics.
Having dissected the Puritan continuity thesis, Zuckert goes on to the “Whig constitutionalism” thesis of John Phillip Reid and the “classical republican” thesis of scholars such as J. G. A. Pocock and Gordon Wood and shows them to be equally subordinated to natural rights philosophy.
Over time, Zuckert argues, America has developed a successful synthesis of Jeffersonian and Madisonian republicanism. The large and less strenuously republican Madisonian constitutional system is the fundamental frame, but it has become more infused with a Jeffersonian spirit (by, for example, political parties, formal modification of the Constitution, and the mass media), so that it is not so far removed from the people.
There are tensions, of course, between the “expressive” (participatory) and the “instrumental” (rights-protecting) elements of Jefferson’s republicanism. Zuckert joins Madison in criticizing Jefferson on two grounds: 1) his insufficient attention to the tension between the popular right to control government and the rights to be protected by government, and 2) the unlikelihood that Jefferson’s localized “ward republics” would supply the energy, competence, and prudence necessary for effective national government. Nonetheless, Zuckert says, the validity of Jefferson’s ideals is reinforced by our persistent concern about the quality of democratic life. We live in a tension between the expressive and instrumental dimensions of republicanism. Current debates between liberals and communitarians are simply one manifestation of this tension-a tension that cannot and probably should not be resolved.
A question that Zuckert needs to take up more explicitly is whether one can move from careful textual analysis of the Declaration and other major public documents to such a conclusive characterization of the nature of the American regime. How much do the views of those who are not leading founders, and of the citizenry at large, deserve to be weighed? How much do the premodern elements imbedded in American institutions-e.g., the common law and much state legislation (including, in many states, religious establishments)-count?
Perhaps Zuckert would argue that time has told the story: it is the leading founders and the natural rights philosophy they adopted that won out, and those elements have transformed and decisively subordinated the other, nonliberal elements. But an historical argument is not entirely sufficient, since one might view that victory as unfortunate in important ways, as Zuckert would think unfortunate some twentieth-century developments away from the natural rights philosophy.
Zuckert would, I think, argue that the natural rights philosophy is a superior form of political thought and practice. But that is an argument that requires much more than this book, which is more a detailed explication of natural rights philosophy and its influence than it is a compelling argument for its superiority. Some of us who harbor more serious doubts about the liberal/natural rights/republicanism synthesis, and who believe that many contemporary problems cannot find effective solutions in either contemporary liberalism or in an older liberal amalgam, will remain convinced of the need for a healthier dose of religion and of (non-Lockean) “natural law” in some form.
Whatever one’s view of what America needs, however, it is important to understand the general character of the American founding, and Zuckert’s book is a powerful exposition of the most central political principles of that founding. Its elegant articulation of its own thesis, together with its insightful analysis and critique of a wide variety of alternative views, makes it an extremely important contribution to debates on our national origins, which all serious students of the founding and of liberalism will have to confront.
Christopher Wolfe is Professor of Political Science at Marquette University. | fwe2-CC-MAIN-2013-20-42249000 |
Your teenage “kids” are probably a lot more competent than they seem, according to psychologist Robert Epstein. But a raft of laws and regulations (compulsory education, labor restrictions, a separate juvenile justice system) and an ever-growing consumer sector have needlessly delayed their entry into the adult world. Historically, he points out in an interview about his recent book The Case Against Adolescence, this is not the norm:
We have completely isolated young people from adults and created a peer culture. We stick them in school and keep them from working in any meaningful way, and if they do something wrong we put them in a pen with other “children.” In most nonindustrialized societies, young people are integrated into adult society as soon as they are capable, and there is no sign of teen turmoil. Many cultures do not even have a term for adolescence. But we not only created this stage of life: We declared it inevitable. In 1904, American psychologist G. Stanley Hall said it was programmed by evolution. He was wrong.
Rejecting the stereotype of the teenager as immature and incompetent, Epstein argues that adolescents are fully capable of cognitive and moral reasoning, maintaining long-term relationships, and being responsible for themselves. While teens “have too much freedom” in certain senses, they’re nevertheless “not free to join the adult world, and that’s what needs to change”:
I believe that young people should have more options—the option to work, marry, own property, sign contracts, start businesses, make decisions about health care and abortions, live on their own—every right, privilege, or responsibility that an adult has. . . .
When we dangle significant rewards in front of our young people—including the right to be treated like an adult—many will set aside the trivia of teen culture and work hard to join the adult world.
Naturally I disagree with him about abortion, and I’m not convinced that we should roll back child labor laws or institute the competency tests that he favors. Broadly, however, I think he’s right that the myth of the shallow, irresponsible teenager is a self-fulfilling prophecy.
Parents may not be able to give their teenage sons and daughters all the rights and responsibilities of adulthood, but they can at least encourage teens to find a job and give them enough freedom to learn from their mistakes, just like adults do. Don’t assume they’re incapable of making good decisions unless they’ve proven by their behavior that they’re incapable. Stop treating them like kids, and they may stop acting like them.
h/t Joe Carter | fwe2-CC-MAIN-2013-20-42250000 |
The capital required to construct a tidal barrage has been a significant stumbling block too. It is not an attractive proposition to investors due to long payback periods. This problem could be solved by government funding or by large organisations getting involved with tidal power.
In terms of long term costs, once the construction of the barrage is complete, there are very low maintenance and operating costs and the turbines only need replacing once around every 30 years. The life of the plant is indefinite and for its entire life it will receive free fuel from the tide.
Few tidal barrages have been constructed. The largest tidal power station in the world (and the only one in Europe) is in the Rance estuary in northern France. La Rance was completed in 1966 and has operated reliably ever since. So too has the barrage in the Bay of Fundy in Canada - though this has had an adverse effect on marine life.
There have been plans for a "Severn Barrage" from Brean Down in Somerset to Lavernock Point in Wales. Every now and again the idea gets proposed, but nothing has been built yet. It could have over 200 large turbines and provide over 8,000 megawatts of power (more than 12 nuclear power stations' worth).
It would take 7 years to build, and could provide 7% of the energy needs for England and Wales. There would be a number of benefits, including protecting a large stretch of coastline against damage from high storm tides, and providing a ready-made road bridge.
However, the drastic changes to the currents in the estuary could have huge effects on the ecosystem so it is unlikely ever to be built due to the major environmental impact that it would cause.
A major drawback of tidal power stations is that they can only generate when the tide is flowing in or out - in other words, only for 10 hours each day. However, tides are totally predictable, so we can plan to have other power stations generating at those times when the tidal station is out of action. | fwe2-CC-MAIN-2013-20-42260000 |
H1N1 (originally referred to as Swine Flu)
The H1N1 flu virus caused a world-wide pandemic in 2009. It is now a human seasonal flu virus that also circulates in pigs.
- Although the World Health Organization (WHO) announced the pandemic was over in August 2010, H1N1 is still circulating.
- Getting the flu vaccine is your best protection against H1N1.
- You cannot get H1N1 from properly handled and cooked pork or pork products.
- Symptoms of H1N1 are similar to seasonal flu symptoms.
What is H1N1 flu?
H1N1 is a flu virus. When it was first detected in 2009, it was called “swine flu” because the virus was similar to those found in pigs.
The H1N1 virus is currently a seasonal flu virus found in humans. Although it also circulates in pigs, you cannot get it by eating properly handled and cooked pork or pork products.
Is H1N1 still a threat?
On August 10, 2010 WHO announced that the world is in a post-pandemic period. However, H1N1 is still circulating. H1N1 is included in the 2011-2012 seasonal flu vaccine.
What are the symptoms of H1N1 flu?
The symptoms of H1N1 are the same as seasonal flu symptoms.
How does H1N1 flu spread?
The H1N1 flu virus spreads between people in the same way that seasonal flu viruses spread.
How can I prevent H1N1 flu?
The best way to prevent the H1N1 flu is to get the seasonal flu vaccine. The 2011-2012 flu vaccine includes protection against the H1N1 flu virus. You should also follow our everyday steps to keep yourself healthy during flu season.
Vietnam has begun a phase 1 clinical trial for the first H1N1 pandemic influenza vaccine developed entirely in Vietnam with support from the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority (BARDA). This is the first step in testing the new vaccine in humans. The study and data analysis is expected to be complete by the end of 2012.
I have H1N1. What should I do?
If your health care provider has diagnosed you with H1N1, you should follow our treatment recommendations and your health care provider’s orders.
Who is monitoring H1N1 in the U.S.?
The Centers for Disease Control and Prevention (CDC) tracks seasonal flu activity, which includes H1N1. | fwe2-CC-MAIN-2013-20-42262000 |
Cause for Concern (Church/State)
What Jefferson intended as an explanation of the First Amendment's protection of the free exercise of religion was misapplied by the Supreme Court to the Establishment Clause, a mix-up that has resulted in the very interference with religious free exercise that Jefferson argued against.
The so-called "wall of separation between church and state" has done more damage to America's religious and moral tradition than any other utterance of the Supreme Court. While the First Amendment was originally intended to prevent the establishment of a national religion and thus ensure religious liberty, the Supreme Court's misuse of the "separation of church and state" phrase has fostered hostility toward, rather than protection of, religious freedom.
This phrase has been used by the Court to outlaw Ten Commandments displays in public buildings, prayer and Bible reading in schools, clergy and even student invocations at school events, and other public acknowledgements of God. Such decisions clearly negate the Founding Father's presupposition of America's Christian identity. It is time to return the First Amendment back to its original meaning and revive the rich faith-filled heritage of America's public life.
Many of the state legislatures that ratified the Constitution conditioned their approval on the further inclusion of a guarantee of individual liberties such as the freedom of religion. Some of those states already had taxpayer-supported "establishments" of religion. The new Congress took up these calls for action and drafted the Bill of Rights for further approval by the states. James Madison, a major participant in the debate and drafting of what ultimately became the First Amendment, introduced the initial draft on June 8, 1789 as discussions began in the House:
The civil rights of none shall be abridged on account of religious belief or worship, nor shall any national religion be established, nor shall the full and equal rights of conscience be in any manner, or on any pretext, infringed.
After further discussion, other versions of the amendment were offered, including: "no religion shall be established by law," "no religious doctrine shall be established by law," "no national religion shall be established by law" and "Congress shall make no laws touching religion." Finally, the House sent back to the Senate this version: "Congress shall make no law establishing religion." The Senate took the House version under advisement, but then offered its own version: "Congress shall make no law establishing articles of faith or a mode of worship, or prohibiting the free exercise of religion." When the House and Senate met to resolve their differing versions, they settled on the ultimate version of "Congress shall make no law respecting an establishment of religion."1
What is clear from the records of the First Amendment debates, as well as Jefferson's own "wall of separation" language, is the Founders' aversion to Congress establishing a national religion, not the religion-scrubbing tool the Supreme Court has made of it over the last 60 years.
A few Supreme Court justices have resisted the current perversion of Jefferson's "wall" metaphor and its effect on the Establishment Clause. In his 1985 dissent from yet another Supreme Court decision invoking Jefferson's "wall" to strike down Alabama's "moment of silence" statute, Chief Justice Rehnquist had this to say:
"It is impossible to build sound constitutional doctrine upon a mistaken understanding of constitutional history, but unfortunately the Establishment Clause has been expressly freighted with Jefferson's misleading metaphor for nearly 40 years."
In another by-product of the Everson decision (Everson v. Board of Education, 1947), the Supreme Court decreed that the First Amendment, which begins "Congress shall make no law …," would henceforth apply to the states as well as the federal government. That's how the Supreme Court gained authority over religious expression in local schoolrooms, graduation ceremonies, football games, courthouses, city councils and thousands of other state and local venues. Although that particular issue is too large to address here, it is further evidence of the Supreme Court's massive power grab in the Everson decision.
Copyright © 2008 Focus on the Family. All rights reserved. International copyright secured. | fwe2-CC-MAIN-2013-20-42266000 |
Display Fonts Collection
Starting in the 19th century with the explosion of popular entertainment and popular-oriented art forms, one new form of art was the design of posters and advertisements intended to catch the imagination and generate special interest in the audience. From the advertising found in magazines and decorative frontispieces in books to the poster art movement in France, a consciousness emerged that type and lettering could be decorative, artistic and eye-catching in a way which had previously never really been considered.
The concept of display and ornamental type started with newspaper and poster designers taking regular text styles and using them in enormous sizes, or developing italic or slightly embellished styles for emphasis within text. From these beginnings designers began to experiment with what they could do to make titles stand out even more, starting with extra bold or exaggeratedly weighted styles and increasingly more decorative and ornamental styles. Many of these early titling faces took on characteristics of traditional calligraphy, because it was the only decorative lettering which many designers were familiar with, or looked like text faces expanded and transformed.
By the middle of the 19th century type designers were experimenting with all sorts of ornamental type, particularly for use in advertising and in specialized books aimed at an increasingly intellectual middle class market. Much of this type partook of the characteristics of calligraphy, but it was increasingly complex and decorative beyond the scope of simple pen-strokes.
One of the innovators in this period was William Morris, who launched the Arts and Crafts movement, which included among its interests the development of new and visually striking styles of lettering and typography, such as Morris’ own Troy type and the unique lettering of artists like Charles Rennie Mackintosh and Walter Crane.
In the last two decades of the 19th century Art Nouveau spread across Europe, emerging from the Arts and Crafts movement, but attracting a much larger popular audience. Decorative type and lettering was a major element of the Art Nouveau movement, which had strong ties to the performing arts and other visual arts which required publicity in the form of advertisements and posters.
The Art Nouveau movement spurred a renaissance in font design, but much of the art of the period was expressed in unique designs which were never made into typefaces at that time. Hand-lettered posters and advertising titles by artists like Alphons Mucha were in great demand, and the Poster Art movement grew out of Art Nouveau and the poster became the major new medium for popular art by the end of the 19th century.
The hand-lettering of Mucha influenced many other artists and designers and when Mucha returned to his native Czechoslovakia he spurred a renaissance of art and design in eastern Europe, which eventually developed into the cubist and futurist movements in art which had a great influence on designers around the world in the period that followed.
Today there is still a great demand for new and unusual display fonts. They are essential to advertising in every media, because they draw attention and give a product a signature look which sets it apart from the competition. Advances in desktop publishing have also made it possible to introduce a greater variety of fonts for titling in publications, both in print and online. As a result display fonts are available in great diversity, offering every kind of look for every kind of use.
Because the basic function of display fonts is to do titles and label things, they may not have the same character set as traditional text fonts. Display fonts often only have either upper or lower case characters, and usually don’t have extended punctuation beyond what’s normally called for in titles. They are also often designed to be bolder or more ornate than text fonts, often to an exaggerated degree, and as a result they may only really be readable at large sizes and are often poorly suited to text use. Virtually anything can be a display font, from the weirdest degenerated style to the most intricate and complex artistic fantasy.
The Scriptorium’s collection of display fonts offers exceptional variety. We have fonts based on Art Nouveau designs, early Victorian styles, hand poster lettering by artists like Alphons Mucha and unique original fonts you won’t find anywhere else. We offer over 80 display faces, all of which are available in TrueType or Postscript format for Macintosh and PC-compatible computers. They are available singly for between $18 and $24 each, or as part of discounted packages. We also offer a complete collection of all of the Display Fonts for only $129. It includes all of our display fonts, including the very latest releases.
Our single fonts and font samplers can be ordered online, by mail or by phone for delivery online or by mail. The special display fonts CD can also be ordered online or by any other means and is deliverable by mail on CD or by convenient download.
To see a large selection of individual display fonts which can be ordered online and downloaded CLICK HERE
To order the complete Display Fonts collection CLICK HERE
To order by phone call 1-512-656-8011.
Fonts in this collection. Click on name to see sample.
To get an idea of what our display fonts are like, try out the demo version of our Dromon font. It should give you a good idea of what our display fonts can look like on your computer. | fwe2-CC-MAIN-2013-20-42269000 |
Oysters: Raw or Cooked? 8 Essential Tips About Oysters
- Article Author: Salvatore Cesareo
Going out on a date and wondering whether oysters really live up to their reputation? A team of American and Italian researchers found that the high concentration of amino acids present in oyster flesh triggers an increased level of sex hormones, and that their high level of zinc helps the production of testosterone.
So, there you have it! It’s time to pile up a dozen or two of nice fresh oysters for the next romantic dinner.
But first let me give you 8 Essential Tips About Oysters
1. What is an oyster?
Oysters are bivalve molluscs which live in marine habitat. Oysters are commonly consumed cooked or raw and are considered to be aphrodisiacs. The edible oysters belong to the family of so called True Oysters and are not closely related to the Pearl Oysters.
2. Wild Vs. Farmed
Oysters can be harvested in very shallow waters by hand. In deeper waters, long rakes or oyster tongs are used to reach them. In either case the procedure is the same: the oysterman scrapes the oysters into a pile and then scoops them out.
The use of a scallop dredge (a toothed bar attached to a chain bag) towed through the oyster bed by a boat is still permitted in some areas, even though it represents an environmental problem for the beds. Wild oysters can, of course, also be collected directly by divers.
Once the oysters are collected, they are sorted to eliminate dead animals, and debris. Then they are taken to market where they are either canned or sold live.
Oysters have been cultured since Roman times. Two methods are commonly used:
- Release: The release technique involves distributing oyster spat over existing oyster beds. This allows them to mature in a natural way and later be collected like wild oysters.
- Bagging: The bagging technique involves putting oyster spat in racks or bags and keeping them above the bottom until harvest. They are then lifted to the surface and the mature oysters are removed.
In both cases the oysters are first cultivated onshore, where the spat can attach themselves to a surface and are allowed to mature in the water to form seed oysters.
3. Oysters as food:
Much evidence shows that oysters have been eaten by humans since prehistoric times. In fact, oysters have been an important food source around coastal areas wherever they could be easily found.
They have been harvested and sold since Roman times; the Romans believed them to have aphrodisiac powers. In the 19th century, oysters were considered street food in the United States, sold at market stands in New York and even given away at San Francisco saloons during the Gold Rush.
Unregulated harvesting, consumption and overfishing almost caused oysters to become extinct along the Atlantic and Pacific coasts. Today, oystermen are more aware of the long-term impact of harvesting on the coastal flats and reefs where the oysters grow, allowing the oyster beds to replenish.
4. Why are they so popular?
Oysters' relative scarcity, the challenge of transporting them live and the high demand in the food market dictate the price we ultimately pay for these shellfish. It is estimated that in the United States we consume 2.5 billion oysters a year, so it is no surprise to pay $2 to $3 for each oyster at a restaurant.
But, what makes them so popular? In brief: the taste. Oysters can be very salty or sweet, with notes of cucumber, melon, herbs, butter, flint, or copper, all depending on the water in which they grew.
5. Oysters' nutrition facts:
Many people would agree that oysters are best raw and eaten from the shell. Oysters are a good source of zinc, iron, calcium, selenium, Vitamin A and Vitamin B12 and for those who count calories the good news is that they are low in calories; one dozen raw oysters contain approximately 110.
6. How to properly store oysters:
While oysters can live up to two weeks their taste becomes less pleasant as they age. Oysters should always be refrigerated out of water, and in 100% humidity. If oysters are left in water in the refrigerator they will open, start “breathing” and consume the available oxygen and ultimately die.
7. How to eat oysters: raw or cooked:
Oysters must be eaten alive or cooked alive, never dead. An open shell is a sign of a dead oyster, which cannot be eaten. If the oyster is cooked live, the heat will cause the shell to open by itself. To taste their full flavor and savor oysters at their best, eat them raw on the half shell; however, they are also consumed smoked, baked, fried, roasted, pickled, steamed and broiled.
Because oysters act as natural filter, they will feed on anything present in the surrounding water, therefore can contain harmful bacteria.
8. Opening an oyster (shucking technique) requires:
- a. A special knife (called oyster knife); a short and thick blade about 5 centimeters (2.0 in) long is needed to open oysters.
- b. Heavy gloves; the shell can be razor sharp.
- c. Skills | fwe2-CC-MAIN-2013-20-42270000 |
Working in collaboration with the CAO, SANParks and the University of
the Witwatersrand, the CSIR's Earth observations research groups have
achieved several milestones, changing the way large areas like the KNP
and surrounding areas can be managed.
According to Prof Greg Asner, professor at the Department for Global
Ecology at Stanford University and in charge of the CAO, their
relationship with South Africa is quite unique: "It is one of the only
places in the world where we work directly with local scientists on
issues of management conservation. Working in South Africa with the
Kruger National Park and the CSIR gives us the chance to have real
impact," he said during an interview at the time of the CAO's third
mission to the country in April 2012.
This sentiment is echoed by SANParks' research manager for GIS and
remote sensing, Dr Izak Smit: "We do not have the infrastructure,
technology or expertise to deal with a project of this magnitude. Yet,
working with external partners, we can leverage the expertise and
funding, thereby enriching our work in transforming the science into
management decisions and practices.
"We find ourselves at the interface between the science and the
management of the parks. Collaboration with external partners like the
CSIR, universities and the CAO is essential to the successful management
of the parks, and has had impacts on how we manage the park when it
comes to the provision of water holes and prescribed burning, for example."
For Dr Renaud Mathieu, a CSIR principal scientist, the collaboration is
also about building technical skills and capacity in South Africa to
process large sets of data and developing remote sensing technologies well
suited to the South African savannah landscape.
"Historically, especially in Africa, most remote sensing-based
approaches focused on tropical deforestation. However, more than half of
the southern African subcontinent is covered with savannah with about
10 to 50% tree cover and undergoing mostly gradual changes such as bush
encroachment or tree logging for fuelwood. Techniques developed for
assessing woody biomass in tropical forests with dense canopies cannot
simply be transferred to savannas and woodlands," he explains.
Furthermore, the long-term vision is to develop the whole LiDAR value
chain, including the local capacity to operationally collect LiDAR data
for environmental management and vegetation applications using local
airborne survey companies. In this regard, the CSIR and SANParks are
already working with a South African company to test the viability, as
SANParks is considering using LiDAR surveys for long-term monitoring.
Research milestone: Sustainability of fuelwood for rural energy needs
The LiDAR data from the 2008 flight campaign have enabled researchers to
map and measure woody biomass in rural areas such as Bushbuckridge,
where harvesting of live wood is still the primary source of fuel for
cooking and heating even when electricity is available.
Researchers combined the LiDAR data with socio-economic data collected
from the area over the past 20 years by the Wits Rural Public Health and
Health Transitions Research Unit, and the WITS programme for Sustaining
Natural Resources in African Ecosystems.
This shows that at the current rate of fuelwood consumption - three to
four tons per year per household - the woodland resources for some rural
villages in Bushbuckridge may only last another 12 years. With the help
of the LiDAR data and fieldwork, researchers have also found evidence
of illegal commercial cutting of fuelwood in the communal rangelands.
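To make the arithmetic behind such a sustainability estimate concrete, the sketch below steps a woodland biomass stock forward one year at a time. Only the per-household consumption figure (three to four tons a year) comes from the article; the standing stock, number of households and regrowth rate are hypothetical placeholders, not values from the LiDAR campaign.

```python
# Illustrative only: consumption per household (3-4 t/yr) is from the article;
# the stock, household count and regrowth rate below are hypothetical.

def years_until_depletion(standing_stock_t, households, use_per_household_t,
                          regrowth_rate=0.02, max_years=100):
    """Advance one year at a time until the harvestable woody biomass runs out."""
    stock = standing_stock_t
    for year in range(1, max_years + 1):
        stock += stock * regrowth_rate              # net annual regrowth of what remains
        stock -= households * use_per_household_t   # annual fuelwood harvest
        if stock <= 0:
            return year
    return None  # not exhausted within the horizon

# Hypothetical village: 60,000 t of harvestable biomass, 1,500 households,
# 3.5 t consumed per household per year, 2% net regrowth.
print(years_until_depletion(60_000, 1_500, 3.5))  # prints 14 with these numbers
```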
"There is great concern that the current levels of utilisation are not
sustainable, with direct negative impacts on the poor, as well as for
biodiversity loss and conservation. Our findings to date regarding the
sustainability of this ecosystem service warrant further investigation,"
says Dr Mathieu.
In all instances, improved estimates will be instrumental to poverty alleviation.
Research milestones: Loss of big trees in conserved areas
Another significant finding is that large herbivores and fires may have a
bigger impact on the loss of big trees in conserved areas than in
communal areas, where large trees like the Marula are valued for their
fruits. Over five metres high, many of these trees have taken over 50
years to grow.
Dr Mathieu, "We have detected a 20% loss of big trees from research
sites in a private game reserve next to the KNP in just two years,
compared to a 10% loss of big trees from research sites on communal land
over the same time period."
This was also the first time in his remote sensing career that he found a
100% correlation between prediction of a remote sensing system (the
LiDAR) and ground verification.
But researchers are still puzzled about why and how this is happening.
"At the moment, we think it is because of different reasons," explains
Dr Mathieu: "In the case of the private game reserve, field work shows
that a combination of elephants and fire damage is involved. For
instance, the elephants push and debark the big trees. The trees are
weakened, and then burn more easily in veld fires."
In the communal areas, trees are cut for building posts for field
fencing and fuel wood; however, it is a big taboo to cut big
fruit-bearing trees like the Marula, and people mostly cut the
lower-growing trees and bushes.
Again, this finding needs to be further investigated and interrogated.
For the remote sensing specialists in South Africa, the recent 2012 CAO
campaign will be useful to confirm or refute these results over a wider
area and a longer time span. | fwe2-CC-MAIN-2013-20-42274000 |
Jump to the main content of this page
Pacific Southwest Research Station
Tahoe Science Projects supported by SNPLMA
Remote sensing of Lake Tahoe's nearshore environment
Erin Lee Hestir, University of California, Davis
The goal of this research is to use remotely sensed data to retrieve fine sediment, chlorophyll, and colored dissolved organic matter (CDOM) concentrations from the water column in the near shore, and to map the distribution of periphyton (attached algae), aquatic macrophytes (submerged plants), clam beds in the nearshore of Lake Tahoe and variations in sediment type. High spatial resolution multispectral satellite imagery, moderate spatial resolution multispectral satellite imagery, and airborne hyperspectral imagery will be used. We will investigate both empirical and model-driven methods to map fine sediment, chlorophyll, and CDOM concentration, macrophyte communities, clam beds, periphyton, and substrate type. The empirical approach will first classify the optically shallow near shore into the different bottom classes using the field data and spectral library first to train and then (independently) validate the classifier. This analysis allows the development of statistical correlations (e.g., regression modeling) whereby reflectance information can be used to predict the probability of the concentration of water quality constituents above a particular bottom type. Upon successful development, the statistical model can then be used to predict water quality in each image pixel given the reflectance value of that pixel. The second approach will use a radiative transfer model that simulates remote sensing reflectance of water given inputs of different aquatic optical properties. One of the key deliverables of the project is a cost-benefit analysis of remote sensing approaches for monitoring the nearshore environment and a manual for implementing remote sensing analysis for monitoring the nearshore environment.
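As a rough illustration of the empirical approach described above, the sketch below trains a classifier on bottom types and a separate regression model relating reflectance to a water-quality constituent. The band names, model choices and synthetic arrays are assumptions made for demonstration only; they are not the project's actual field data or processing chain.

```python
# Minimal sketch of the empirical (statistical) approach, using synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: per-pixel reflectance in four bands, with field labels
# for bottom type and a measured chlorophyll concentration.
reflectance = rng.random((200, 4))                   # blue, green, red, near-infrared
bottom_type = rng.integers(0, 3, size=200)           # 0=sand, 1=periphyton, 2=macrophytes
chlorophyll = 2.0 + 5.0 * reflectance[:, 1] + rng.normal(0.0, 0.2, size=200)

# Step 1: classify optically shallow nearshore pixels into bottom classes.
classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(reflectance, bottom_type)

# Step 2: regression relating reflectance to the constituent of interest, which can
# then be applied pixel by pixel across an image once trained and validated.
regressor = RandomForestRegressor(n_estimators=100, random_state=0)
regressor.fit(reflectance, chlorophyll)

new_pixels = rng.random((5, 4))
print(classifier.predict(new_pixels))   # predicted bottom class for each pixel
print(regressor.predict(new_pixels))    # predicted chlorophyll for each pixel
```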
Relation to Other Research Including SNPLMA Science Projects
The value of remote sensing technologies for evaluation and monitoring in the Lake Tahoe Basin has also been widely recognized, and an increasing number of remote sensing datasets are being acquired over the basin. NASA has been operating a remote sensing validation site at Lake Tahoe for over a decade and recently the Tahoe Regional Planning Agency (TRPA) and U.S. Geological Survey have purchased high spatial resolution satellite imagery of the Tahoe Basin. Dr. Schladow and Dr. Steissberg (UC Davis TERC) and Dr. Hook (NASA-JPL) have completed a SNPLMA Round 7-funded project, "Monitoring past, present, and future water quality using remote sensing (RS)," aimed at using remote sensing to quantify changes in lake-wide distributions of Secchi depth and chlorophyll. The large pixel size has limited the application to areas outside the nearshore; however, there will be considerable benefit to this project based on what was learned. Several previous SNPLMA science studies will inform this project, including: 1) "Predicting and managing changes in near-shore water quality," 2) "Natural and human limitations to Asian clam distribution and recolonization-factors that impact the management and control in Lake Tahoe," and 3) "Development of a risk model to determine the expansion and potential environmental impacts of Asian clams in Lake Tahoe," as well as a project funded by the US Army Corps of Engineers that is conducting a baseline assessment of benthic species and developing recommendations for future assessments.
Expected date of final products:
Last Modified: Mar 7, 2013 06:28:08 PM | fwe2-CC-MAIN-2013-20-42288000 |
"This paper has very strong implications for the United States staying ahead in magnet technology, which would bring great dividends in research and improvements in medical imaging."
A scientific surprise greets FSU researchers at higher magnetic fields
by Susan Ray
Research performed by a team at Florida State University's National High Magnetic Field Laboratory suggests that the benefits of building higher-field superconducting magnets likely will far outweigh the costs of building them.
FSU researchers Riqiang Fu, Ozge Gunaydin-Sen and Naresh Dalal discovered something they weren't expecting while trying to improve the resolution, or quality of image, in the magnet lab's unique 900-megahertz, 21.1-tesla magnet. While experimenting with the giant magnet, the three noted an exponential increase in the ease of detecting the "fingerprint" of the chemical compound they were studying as they exposed it to ever-higher magnetic fields.
A paper describing their research was published recently in the Journal of the American Chemical Society, a top-tier chemistry journal.
"This paper has very strong implications for the United States staying ahead in magnet technology, which would bring great dividends in research and improvements in medical imaging," said Tim Cross, director of the magnet lab's NMR User Program and a professor of chemistry and biochemistry at FSU. "We need—and are working on—additional fundamental studies that show the benefits of going to higher fields."
Nuclear magnetic resonance, or NMR, generates a true-to-life fingerprint—a unique pattern indicating the presence of specific molecules—for a research sample that is being analyzed. As a technique, NMR is very accurate as long as one can detect the sample in the first place. The ease or difficulty of detecting a sample is known as "sensitivity." Low sensitivity has been one of NMR's biggest liabilities, because the lower the sensitivity, the longer the experiment takes. Such slowness has limited NMR's potential applications.
"Poor signal is like a faint picture in the darkness," said Dalal, the Dirac Professor of Chemistry and Biochemistry at FSU. "We've shown that the '900' (magnet) increases the picture's brightness by a factor of about 10 relative to low-field images. Think of how much more you can see in a room that is that much brighter…and imagine what you'd see at even higher fields."
Theorists had predicted a linear increase in both resolution and sensitivity at higher magnetic fields, moving from 14.1 tesla to 21.1 tesla, the current state of the art in superconducting magnets. In their experiment, the FSU team members observed an exponential increase—with the sensitivity increasing by a factor of three over what had been predicted.
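A back-of-the-envelope calculation using only the figures quoted in this article illustrates the size of the surprise; the simple field ratio below stands in for the "linear" expectation and is not a statement of the underlying NMR theory.

```python
# Illustration using only the numbers quoted above; not the full NMR sensitivity theory.
low_field, high_field = 14.1, 21.1           # tesla
predicted_gain = high_field / low_field       # the "linear" expectation, about 1.5x
observed_gain = 3 * predicted_gain            # reported as roughly three times the prediction
print(f"predicted ~{predicted_gain:.1f}x, observed ~{observed_gain:.1f}x")  # ~1.5x vs ~4.5x
```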
Higher sensitivity in a magnet means it takes far less time—or much less of a sample—to conduct an experiment.
"The reduction in time is like going from one hour to a couple of minutes," said Fu, an associate scholar/scientist at the magnet lab and the FSU chemistry department. "Many experiments take weeks, and such a reduction in time will allow for far more studies to be conducted on a single instrument."
Dalal said the shortening of experimental time increases scientists' ability to fingerprint materials, opening up new areas of scientific investigation in NMR, including the study of materials useful in nanotechnology and medical imaging. The need for less of a sample—up to 18 times less—will open up high-field NMR to the study of enzymes and purified proteins, an area in which samples typically are of limited size.
The National High Magnetic Field Laboratory (www.magnet.fsu.edu) develops and operates state-of-the-art, high-magnetic-field facilities that faculty and visiting scientists and engineers use for interdisciplinary research. The laboratory is sponsored by the National Science Foundation and the state of Florida and is the only facility of its kind in the United States. | fwe2-CC-MAIN-2013-20-42289000 |
Freshwater Mussels of the Upper Mississippi River System
Mussel Conservation Activities
2005 Highlights: Possible fish predation of subadult Higgins eye was observed in the Upper Mississippi River, Pools 2 and 4.
Subadult Higgins eye pearlymussels (Lampsilis higginsii) from the Upper Mississippi River, Pools 2 and 4. Shell damage may be due to predation by fish (e.g., common carp or freshwater drum). Top photo by Mike Davis, Minnesota Department of Natural Resources; bottom photo by Gary Wege, U.S. Fish and Wildlife Service.
Department of the Interior • U.S. Fish & Wildlife Service • U.S. Geological Survey
Last updated on December 21, 2006 | fwe2-CC-MAIN-2013-20-42293000 |
Yearly Floods the New Reality for Rural Women
MEXICO CITY -- Year after year, women in rural areas of the southeastern Mexican state of Tabasco have to get ready for floods that threaten their homes, crops and livestock.
By Emilio Godoy, IPS
Photo: Communities along the rivers in Tabasco are badly affected by floods in the second half of the year. Credit:Emilio Godoy/IPS
"We have adapted. Now we build our houses on
stilts," Celia Hernández, who works for an indigenous tourism project in
Centla, 857 km south of Mexico City, told IPS.
"Every year in June," she said, "the women start putting things away and
preparing the older people and children," in case there is flooding and
everyone has to evacuate their homes and take refuge on higher ground.
Centla, a municipality with a population of 102,110, lies on either side of
the Grijalva river in Tabasco, a state of 2.2 million people.
In the rainy season between June and October, the water level rises and
affects urban and rural areas, including Centla, which is located in a
swampy region and is home to 25 rural communities and 53 "ejidos"
(collectively owned farmlands), as well as the coastal town of Frontera,
the municipal capital.
"People lose everything they own. The government provides some economic
support, but it only covers part of the losses, so we have to start
again, over and over," said 18-year-old Hernández, who lives with her
family in a rural community where people are involved in tourism or fishing.
Since 2007, Tabasco has been hit by the highest and longest-lasting
floods of recent decades, in territory that is highly vulnerable to
climate change effects such as more intense rainfall, mudslides, rising
sea level and loss of biodiversity, which harm the welfare of the local population.
"This is changing women's way of life and the traditional activities
they have carried out for years," María Hernández, in charge of gender
equality issues for the Santo Tomás Ecological Association (AEST), a
local NGO, told IPS.
"The floods leave women psychologically devastated and economically
destroyed. They have difficulty recovering their livelihoods," said
María (no relation to Celia Hernández).
"People used to know when they could plant this or that crop, but now
they don't know. Women take charge of supporting the family and looking
for food for their children and husbands," she said.
The devastation wrought in Tabasco by floods in October and November 2007 was assessed by the Economic Commission for Latin America and the Caribbean (ECLAC) at over three billion dollars.
Since then the AEST has supported groups of women in four
municipalities, providing training to cope with recurring climate
changes, psychological assistance and support for carrying out
development projects like poultry farming and family vegetable gardens.
Photo: Rural women in Tabasco raise chickens to help overcome the crop losses caused by climate disasters. Credit: Iván García/IPS
In rural areas of Tabasco, women grow maize, tomatoes and other
vegetables and raise chickens and turkeys, complementing their husbands'
work which focuses on fishing.
Tabasco's climate, with an average annual rainfall of 2,550 millimetres,
and its 28 rivers and four dams make adaptation and mitigation measures
necessary. Together, these can create climate justice for women.
After the 2007 disaster, the regional government created the
Reconstruction and Reactivation Programme to Transform Tabasco, one of
whose goals is to complete the building of 3,500 housing units on high
ground around Villahermosa, the state capital, benefiting women in particular.
"Women made homeless by the floods were relocated as part of a policy of
adapting to climate change," Dolores Rojas, programme coordinator for
the Mexican office of the Berlin-based Heinrich Böll Foundation, told IPS.
"An interesting aspect of this policy is that women were given the title
deeds to their homes. This meant they could decide to start a small
shop in their home, for example, without needing to ask their husbands'
permission," she said.
However, there have been some problems. Families who moved to the houses
faced higher transport costs, because their new homes are far from the
centre of Villahermosa, a city of 560,000 people, and there is a lack of
services, Rojas said.
But the study "Gender Relations and Women's Vulnerability to Climate
Change", carried out by Jenny Jungehülsing for the Heinrich Böll
Foundation, found that the relocation "reduced women's vulnerability in a
number of spheres of life," and contributed significantly to meeting
"important practical needs."
The policy lays the foundation "to advance toward greater gender
equality," the 2011 study concludes.
The national water commission, CONAGUA, and the Engineering Institute of
the state National Autonomous University of Mexico consider it a matter
of urgency to relocate over 66,000 people living in 18,000 dwellings in
107 communities at risk of flooding.
But not everyone agrees with the relocation. Some local organisations
call it forced eviction, especially in rural areas like Centla.
"What we want is more financial support, because (the government)
focuses mainly on urban areas. Close to the rivers, flooding is
inevitable," but people are reluctant to leave their places of origin,
Celia Hernández said.
Civil society organisations complain of bad planning in Tabasco that has
allowed construction in at-risk areas, and poor management of the dams,
which release excess water when they reach maximum levels. The overflow
often floods the surrounding communities, as happened in 2007.
"In order to protect Villahermosa, the floodwater is being diverted to
the rural communities. People have to leave their homes and are
relocated in another area, where they have nowhere to farm," complained
As a result, "the pressure of supporting a family increases and gender violence gets worse," she said.
Because of the recurring floods, the women have had to diversify their activities, since recovering their crops was impossible.
"Relocation plans should include a gender perspective so as not to give
only partial solutions; a comprehensive policy is needed. It's an
expression of climate justice," said Rojas.
Women "are more exposed to risks, and their vulnerability depends on
their socioeconomic status. Resettlement partially compensates for that
vulnerability," she said.
The Heinrich Böll Foundation study says: "Given that gender equality and
women's empowerment are central elements for reducing vulnerabilities
to climate change, it is important that these policies - through clear,
productive actions - diminish these vulnerabilities and advance toward
greater gender equality."
Published by: Magne Ove Varsi | fwe2-CC-MAIN-2013-20-42296000 |
In the Garden:
Western Mountains and High Plains
Get ready to browse for seeds you'd like to start indoors.
How to Get Started with Starting Seeds Indoors
Starting seeds indoors is one of the best cures for late winter cabin fever and it motivates me to think about trying something new in the garden. With all those garden magazines piling up on my desk, I'm tempted to try a few new varieties that are often not available locally. It's time to get set up for starting some seeds indoors.
Though it would be nice to have a greenhouse, I'll make do with my home environment to start my seeds. Bright, high-quality light is the key to success. A south-facing window will work nicely. Find a spot with enough space to accommodate a work table or two, fluorescent lights, heat mats, seed starting trays, and containers. Seed starting takes minimal effort, but the basics never change. Here are some of my time-tested tips:
For best germination, use a special, soilless mixture just for seed starting. Most of the commercial mixes are a blend of milled sphagnum peat moss, perlite (heated volcanic rock to create pore space), and vermiculite (mica-like material to expand and retain moisture). Some may contain compost or a fertilizer, enough to keep the seedlings growing for a few weeks. Moisten the mixture with warm water before you use it. It should be evenly moist but not dripping wet (you shouldn't be able to squeeze any water out of a handful of the soilless mix).
Fill a clean container with dampened seed starting mixture. You can use flat, shallow seed starting trays, but you can be creative with containers. Recyclable plastic containers including yogurt cups, margarine containers, and egg cartons are just a few items I've used. Make sure they have some drainage holes punctured in the bottom. Press the seed mix down as you fill the containers and gently tamp it down to level and firm the surface. Fill the container to within a half-inch of the rim. It's a good idea to water the mix again before sowing seeds so they won't be washed around by a stream of water applied after sowing.
The number one reason seeds don't germinate well is that they are planted too deeply. So carefully read the seed packet and follow the instructions. If the seeds are extremely tiny, mix them with a little sand or vermiculite to make it easier to sow them thinly. Don't cover the seeds too deeply. Some seeds need light to germinate and shouldn't be covered at all (this will be indicated on the seed packet). On a plant label write down the variety and the sowing date, and stick it into or staple to the container.
Create a mini-greenhouse so that the seeds have warmth and constant, even moisture to germinate. Cover the container or tray with its plastic cover, or with a sheet of plastic food wrap, acrylic, or glass. Don't seal it too tightly as this can cut off all air circulation and molds may develop on the surface of the mixture. Air circulation discourages diseases like damping-off and other fungus problems. If you use plastic wrap, lift it off part of the day to let air in. Seeds that need darkness to germinate can be covered with a sheet of newspaper, black plastic, or fabric to cut out the light. If they need light to germinate, set the container in full light, but not direct sun.
Keep the seed starting mixture moist, but not soggy. When it becomes dry to the touch, mist it immediately or set the container in a pan of warm water. Here is where bottom watering is best as it won't cause the seeds to wash or splash around or encourage diseases.
The seed growing area should be warm and I prefer to use a heat mat to keep the temperature around 70 degrees F. Once the seeds have sprouted, remove the covers and set them where they get good light or under fluorescent lights placed 2 inches above the seedlings. Give the seedlings at least ten hours of light every day; 12 hours is even better. A light rigged with an automatic timer is ideal.
The rest is up to you to thin and transplant as needed. In the long run, once you get hooked, you, like me, will be starting seeds every year to curb the symptoms of cabin fever and try growing something new.
Care to share your gardening thoughts, insights, triumphs, or disappointments with your fellow gardening enthusiasts? Join the lively discussions on our FaceBook page and receive free daily tips! | fwe2-CC-MAIN-2013-20-42299000 |
What are guidelines for fertilizing annual and perennial flowers with nitrogen and phosphorus? —Rosanne Janssen, Wichita, KS
Plants need both major and minor nutrients. Minor nutrients (also called “trace elements”) are usually supplied by the soil, but plants use major nutrients in large enough quantities that we should add them regularly.
Of the major nutrients, nitrogen (N) is associated with green, vegetative growth; phosphorus (P) is associated with flowering; and potassium (K) aids root development. However, that’s an oversimplification. All three major nutrients are necessary, to some degree, for all aspects of healthy plant growth. If plants don’t have enough nitrogen or potassium, they’re unlikely to flower well, even if there’s plenty of phosphorus in the soil.
There is no one best fertilizer formulation for annuals or perennials. Many work well. However, choose a fertilizer that’s higher in phosphorus and potassium than nitrogen to encourage blooming and discourage excessive leafy growth. If you’re growing perennials primarily for foliage, use a balanced fertilizer with equal parts N-P-K.
Fertilize annuals throughout the growing season, and perennials from spring through midsummer. Stop fertilizing most perennials after midsummer so they’ll slow their growth and prepare for winter dormancy. Perennials that are still putting on new growth in fall will be vulnerable to winter injury.
Some newer annuals do better with frequent fertilizing (every 10 to 14 days), particularly if they’re growing in containers. An easy way to provide enough nutrients is to mix slow-release fertilizer pellets into the soil when you plant. The pellets will release nutrients into the soil for about three to six months. | fwe2-CC-MAIN-2013-20-42300000 |
Mapping Q lines - to join, add the earliest known direct paternal ancestor only for the tested line.
Haplogroup Q is one of the two branches of haplogroup P (M45). Haplogroup Q is believed to have arisen in Central Asia approximately 15,000 to 20,000 years ago. It has had multiple origins proposed. Much of the conflict may be attributed to limited sample sizes and early definitions that used a combination of M242, P36.2, and MEH2 as defining mutations.
This haplogroup has many diverse haplotypes despite its low frequency among most populations outside of the Americas. There also are over a dozen subclades that have been sampled and identified in modern populations.
Q is found predominantly in Central Siberia, Central Asia and among Native Americans. In the latter case it is the specific subclade Q1a3a1.
One hypothesis is that Q came to Europe with the Huns in the 5th century. The Huns are thought to have originated from Central Siberia, where haplogroup Q is still common nowadays. Q is found in 2% of the people in Hungary and up to 5% in isolated pockets in the mountains of Slovakia, just north of Hungary. It is historically attested that Hungary was where most of the Hunnic invaders finally settled after wreaking havoc around Europe. The Nordic and Baltic states have the second highest frequency of Q in Europe. Based on the Hunnic hypothesis, it is possible that a group of Huns settled in Sweden and/or Norway along with their allies, the Goths. The Romans reported that the Huns consisted of a small ruling elite and that their armies were composed mostly of Germanic warriors. An alternative scenario is that Nordic and Baltic Q came through the Uralic-speaking population of Siberia via Finland and Lappland, but this is unlikely because Q is not more common in Finland and does not correlate with the densities of the Uralic haplogroup N1c1.
Other Central Asian or Siberian migrations might have brought Q to Ukraine in the late Antiquity or Medieval period. For instance, the multi-ethnic Central Asian troops of Genghis Khan could very well have carried some haplogroup Q (along with C, G, O and R1a) to Eastern Europe, but not to Central Europe or Scandinavia.
Sources and Resources
Information in Norwegian | fwe2-CC-MAIN-2013-20-42311000 |
CNN was founded by Georgia businessman Ted Turner. In the 1970s Turner took advantage of the increasing availability of communications satellites to begin broadcasting his independent UHF station via satellite to cable systems across the country.
Plans for CNN were publicly announced in May 1979. With the bravado that was one of his trademarks, Turner predicted that CNN would represent "the greatest achievement in the history of journalism." Reese Schonfeld would serve as the network's first president and CEO. Veteran journalist Daniel Schorr, who had worked for CBS News during the "golden age" of Edward R. Murrow, lent his credibility to the venture when he agreed to become the new channel's most visible correspondent. Turner set an ambitious goal of beginning CNN's broadcast on June 1, 1980.
Early response was skeptical. Critics doubted whether there was a market for around-the-clock news, and many questioned whether such a venture could be profitable. In a television news universe dominated by the "big three" networks (CBS, NBC, and ABC), many wondered if there was room for such a shoestring operation, particularly one that planned to fill an enormous amount of airtime on a budget that was a fraction of what the networks spent.
Despite formidable organizational and technical obstacles (including the loss of SATCOM III, the satellite originally scheduled to carry the network's signal), CNN managed to make its June deadline. An estimated 1.7 million cable television subscribers were able to receive the channel when it aired. Although the first day did not go without a hitch, CNN did get its first "scoop" only minutes into its inaugural broadcast, cutting away from its first commercial break to bring viewers live coverage of U.S. president Jimmy Carter's visit to the Fort Wayne, Indiana, hospital room of civil rights leader Vernon Jordan, who had been wounded in an assassination attempt.
Part of the concept of CNN was that the news, not the anchor, would be the star. The network's early format, drawn in part from that of all-news radio, was centered on a news "wheel." Major stories were repeated on a cyclical basis throughout the day, sometimes with minor modifications. New stories were added to the mix periodically. At any time, however, breaking news could arise and dominate the schedule.
Growth and Expansion
Derided by some as the "Chicken Noodle News," CNN began to gain respectability throughout the 1980s.
As it grew more successful, CNN expanded its lineup and its family of channels. Crossfire, a local Washington, D.C., show picked up by CNN, became a major venue for heated political discussion. In 1985 the network hired longtime radio talk-show host Larry King to host an hour-long nightly interview program. Larry King Live was an immediate ratings success.
In December 1981 Turner Broadcasting launched Headline News, the first major CNN spin-off. This channel reduced the "wheel" concept to its basics, repeating major stories at the top of every half hour, with entertainment, sports, and weather at scheduled intervals in every cycle. In 1987 CNN moved from its original headquarters in a former country club into its current headquarters: the Omni International complex in Atlanta became the CNN Center.
"Live from Baghdad"
As it passed into its second decade, CNN was becoming an international presence. Perhaps no event highlighted its increasing importance more than the Persian Gulf War (1990-91). An international coalition led by the United States sought to remove the forces of Iraq's Saddam Hussein from the nation of Kuwait. As a deadline for peaceful resolution set by U.S. president George H. W. Bush approached, the Iraqi government ordered most foreign television journalists out of the country. Only CNN was allowed to remain. As the bombardment of Iraq began in January, viewers across the world saw the war's opening hours, as Iraqi antiaircraft fire lit up the sky over Baghdad. Such reporters as Peter Arnett in Iraq and Wolf Blitzer at the Pentagon soon became household names. CNN, and Arnett in particular, were singled out by many for criticism when the network aired reports that had been censored by Iraqi officials.
A New Media Environment
By 1995 CNN had bureaus around the world and more than 2,500 employees on its editorial staff. That year, Turner Broadcasting System, the parent company of CNN, was bought by Time Warner, Incorporated. Ted Turner became vice chair of Time Warner. In 2001 Time Warner merged with the Internet service provider America Online, creating the world's largest media conglomerate. In January 2003 Turner announced his intention to step down as vice chair.
By the late 1990s CNN faced stiff competition from other cable news channels, such as MSNBC (a joint venture of Microsoft and NBC) and Fox News Channel (owned by billionaire Rupert Murdoch's News Corporation), both of which were launched in 1996.
Although often different in tone, CNN's cable competitors were largely using the model pioneered by CNN. Media critics over the years have both lauded CNN for its attention to international issues and lamented the compression of editorial decision-making processes spurred when the live around-the-clock cycle introduced by the network produces a "rush to air." Some observers now speak of "the CNN effect," as expanded television news coverage affects political, diplomatic, and military decision making on a global level.
Whatever the future may bring for CNN, it has been instrumental in changing the way millions of people get their news. Speaking shortly before the network's launch, Turner promised that, barring technical problems, "We won't be signing off until the world ends. We'll be on, and we will cover the end of the world, live, and that will be our last event. . . . and when the end of the world comes, we'll play 'Nearer My God to Thee' before we sign off."
Don M. Flournoy and Robert K. Stewart, CNN: Making News in the Global Market (Luton, England: University of Luton Press, 1997).
Reese Schonfeld, Me and Ted against the World: The Unauthorized Story of the Founding of CNN (New York: Cliff Street Books, 2001).
Perry McCoy Smith, How CNN Fought the War: A View from the Inside (New York: Carol Publishing Group, 1991).
Hank Whittemore, CNN: The Inside Story (Boston: Little, Brown, 1990).
Lain Hughes, University of Georgia
A project of the Georgia Humanities Council, in partnership with the University of Georgia Press, the University System of Georgia/GALILEO, and the Office of the Governor. | fwe2-CC-MAIN-2013-20-42314000 |
High Temperature Instruments for supercritical geothermal reservoir characterization and exploitation
GFZ Part in Workpackage 4: Production integrity monitoring
Within HITI, new surface and downhole tools and approaches for deep high-temperature boreholes are developed, built, and tested in the field. The new set of tools and methods has been chosen to provide a basic set of data needed to describe both the supercritical reservoir structure and dynamics (measurement of temperature, pressure, natural gamma radiation, electrical resistivity, reservoir storativity, and acoustic imaging of the borehole wall) and the evolution of the casing and cement integrity during production (acoustic imaging). The new tools will be tested in-situ in existing Icelandic wells, including the IDDP (“Iceland Deep Drilling Project”) hole.
Within workpackage 4 “Production Integrity Monitoring”, GFZ contributes its expertise in fiber-optic temperature monitoring, which is further developed for the specific demands of high-temperature geothermal wells. The current system was deployed at 4.2 km depth at temperatures of 146 °C at the In-situ Geothermal Lab in Groß-Schönebeck (Henninges et al., 2005). Within HITI, a DTS sensor cable will be tested during a field experiment in a high-enthalpy geothermal reservoir in Iceland.
© Fournier (1999)
- Pressure-enthalpy diagram for pure H2O with selected isotherms. The conditions under which steam and water coexist are shown by the shaded area, bounded by the boiling point curve to the left and the dew point curve to the right. The arrows show various possible cooling paths (Fournier, 1999).
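To reproduce a few points on the boiling-point and dew-point curves shown in Figure 1, one can query a thermodynamic property library. The sketch below assumes the open-source CoolProp package, which is not part of the HITI toolchain and is used here for illustration only.

```python
# Compute saturation enthalpies of pure water for a pressure-enthalpy diagram,
# assuming the open-source CoolProp package is installed (pip install CoolProp).
from CoolProp.CoolProp import PropsSI

for p_mpa in (0.1, 1.0, 5.0, 10.0, 15.0, 20.0):
    p = p_mpa * 1e6                                   # pressure in Pa
    h_liq = PropsSI('H', 'P', p, 'Q', 0, 'Water')     # saturated liquid (boiling point curve)
    h_vap = PropsSI('H', 'P', p, 'Q', 1, 'Water')     # saturated vapour (dew point curve)
    print(f"{p_mpa:5.1f} MPa: h_liq = {h_liq/1e3:7.1f} kJ/kg, h_vap = {h_vap/1e3:7.1f} kJ/kg")

# Above the critical pressure of water (about 22.06 MPa) the two curves meet and
# the two-phase (steam + water) region of the diagram disappears.
```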
The pressure-enthalpy diagram for pure water (Figure 1) from Fournier (1999) provides a summary of how a supercritical geothermal system might be managed to produce electricity. For more information, see the European project HITI. | fwe2-CC-MAIN-2013-20-42320000 |
Yale researchers discover source of signals that trigger hair growth
By Darren Quick
September 5, 2011
In news that offers hope to millions of chrome-domes everywhere - yours truly included - Yale researchers have made a discovery that could lead to new treatments for baldness. While men with male pattern baldness still have stem cells in follicle roots, they need signals from within the skin to grow hair. Until now, the source of those signals that trigger hair growth has been unclear, but the Yale researchers claim to have now discovered it.
When hair dies, the research team led by Valerie Horsley, assistant professor of molecular, cellular and developmental biology, observed that the layer of fat in the scalp that comprises most of the skin's thickness shrinks. When hair growth begins, the fat layer expands in a process called adipogenesis. They identified a type of stem cell - adipose precursor cells - within the skin's fatty layer that is involved in the creation of new fat cells. They showed that these cells' production of molecules called PDGF (platelet-derived growth factors) was necessary to spur hair regrowth in mice.
Horsley's team is trying to identify other signals produced by adipose precursor stem cells that may play a role in regulating hair growth. She also wants to know whether these same signals are required for human hair growth.
"If we can get these fat cells in the skin to talk to the dormant stem cells at the base of hair follicles, we might be able to get hair to grow again," said Horsley.
| fwe2-CC-MAIN-2013-20-42324000 |
Jan 7, 2011
New form of public involvement in French grapevine project
Genetic engineering experiment: From information to interaction
The Institut National de Recherche Agronomique in Alsace has been trialling a new form of public involvement over recent years. A release experiment with genetically modified grapevines was monitored for six years by a Local Monitoring Committee, which helped develop the biosafety research questions. The project ended in 2010 when the trial field was destroyed. The final report appeared in the online journal PLoS Biology at the end of 2010.
Fanleaf degeneration is a crop disease with significant financial consequences for wine-growers. One of the symptoms is spotty, yellow leaves. The INRA in Colmar studied transgenic grapevines that are resistant to this disease.
In the past, methods of involving the public in the introduction of new technologies have usually been restricted to public information or public hearings. More recent methods place a greater emphasis on the active involvement of citizens and stakeholders. One such method was trialled at the Institut National de Recherche Agronomique (INRA) in Colmar from 2003 to 2010.
The focus was a field trial with GM grapevines that are resistant to the grapevine fanleaf virus (GFLV). GFLV is one of several viruses that cause fanleaf degeneration. It is transmitted via soil-dwelling nematodes. Affected plants normally have to be completely removed and the soil treated with nematicidal substances, although these are banned in many countries. The transgenic grapevines produce a coat protein of the GFLV virus, which protects them to a large extent against infection by the ‘real’ viruses. Since the virus is transmitted through the soil, only the rootstocks are genetically modified; the scions grafted onto them do not contain any transgenes. For the field trial, soil was taken from two infected vineyards and brought to the INRA site.
The Local Monitoring Committee (LMC), which was convened before the start of the trial, had no fixed membership, but was open to anyone interested, and members were free to pull out at any time. The members were representatives of wine-growers, consumer associations, environmental and nature conservation associations, representatives of the town council, the regional council and the regional environment agencies, as well as one independent wine-grower and a neighbour of the trial site. Despite the considerable time investment involved, the composition of the committee remained stable over a period of six years.
The biosafety research experiments on the GM grapevines were planned in the first instance by INRA scientists and then discussed and modified in the LMC. Following discussions in the LMC, for instance, scions were chosen from a grapevine variety that is not otherwise grown in Alsace and which has a very different appearance from the Alsace grapevines. Although the scions were not genetically modified, this approach was intended to prevent fears among the local community about the transgene outcrossing to native grapevines. Another modification instigated by the LMC was for a membrane to be buried under the trial field to isolate the experiment. This was designed to prevent the GFLV-infected nematodes from spreading. The membrane was also employed because of fears raised in the LMC that horizontal gene transfer could take place between the transgenic rootstocks and the nematodes. However, the INRA scientists regarded these fears as unfounded.
The LMC also initiated additional research, e.g. into whether an exchange of genetic material takes place between the transgenic rootstock and the soil microflora or the non-transgenic scion. In addition, the LMC developed a research programme on conventional methods of controlling the GFLV virus.
After the field trial was partially destroyed in September 2009, the LMC received broad support from a wide range of organisations and parties, including from those opposed to the use of genetic engineering in agriculture. Since the rootstocks remained unharmed, the research work was resumed. However, in August 2010 the field trial was destroyed so completely that the research had to be abandoned. | fwe2-CC-MAIN-2013-20-42327000 |
Analyzing the educational achievement in Latino/a students in California
This qualitative research will examine factors that influence the educational achievement of Latino/a students in California. This research will provide a historiography of Latino/a education from the past to the present, with an emphasis on educational barriers such as limited access to a quality education, under-qualified teachers, poverty, limited English proficiency, citizenship status, and less demanding curricula. The investigation will focus on the following areas: the percentage of Latino/a students that study in public schools compared to other ethnicities; the ways in which the language barrier faced by many Latino/a students and the lack of qualified teachers to tackle this problem affect their overall educational performance; how the low average income of Latino/a parents and their level of formal education affect the education of their children; the impact that undocumented students have on the Latino/a youth population in general; and the programs and organizations that support Latino/a students in California.
University of Puerto Rico at Río Piedras
Dr. Christopher M. Span
Department of Research Advisor: Educational Policy Studies
Year of Publication: | fwe2-CC-MAIN-2013-20-42344000 |
Despite the many initiatives launched in the last two decades, there is still a long way to go to tackle Britain's big health and lifestyle challenges: alcohol misuse, obesity and smoking. The biggest and worst of these public health challenges remains smoking. Unlike high-fat foods or alcohol, there is no acceptable level of consumption of cigarettes. Smoking kills – it's that simple. Not only does smoking shorten smokers' lives, but it also comes at an enormous cost to the taxpayer. Smoking-related illnesses are estimated to cost the NHS at least £2.7bn a year in England alone.
Today marks the closure of the government's consultation on whether plain packaging of tobacco products should be introduced in the UK. The prospect of standardised cigarette packaging coming into law will inevitably result in a greater focus on the harmful effects associated with smoking and the role that our government can play in helping people to kick the habit.
The medical evidence is startlingly clear: smoking is the single biggest preventable cause of early death and illness in the UK, directly accounting for more than 100,000 unnecessary deaths each year. One in two long-term smokers will die prematurely from a smoking-related disease. We also know that smoking harms others through passive smoking in the home, and smoking by pregnant women remains a significant problem: more than one in six mothers smoke during pregnancy, with potentially harmful results for both mum and baby.
Smoking also comes at a large financial cost, and often to those who can least afford it. The average price for a packet of cigarettes translates to £51.80 a week for a 20-a-day habit. We know that smoking rates are highest among those on the lowest incomes, further entrenching the health inequalities that exist between the richest and poorest people in our country.
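As a back-of-the-envelope check on that weekly figure, the short sketch below works out the implied price per pack and an approximate annual cost. The per-pack price is inferred from the article's £51.80-a-week figure and is an assumption, not an official statistic.

```python
# Back-of-the-envelope check of the weekly smoking cost cited above (illustrative assumptions only).

WEEKLY_COST_GBP = 51.80   # figure quoted in the article for a 20-a-day habit
PACKS_PER_WEEK = 7        # a 20-a-day habit is one 20-cigarette pack per day

implied_pack_price = WEEKLY_COST_GBP / PACKS_PER_WEEK
annual_cost = implied_pack_price * 365

print(f"Implied price per pack: £{implied_pack_price:.2f}")   # £7.40
print(f"Approximate annual cost: £{annual_cost:,.0f}")        # roughly £2,700 a year
```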
So how do we change things? We must discourage young people from taking up smoking in the first place. While there are fewer older adult smokers than there were a decade ago, smoking is still common among young people, and in particular amongst teenage girls.
The triumph of Britain's athletes at the London Olympic Games has provided a once in a generation opportunity to inspire people to do more exercise and to address some of the toughest public health challenges we face as a country, but it is unlikely that sporting inspiration and the Olympic legacy alone will be enough. One of the most overlooked and potentially most successful aspects of the coalition government's recent NHS reforms was the allocation of much of the public health budget and the responsibility for public health and wellbeing to local councils. Given that there is a clear and pressing need to do more to encourage young people not to smoke, and that local councils also have a responsibility for education, there is now a real potential for properly targeted and co-ordinated public health and lifestyle interventions in our schools, building not only on Britain's Olympic success but also looking to the longer-term.
But we need to go further than education. Studies show that it is young people who are also more likely to be seduced by the marketing techniques of the tobacco companies, which include the way in which cigarette packets are branded and packaged to increase their appeal. The government needs to review all of the available evidence about the effectiveness of such a measure before going ahead with a plain packaging policy, but introducing plain packaging for cigarettes could certainly help to reduce the brand marketing appeal of cigarettes to teenagers, and most importantly, help to stop young people from developing a smoking habit that can only shorten their lives. | fwe2-CC-MAIN-2013-20-42355000 |
Last year the Bill and Melinda Gates Foundation raised eyebrows with its announcement that it would fund an initiative to take toilet technology to the next level. With 2.5 billion people lacking access to sanitation, in a world that cannot afford to keep using potable water to carry away human waste, the economic opportunities that the reinvented toilet offers businesses could be highly lucrative.
Ground zero for the quest to find the perfect toilet for the 21st century's needs may as well be Durban, South Africa. The coastal city, with a population of 3.8 million, is a prime test bed for finding a new solution to the water-hogging commode, which has changed very little, either technically or functionally, since its invention in the 1850s.
Despite having an average annual rainfall of 1200mm, Durban, as is the case with much of South Africa, now faces increasing water scarcity. The city's infrastructure suffers further strain as more of South Africa's rural poor move there to find work. Currently 800,000 residents live in shacks situated in districts that have marginal sanitation. Durban has made impressive strides on water issues in the past decade. Tap water, for example, is now safe to drink. Nevertheless, 230,000 families still lack access to safe and hygienic toilets. Communal toilet blocks are a stopgap measure for residents who lack indoor plumbing or space for a private toilet. But with the urban poor viewing the porcelain flushing toilet as the gold standard, municipalities such as Durban face the dual challenge of diminishing water supplies and meeting citizens' increased expectations. For the world's poor, a clean toilet is not just about health, but offers dignity, privacy and a break from the daily chaos in the streets.
Durban's pressing challenge is balancing the needs of its citizens, tight water supplies and the mandate of the South African constitution, which states that access to water and a clean environment are inherent rights. For now the city's strategy is to follow a "sanitation edge concept," under which waterborne sanitation is provided where the housing density justifies such infrastructure. In more remote sections of the city, dry urine-diversion (UD) toilets are the standard. And hence the dilemma: the Victorian-era flush toilets are wasteful, but dry pit toilets are not clean or safe to use in the long term. In the end, poorer citizens want what they view as a simple tool with a handle that flushes. So what about one that does not discharge litres and litres of water?
To that end, the city of Durban has entered a partnership with the Gates Foundation and the Swiss aquatic research institute EAWAG to find a solution that captures the functionality of the flush toilet without waste. According to Neil Macleod, head of Durban's water and sanitation department, the holy grail for the future toilet is one that not only eliminates waste, but also generates wealth.
Speaking to an audience at World Water Week in Stockholm, Macleod said the technology to recover waste and energy from human waste exists, but the process requires much refinement. In a world where resources such as phosphorous are becoming limited and expensive, last night's dinner, multiplied by millions and even billions, could offer a wealth of materials that could provide energy, fertiliser and even recycled water. And the technologies involved could include solar, microwaves and nanotechnology.
The toilet's future, said Macleod, is analogous to what has happened with telephones over the past two decades. In the same way that mobile phones skipped a generation in the developing world, a similar story could unfold with toilets. Instead of wasteful flush toilets replacing filthy pit latrines, a future commode that uses modern technology could generate economic opportunity across the globe. Rather than a massive revamp of centuries-old infrastructure in cities, Macleod envisions decentralised water technology systems where waste would be separated very close to its source. Could such a contraption resemble a washing machine at the back of a house, where recycled water and fertiliser flow out to separate pipes? Could water, which is now generally a monopoly controlled by one central authority, follow the path of computing and telephony and become managed at a more decentralised level?
The shift in viewing sewage as a valuable resource rather than waste will require a massive rethink by government, business and consumers. But a nascent clean technology sector focused on the reinvention of the 150-year-old toilet is already taking hold. Entrepreneurs have started to cash in: the Gates Foundation has announced the first-round winners of its "Reinvent the Toilet Challenge" and Durban will host the World Toilet summit this December. The future commode, waterless and, for now, a wizardly concept, will bring wealth to a new class of inventors – and also enrich the lives of millions who lack the simplest tool that citizens of wealthier nations access on a daily basis with little thought.
Leon Kaye is founder and editor of GreenGoPost.com | fwe2-CC-MAIN-2013-20-42358000 |
One of the key tenets of Generally Accepted Accounting Principles (GAAP) is the matching principle. The matching principle states that companies should report associated costs and benefits at the same time.
If a company buys a $300 million cruise ship in 1982 and then sells tickets to passengers for the next 30 years, the company should not report a $300 million expense in 1982 and then ticket sales for 1982 through 2012. Instead, the company should spread the purchase price of the ship (the cost) over the same time period it sells tickets (the benefit).
To create income statements that meet the matching principle, accountants use an expense called depreciation.
So, instead of reporting a $300 million purchase expense in 1982, the company might:
Report a $10 million depreciation expense ($300 million spread evenly over 30 years) in 1982, 1983, 1984...and every year after that for the 30 years the company expects to sell tickets to passengers on this cruise ship.
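Under these assumptions, a minimal sketch of a straight-line depreciation schedule might look like the following. The $300 million cost, 30-year useful life and zero salvage value are illustrative figures taken from the example above; real companies choose among several GAAP-permitted methods, as the notes to their financial statements explain.

```python
# Minimal straight-line depreciation sketch; the cost, useful life and zero
# salvage value are illustrative assumptions from the cruise-ship example.

def straight_line_schedule(cost, salvage_value, useful_life_years):
    """Return (year, annual expense, year-end book value) tuples."""
    annual_expense = (cost - salvage_value) / useful_life_years
    schedule = []
    book_value = cost
    for year in range(1, useful_life_years + 1):
        book_value -= annual_expense
        schedule.append((year, annual_expense, book_value))
    return schedule

# Cruise-ship example: $300 million cost, zero salvage value, 30-year useful life.
for year, expense, book_value in straight_line_schedule(300_000_000, 0, 30):
    print(f"Year {year:2d}: expense ${expense/1e6:,.0f}M, book value ${book_value/1e6:,.0f}M")
```

Each year the sketch charges the same $10 million expense and reduces the asset's book value accordingly, so the cost is matched against the 30 years of ticket sales.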
To calculate depreciation, a company must make estimates and choices such as:
The cost of the asset
The useful life of the asset
The salvage value of the asset at the end of its useful life
And a way of spreading the cost of the asset to match the time when the asset provides benefits
The range of different ways of spreading the cost under GAAP accounting is too long to list here. However, public companies in the United States explain their depreciation choices to shareholders in a note to their financial statements. It is critical that investors read this note, which can be found in the company's 10-K.
Past depreciation expenses accumulate on the balance sheet in an account called accumulated depreciation. Most public companies choose not to show this contra asset account separately on the balance sheet they present to shareholders. Instead, they simply show a single item marked "Net", such as Property, Plant, and Equipment - Net. This is actually the asset account netted against the contra asset account.
A contra asset account is an account that offsets an asset account. So, for example a company might have:
Property, Plant, and Equipment - Gross: $150 million
Accumulated Depreciation: $120 million
Property, Plant, and Equipment - Net: $30 million
In this case, the only item likely to be shown on the balance sheet is Property, Plant, and Equipment - Net. This is the cost of the company's property, plant, and equipment (asset account) minus the accumulated depreciation (the contra asset account). It means the company's assets cost $150 million, the company has reported $120 million in depreciation expense over the years, and the company is now reporting that the assets have a book value of $30 million.
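As a minimal sketch of that netting, the snippet below reproduces the figures from the example; the helper function is hypothetical and simply subtracts the contra asset account from the gross asset account.

```python
# Netting the asset account against its contra asset account,
# using the figures from the example above.

def net_book_value(gross_ppe, accumulated_depreciation):
    """Property, Plant, and Equipment - Net = gross cost minus accumulated depreciation."""
    return gross_ppe - accumulated_depreciation

gross = 150_000_000        # Property, Plant, and Equipment - Gross
accumulated = 120_000_000  # Accumulated Depreciation (the contra asset account)

print(net_book_value(gross, accumulated))  # 30000000 -> reported as "Property, Plant, and Equipment - Net"
```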
It is possible for a company to have fully depreciated assets on its balance sheet. This means the company's estimate of the useful life of the asset was shorter than the asset's actual useful life. As a result, the asset - although it is still being used - is carried on the balance sheet at its salvage value.
This is a reminder that depreciation involves estimates and choices. It is not an infallible process.
Companies do not have a cash outlay for depreciation. Therefore, depreciation is added back in the cash flow statement.
Although depreciation is not a cash cost, it is a real business cost because the company has to pay for the fixed assets when it purchases them. Both Warren Buffett and Charlie Munger hate the idea of EBITDA because depreciation is not included as an expense. Warren Buffett even joked, "We prefer earnings before everything," when criticizing the abuse of EBITDA.
Depreciation estimates make the calculation of net income susceptible to management's accounting choices. These choices can be either overly aggressive or overly conservative.
Fern Extract and Sun Protection
May 4, 2009 | Written by JP
Summer is nearly upon us, and that means that many of us will be spending more time outdoors in the sun. While this is a good thing in many respects, it also increases the likelihood of sun damage and premature aging of the skin. Another very real concern is skin cancer. But these risks may be significantly reduced if we protect ourselves from excessive UV radiation during the peak hours of the day and support the body from the inside out.
There is a little known nutritional supplement that may help shield the skin from the harmful effects of summertime sun exposure. I’m referring to a fern extract (Polypodium leucotomos) that has been the subject of scientific study for over a decade. Here’s an overview of several studies that support its use as an “internal sunscreen”.
- In 2004, a study at the Harvard Medical School Department of Dermatology tested the effects of a fern extract on 9 healthy adults. The volunteers were exposed to artificial UV radiation on two different occasions. In one instance, they were asked to take the fern extract. The second UV radiation session was administered without the supplement. Skin tests performed 24 hours after the UV exposures demonstrated a significant “chemophotoprotective” effect thanks to the fern extract. The dosage used was 7.5 milligrams per kilogram of body weight. This would equate to just over 500 mg for a 150-pound individual and about 700 mg for someone who weighs 200 pounds; a rough conversion for other body weights is sketched just after this list. (1)
- An Italian trial from 2007 found that those with sun sensitivity also responded very well to fern supplementation. 25 patients consumed 480 mg of fern extract a day and experienced a statistically significant reduction in “skin reaction and subjective symptoms”. In addition, this natural medication did not provoke unwanted side effects. (2)
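To translate the Harvard study's 7.5 mg/kg figure into a dose for a given body weight, a rough, purely illustrative calculation is sketched below. This is not dosing advice; the only inputs are the per-kilogram figure quoted above and the standard pounds-to-kilograms conversion.

```python
# Rough body-weight conversion for the 7.5 mg/kg dose cited above (illustrative only, not dosing advice).

LB_TO_KG = 0.45359237  # standard pounds-to-kilograms conversion factor

def dose_mg(weight_lb, mg_per_kg=7.5):
    """Convert a body weight in pounds into a total daily dose in milligrams."""
    return weight_lb * LB_TO_KG * mg_per_kg

print(round(dose_mg(150)))  # ~510 mg, matching the "just over 500 mg" figure above
print(round(dose_mg(200)))  # ~680 mg, close to the article's "about 700 mg"
```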
A recent scientific review from the Sloan Kettering Cancer Center revealed several proposed mechanisms by which this fern extract appears to work:
- It inhibits the formation of free radicals and the typical oxidative damage brought about by UV radiation. This may have to do, in part, with the naturally occurring antioxidants present in ferns. (3)
- It specifically protects skin cells and DNA from sun related damage/decomposition and cell death. This may account for some of the skin anti-aging effect noted in some research. (4,5)
- Fern extracts also show a remarkable anti-inflammatory effect in skin tissue. Chronic inflammation appears to contribute to both cancer and wrinkle formation. (6,7)
- Fern extract preserves immune function during UV exposure, which may prevent harmful cellular changes that play a role in the development of skin cancer. (8)
The topical application of fern may yield added benefits as well. Studies as far back as 1997 show its far-reaching potential as a skin saver. (9,10) The prospect of combining oral and topical fern appears to be very promising indeed. In fact, even difficult-to-treat skin conditions such as atopic dermatitis, psoriasis and vitiligo (a loss of pigmentation in sections of the skin) may be responsive to a combination therapy that includes fern extract. (11,12,13,14,15)
Using fern extracts will not give you license to engage in reckless sun exposure. But it may give you an added layer of protection from the harmful effects of UV radiation. We all should spend some time in the sun. The health benefits are undeniable. But we should do so in a judicious manner. Fern extract appears to be an ally which can help us to derive more of the sun’s benefits with less potential for accompanying damage.
Tags: Skin Care
Posted in Nutritional Supplements | fwe2-CC-MAIN-2013-20-42378000 |
Hebrew vocabulary spans a wide range of words, with particular attention to the many forms derived from each of them. There are many terms that need to be memorized in order to achieve meaningful results. Numerous flashcard tools and software packages are available online that help you understand and learn Hebrew vocabulary easily, efficiently and quickly, so you can build your skills with a minimum of time and effort.

There are many accredited courses available, including online, taught by well-qualified and trained faculty. Software and flashcards let you review, test yourself and expand your vocabulary by listening carefully to the pronunciation and speaking accordingly. Learning Hebrew vocabulary enables you to read and write the language with real efficiency; some letters have strong and weak pronunciations, which matters when writing with the vowel system. You need to understand and learn the rules of Hebrew vocabulary in order to become a skilled learner. A solid vocabulary lets you introduce yourself and start a conversation with full command of the language, and gives you the chance to explore and understand the rich culture and traditions of the Hebrew-speaking community and its language.
Learn Now The Hebrew Vocabulary & Idioms (Learn Hebrew For Free Online):
- Hebrew Vocabulary – Our Body
- Hebrew Vocabulary – Basic Talk
- Hebrew Vocabulary – Family
- Hebrew Vocabulary – Clothes
- Hebrew Vocabulary – Colors
- Hebrew Vocabulary – Animals
- Hebrew Vocabulary – Countries
- Hebrew Vocabulary – Numbers
- Hebrew Vocabulary – Qualities
- Hebrew Vocabulary – Questions | fwe2-CC-MAIN-2013-20-42381000 |
HELCOM Indicator Fact Sheets for 2005
As the environmental focal point of the Baltic Sea, HELCOM has been assessing the sources and inputs of nutrients and hazardous substances and their effects on ecosystems in the Baltic Sea for almost 30 years. The resulting indicators are based on scientific research carried out around the Baltic Sea under the HELCOM PLC and COMBINE monitoring programmes. During the past few years, HELCOM Indicator Fact Sheets have been compiled by responsible institutions and approved by the HELCOM Monitoring and Assessment Group. The Indicator Fact Sheets for 2005 are listed in the navigation menu on the left and older ones can be found in the Indicator Fact Sheet archive.
The development of sea surface temperature in the Baltic Sea in 2004 was characterised by rather cold months of June and July and by a warm August. The wave climate in the northern Baltic Sea in 2004 was characterised by a spring season that was calmer than usual and by a storm in December during which the significant wave height in the northern Baltic Proper reached a record value of 7.7 metres. The following ice winter was, by the extent of the ice cover, classified as normal. The break-up of ice occurred earlier than normal in most waters, and by 23 May the Baltic Sea was ice-free.
Life pulsates according to water inflows
The present state of the Baltic Sea is not only the result of anthropogenic pressures but is also influenced by hydrographic forces, such as the water exchange between the Baltic Sea and the North Sea. After the major Baltic inflow in January 2003, which renewed most of the deep water in the Baltic Sea, no new major inflow has taken place, and the near-bottom water in the Bornholm and eastern Gotland Basins returned to anoxic conditions in the middle of 2004.
The Baltic Sea continues to suffer the impacts of human activities
Baltic Sea habitats and species are threatened by eutrophication and elevated amounts of hazardous substances as a result of decades of human activities in the surrounding catchment area and in the sea.
Eutrophication is the result of excessive nutrient inputs resulting from a range of anthropogenic activities. Nutrients enter the sea either via runoff and riverine input or through direct discharges. Although nutrient inputs from point sources such as industries and municipalities have been cut significantly, the total input of nitrogen to the Baltic Sea is still over 1 million tonnes per year, of which 25 % enters as atmospheric deposition on the Baltic Sea and 75 % as waterborne inputs. The total input of phosphorus to the Baltic Sea is ca. 35 thousand tonnes and enters the Baltic Sea mainly as waterborne input, with the contribution of atmospheric deposition being only 1-5 % of the total. The main source of nutrient inputs is agriculture. (Please note that Indicator Fact Sheets on nutrient inputs to the Baltic Sea will be published in the near future).
The inputs of some hazardous substances to the Baltic Sea have been reduced considerably over the past 20 to 30 years. In particular, discharges of heavy metals have decreased. The large majority of heavy metal inputs enter the Baltic Sea via rivers or as direct discharges: 50 % for mercury, 60-70 % for lead and 75-85 % for cadmium. The remaining share of inputs is mainly from atmospheric deposition of these heavy metals.
Eutrophication intensifies phytoplankton blooms
The waterborne loads of nitrogen and phosphorus were significantly higher in 2004 compared to the previous year, partly due to natural fluctuations in inputs caused by varying hydrographical conditions. Annual emissions of nitrogen from the HELCOM Contracting Parties were lower in 2003 than in 1995. Mainly because of interannual changes in meteorology, no significant temporal pattern in nitrogen deposition to the Baltic Sea and its sub-basins can be detected; however, deposition in 2003 was 11 % lower than in 1995.
Eutrophication is an issue of major concern almost everywhere in the Baltic Sea area. Satellite-derived concentrations of chlorophyll-like pigments in the Baltic Sea are clearly higher than in the Skagerrak and North Sea. The average biomass production has increased by a factor of 2.5, leading to decreased water clarity, exceptionally intense algal blooms, more extensive areas of oxygen-depleted sea beds, as well as degraded habitats and changes in species abundance and distribution.
Annual integrated rates for sedimentation of organic matter in the Gotland Sea have not shown significant trends between 1995 and 2003. However, a decrease in water clarity has been observed in all Baltic Sea sub-regions over the last one hundred years, most pronounced in the Northern Baltic Proper and the Gulf of Finland.
Although no rising trend can be detected in spring blooms from 1992 to 2005, the 2005 spring bloom in the Gulf of Finland was more intense than in the previous year, while the bloom in the Arkona Basin was negligible.
Due to the poor weather during the summer of 2004, there were no major cyanobacteria blooms that year. As a result, levels of dissolved inorganic nutrients in the winter nutrient pool remained extremely high throughout the Baltic Proper, meaning that the risk of severe cyanobacterial blooms persisted. The average concentrations of dissolved inorganic nitrogen throughout 2004 were lower in all regions except at the entrance to and within the Gulf of Finland when compared to the reference (the average of the years 1993-2003). The persistent risk was confirmed by the 2005 summer blooms of cyanobacteria, which were amongst the most intense and widespread ever encountered in the Northern and Central Baltic Proper. High surface water temperatures are a prerequisite for intensive blooms of toxic Nodularia species.
In 2004, the abundance of the nitrogen fixing cyanobacteria as well as the ratio between the toxic Nodularia spumigena and the non-toxic Aphanizomenon flos-aquae were almost at the same level as in the previous four years.
Heavy metals and organic pollutants still persistent in the marine environment
The inputs of some hazardous substances to the Baltic Sea have been reduced considerably over the past 20 to 30 years. However, the concentrations of heavy metals and organic pollutants in sea water are still several times higher in the Baltic Sea compared to waters of the North Atlantic. As a result of efforts to reduce pollution, annual emissions of heavy metals to the air have decreased since 1990, and consequently their annual deposition onto the Baltic Sea has also halved since 1990. Riverine heavy metal loads (notably cadmium and lead) have also decreased for most of the coastal states.
Concentrations of contaminants in fish vary according to substance, species and location, but in general, the concentrations of cadmium, lead and PCBs have decreased. Still, the content of dioxins in fish muscle may exceed the authorized limits set by the European Commission.
Overall, the levels of radioactivity in Baltic Sea water and biota have shown declining trends since the Chernobyl accident in 1986, which caused significant fallout over the area. Radioactivity is now slowly transported from the Baltic Sea to the North Sea via the Kattegat. The amount of caesium-137 in Baltic Sea sediments, however, has remained largely unchanged, with the highest concentrations in the Bothnian Sea and the Gulf of Finland.
Habitats and species under threat
This year HELCOM introduces its first biodiversity indicators. The deteriorating state of the Baltic Sea affects marine life in many ways. Macrobenthic communities have been severely degraded by increased eutrophication throughout the Baltic Proper and the Gulf of Finland and are below the long-term averages. Populations of the amphipod Monoporeia affinis have crashed in the Gulf of Bothnia and the invasive polychaete Marenzelleria viridis has spread.
The lack of salt water inflows has diminished the habitat layer for heterotrophic organisms in general and those of marine origin, such as copepods, in particular. Although the total number of copepods has not changed dramatically, the ratio between different species has been affected, which in turn has had consequences at higher trophic levels. Herring, for instance, has suffered from a decline in its favoured diet and now competes with sprat for other species of copepods.
Decrease in observed illegal oil spills
The growth in maritime transportation during the past decade has increased the potential for illegal oil discharges. Since the late 1990s ships have been required to deliver oil or oily water from the machinery spaces as well as from ballast or cargo tanks to reception facilities in ports. Since 1999, the number of observed illegal oil discharges has gradually decreased every year, but in 2004 almost 300 illegal spills were still detected.
Information on the long-term variations in the Baltic marine environment can be found in:
Fourth Periodic Assessment of the State of the Marine Environment of the Baltic Sea, 1994-1998; Executive Summary (2001)
List of 2005 Indicator Fact Sheets | fwe2-CC-MAIN-2013-20-42382000 |
The Centers for Disease Control and Prevention (CDC) recently confirmed that a citizen of Great Britain, who had lived in Houston, TX for four years, has been diagnosed with variant Creutzfeldt-Jakob disease (vCJD). The 30-year-old man had begun experiencing symptoms of the disease before he moved back to Great Britain earlier this year. He was born in the United Kingdom (U.K.) and resided there from 1980-1996, a period of time when the risk of exposure to Bovine Spongiform Encephalopathy (BSE) through the consumption of contaminated beef was at its peak. BSE, known as mad cow disease in animals, is a progressive, degenerative disease affecting the central nervous system. The CDC was made aware of the case by the U.K. National Creutzfeldt-Jakob Disease Surveillance Unit in Edinburgh, Scotland.
Although the diagnosis of the U.K. man raises some concern, CDC and U.S. government officials do not view this case as proof of further domestic transmission. “He lived in the U.K. for the whole time they had a problem,” said Lawrence B. Schonberger, a medical epidemiologist at the CDC. He added, “almost certainly, this case represents a continuation of the outbreak that is going on in the U.K. and it is just by convention that he happened to have gotten sick here.”
Mad cow disease in animals and vCJD in humans, are prion-related diseases resulting in very serious neurological symptoms and death. There is currently no treatment for vCJD. If a person was to become infected with vCJD, there exists a theoretical possibility of them donating blood and bringing the disease into the U.S. blood supply. While residing in Houston, this particular man was never hospitalized, had not undergone any invasive surgeries or received donated blood, according to the CDC.
Beverly Boyd, a spokeswoman with the Texas Department of Agriculture asserts, “This is not a safety issue for Texas. We have taken all the necessary steps possible to prevent any exposure in the United States and we have a very safe beef supply in Texas and America.”
The CDC has stated that there is no connection between this U.K. human case and the BSE detection in a Texas cow this past summer. In June, the U.S. Department of Agriculture (USDA) confirmed the first case of BSE in a U.S.-born cow. However, it was clearly determined that the meat of that cow did not enter the food supply.
Source: Associated Press, November 22, 2005 | fwe2-CC-MAIN-2013-20-42383000 |
Hindu Religious Critiques
The Hindu view is that there should be peace and good will to all people and all religions, that no one should be discriminated against because of their religious beliefs. Therefore no one should fear that a true Hindu would interfere with their political or social rights, or their religious freedom. However Hindus should not press this tolerance so far that they fail to defend their own rights or allow distortions of their religion to go on unquestioned by others.
On a religious level, Hindus must employ a
different strategy than in the political sphere. While the political
sphere demands avoidance of religious issues unless political in
nature, in the religious sphere Hindus cannot forget that their
religion is under attack and fail to vigorously defend it. They must
be aware of religious issues and their social ramifications and not
ignore them under the guise of political tolerance.
Hinduism is a pluralistic tradition that contains many different teachers, teachings, and scriptures, and various names and forms for the Divine. It states that though there is One Truth there are many paths, which it represents by different Gods and Goddesses and various yogic approaches. This is a different approach from that of Western monotheistic religions, which are prone to an exclusivism of One God and his only or final representative. To such exclusive monotheistic beliefs, pluralistic traditions like the Hindu are polytheist, pagan and heathen: the enemy that has to be converted if not destroyed.