Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2012 June 23
Explanation: As seen from Frösön island in northern Sweden the Sun did set a day after the summer solstice. From that location below the arctic circle it settled slowly behind the northern horizon. During the sunset's final minute, this remarkable sequence of 7 images follows the distorted edge of the solar disk as it just disappears against a distant tree line, capturing both a green and blue flash. Not a myth even in a land of runes, the colorful but elusive glints are caused by atmospheric refraction enhanced by long, low, sight lines and strong atmospheric temperature gradients.
|
ASL Literature and Art
This section is a collection of ASL storytelling, poetry, works of art, and other creative works. It also includes posts on literary aspects of ASL.
Spoken language can convey sound effects in storytelling, whereas sign language can convey cinematic effects in storytelling.
Poetry in sign language has its own poetic features, such as rhymes, rhythms, meters, and other features that characterize poetry, which is not limited to speech.
Explore ASL literary arts in this section including some visual-linguistic literary works in ASL and discussion.
Selected works of interest
Deconstruct W.O.R.D.: an original poetry performance.
Knowing Fish: poetic narrative video.
Compare three versions of the poem "Spring Dawn," originally written by Meng Hao-jan. The poem is translated by the literary artist Jolanta Lapiak into ASL in video and in a unique, one-of-a-kind photograph print. Watch how ASL rhymes arise in this signed poem.
|
May 19, 2008 A vaccine created by University of Rochester Medical Center scientists prevents the development of Alzheimer's disease-like pathology in mice without causing inflammation or significant side effects.
Vaccinated mice generated an immune response to the protein known as amyloid-beta peptide, which accumulates in what are called "amyloid plaques" in brains of people with Alzheimer's. The vaccinated mice demonstrated normal learning skills and functioning memory in spite of being genetically designed to develop an aggressive form of the disease.
The Rochester scientists reported the findings in an article in the May issue of Molecular Therapy, the journal of The American Society of Gene Therapy.
"Our study demonstrates that we can create a potent but safe version of a vaccine that utilizes the strategy of immune response shaping to prevent Alzheimer's-related pathologies and memory deficits," said William Bowers, associate professor of neurology and of microbiology and immunology at the Medical Center and lead author of the article. "The vaccinated mice not only performed better, we found no evidence of signature amyloid plaque in their brains."
Alzheimer's is a progressive neurodegenerative disease associated with dementia and a decline in performance of normal activities. Hallmarks of the disease include the accumulation of amyloid plaques in the brains of patients and the loss of normally functioning tau, a protein that stabilizes the transport networks in neurons. Abnormal tau function eventually leads to another classic hallmark of Alzheimer's, neurofibrillary tangles in nerve cells. After several decades of exposure to these insults, neurons ultimately succumb and die, leading to progressively damaged learning and memory centers in the brain.
The mice that received the vaccines were genetically engineered to express large amounts of amyloid beta protein. They also harbored a mutation that causes the tau-related tangle pathology. Prior to the start of the vaccine study, the mice were trained to navigate a maze using spatial clues. They were then tested periodically during the 10-month study on the amount of time and distance traveled to an escape pod and the number of errors made along the way.
"What we found exciting was that by targeting one pathology of Alzheimer's -- amyloid beta -- we were able to also prevent the transition of tau from its normal form to a form found in the disease state," Bowers said.
The goal of the vaccine is to prompt the immune system to recognize amyloid beta protein and remove it. To create the vaccine, Bowers and the research group use a herpes virus that is stripped of the viral genes that can cause disease or harm. They then load the virus container with the genetic code for amyloid beta and interleukin-4, a protein that stimulates immune responses involving type 2 T helper cells, which are lymphocytes that play an important role in the immune system.
The research group tested several versions of a vaccine. Mice were given three injections of empty virus alone, a vaccine carrying only the amyloid beta genetic code, or a vaccine encoding both amyloid beta and interleukin-4, which was found to be the most effective.
"We have learned a great deal from this ongoing project," Bowers said. "Importantly, it has demonstrated the combined strengths of the gene delivery platform and the immune shaping concept for the creation of customized vaccines for Alzheimer's disease, as well as a number of other diseases. We are currently working on strategies we believe can make the vaccine even safer."
Bowers expects the vaccine eventually to be tested in people, but due to the number of studies required to satisfy regulatory requirements, it could be three or more years before human trials testing this type of Alzheimer's vaccine occur.
Grants from the National Institutes of Health supported the study. In addition to Bowers, authors of the Molecular Therapy article include Maria E. Frazer, Jennifer E. Hughes, Michael A. Mastrangelo and Jennifer Tibbens of the Medical Center and Howard J. Federoff of Georgetown University Medical Center.
|
Nephila jurassica: The biggest spider fossil ever found
Spiders are small arthropods, famous for the elasticity and strength of their silk and for their web-making abilities. For some people, spiders are not welcome in the home; as soon as they see one crawling on the ceiling, the first thought that comes to mind is to swat it at once.
But spiders predate us humans by a long way. And while spiders are often tiny creatures, a team of scientists has discovered the largest spider fossil ever found, in a layer of volcanic ash in Ningcheng County, Inner Mongolia, China. The research was carried out by paleontologist Professor Paul Selden of the University of Kansas and his team.
Named Nephila jurassica, this 165-million-year-old fossil is 2.5 cm in length and has a leg span of almost 9 cm. It is currently the largest known fossilized spider, and is from the family known as Nephilidae, the largest web-weaving spiders alive today.
According to research published online on 20 April 2011 in Biology Letters, this prehistoric spider was female and shows characteristics of the golden orb weaver. Widespread in warmer regions, the golden silk orb weavers are well known for the fabulous webs they weave. Females of this family weave the largest orb webs known.
"When I first saw it, I immediately realized that it was very unique not only because of its size, but also because the preservation was excellent," said ChungKun Shih, study co-author, and a visiting professor at Capital Normal University in Beijing, China.
According to a press release: “This fossil finding provides evidence that golden orb-webs were being woven and capturing medium to large insects in Jurassic times, and predation by these spiders would have played an important role in the natural selection of contemporaneous insects.”
|
A fossilised little finger discovered in a cave in the mountains of southern Siberia belonged to a young girl from an unknown group of archaic humans, scientists say.
The missing human relatives are thought to have inhabited much of Asia as recently as 30,000 years ago, and so shared the land with early modern humans and Neanderthals.
The finding paints a complex picture of human history in which our early ancestors left Africa 70,000 years ago to rub shoulders with other distant relatives in addition to the stocky, barrel-chested Neanderthals.
The new ancestors have been named “Denisovans” after the Denisova cave in the Altai mountains of southern Siberia where the finger bone was unearthed in 2008.
A “Denisovan” is thus a member of this archaic human group.
|
Tips to Facilitate Workshops Effectively
Facilitators play a very important role in the creation of a respectful, positive learning environment during a workshop. Here you will find some tips to facilitate workshops effectively.
- Make sure everybody has a chance to participate, for example through small group activities or direct questions to different participants. Help the group avoid long discussions between two people that may isolate the rest of the participants. Promote the importance of sharing the space and listening to different voices and opinions.
- Be prepared to make adjustments to the agenda – sometimes you have to cross out activities, but the most important thing is to achieve the general goals of the workshop.
- Do everything possible to have all the logistics ready beforehand, so you can then focus on the workshop’s agenda.
- Pay attention to the group’s energy and motivation – Plan activities where everyone is able to participate and to stay active and engaged.
- Provide space for the participants to be able to share their own experiences and knowledge. Remember that each one of us has a lot to learn and a lot to teach.
- Relax and have fun! Be a part of the process – You are learning, too, so you don’t have to know it all or do everything perfectly.
- Be prepared for difficult questions. Familiarize yourself with the topic and know the content of the workshop, but remember you don’t have to know all the answers! You can ask other participants what they know about the topic, or you can find out the answers later and share them with the participants after the workshop.
- Focus on giving general information – Avoid answering questions about specific cases. Usually, this can change the direction of the conversation and might be considered providing legal advice without a license to do so.
- Your work as facilitator is to help the group learn together, not necessarily to present all the information and be the “expert” in the topic.
- Try to be as clear as possible – especially when you are giving the exercises’ instructions. Work as a team with the other facilitators during the whole workshop.
|
A new tool to identify the calls of bat species could help conservation efforts.
Because bats are nocturnal and difficult to observe or catch, the most effective way to study them is to monitor their echolocation calls. These sounds are emitted in order to hear the echo bouncing back from surfaces around the bats, allowing them to navigate, hunt and communicate.
Many different measurements can be taken from each call, such as its minimum and maximum frequency, or how quickly the frequency changes during the call, and these measurements are used to help identify the species of bat.
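To make these call measurements concrete, here is a rough Python sketch (not the algorithm used by any real identification tool; the window, threshold, and synthetic test call are assumptions) of how minimum frequency, maximum frequency, and duration could be estimated from a recording via a spectrogram.

```python
# Illustrative only: extract simple call features (min/max frequency, duration)
# from a recording. Threshold, window choice, and the synthetic test call are
# assumptions for this sketch, not values used by any real identification tool.
import numpy as np
from scipy.signal import spectrogram

def call_features(samples, sample_rate, threshold_db=-30.0):
    """Return (min_freq_hz, max_freq_hz, duration_s) of the call's strongest energy."""
    freqs, times, sxx = spectrogram(samples, fs=sample_rate, window="hann", nperseg=256)
    power_db = 10 * np.log10(sxx + 1e-12)
    mask = power_db > (power_db.max() + threshold_db)  # keep bins within 30 dB of the peak
    active_freqs = freqs[mask.any(axis=1)]
    active_times = times[mask.any(axis=0)]
    return active_freqs.min(), active_freqs.max(), active_times[-1] - active_times[0]

# Synthetic example: a 5 ms downward sweep from 80 kHz to 40 kHz, sampled at 500 kHz.
fs = 500_000
t = np.arange(0, 0.005, 1 / fs)
call = np.sin(2 * np.pi * (80_000 * t - 4_000_000 * t ** 2))
print(call_features(call, fs))
```

In practice, features like these would be computed for large numbers of recorded calls and compared against reference collections before any species assignment is attempted.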
However, a paper by an international team of researchers, published in the Journal of Applied Ecology, asserts that poor standardisation of acoustic monitoring limits scientists’ ability to collate data.
Kate Jones, chairwoman of the UK-based Bat Conservation Trust, told the BBC that “without using the same identification methods everywhere, we cannot form reliable conclusions about how bat populations are doing and whether their distribution is changing.
"Because many bats migrate between different European countries, we need to monitor bats at a European - as well as country - scale.”
The team selected 1,350 calls from 34 different European bat species from EchoBank, a global echolocation library containing more than 200,000 bat call recordings. This raw data has allowed them to develop the identification tool, iBatsID, which can identify 34 out of 45 species of bats.
This free online tool works anywhere in Europe, and its creators claim it can identify most species correctly more than 80% of the time.
There are 18 species of bat residing in the UK, including the common pipistrelle and greater horseshoe bat.
Monitoring bats is vital not just for the bats themselves, but also for the whole ecosystem. Bats are extremely sensitive to changes in their environment, so if bat populations are declining, it can be an indication that other species might be affected in the future.
|
The common name for sedums is Stonecrop. There is a Stonecrop Nursery in eastern New York which was the first garden created by Frank Cabot. Frank created the Garden Conservancy, an organization which strives to preserve some of our exceptional gardens for posterity. Each year it also runs its Open Days Program which opens gardens to the public throughout the country. Frank Cabot went on to create Les Quatre Vents, an outstanding garden at his family home in Quebec.
There are two sedums which most gardeners grow. One is Sedum acre, a tiny low-growing groundcover plant with bright yellow flowers. This is being used effectively in the Peace Garden in the plaza between the library and city hall.
The other is Autumn Joy, which is in bloom now and will continue to provide color for months to come. Some references say it requires full sun. Not so! I have it in three locations in my garden. I have several plants growing out of a south-facing wall. But there are tall oaks and maples to the south, so the only time it gets direct sun is in spring before the oaks leaf out; the rest of the year it is in dappled light. Another plant is in the east-facing bed on top of my long stone wall, where it gets only morning sun. The third plant is in my shrub-perennial border, where it gets a bit of sun at mid-day. Mine is the ordinary run-of-the-mill Autumn Joy, but there are several cultivars offered in nurseries. Among these are Crimson; Iceberg, which has white flowers; Autumn Fire; and Chocolate Drop, which grows only eight inches tall with brown leaves and pink flowers.
There are two native sedums: Roseroot, Sedum rosea, and Wild Stonecrop, Sedum ternatum. A third, Wild Live-forever, Sedum telephioides, grows on cliffs and rocks in Pennsylvania and southward.
|
A Soyuz rocket launched two Galileo satellites into orbit on Friday, marking a crucial step for Europe’s planned navigation system, operator Arianespace announced.
The launch took place at the Kourou space base in French Guiana, at 3:15pm (6:15pm GMT).
Three and three-quarter hours later, the 700kg satellites were placed into orbit.
The new satellites add to the first two in the Galileo navigation system, which were launched on Oct. 21, last year.
Together they create a “mini-constellation.” Four is the minimum number of satellites needed to gain a navigational fix on the ground, using signals from the satellite to get a position for latitude, longitude, altitude and a time reference.
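As a toy illustration of that idea, the short Python sketch below recovers a receiver position and clock offset from four pseudoranges using a simple Gauss-Newton iteration. The satellite coordinates, receiver location, and clock error are invented for the example; real Galileo or GPS receivers must also handle orbit models, atmospheric delays, and many more measurements.

```python
# Toy position fix from four satellites: solve for (x, y, z) and receiver clock
# bias dt. All numbers below are made up for illustration.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_fix(sat_positions, pseudoranges, iterations=10):
    """Gauss-Newton solution of the pseudorange equations for position and clock bias."""
    x = np.zeros(3)  # start the iteration at Earth's centre
    dt = 0.0
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x, axis=1)
        residuals = ranges + C * dt - pseudoranges
        unit = (x - sat_positions) / ranges[:, None]            # d(range)/d(position)
        jacobian = np.hstack([unit, np.full((len(pseudoranges), 1), C)])
        step = np.linalg.lstsq(jacobian, -residuals, rcond=None)[0]
        x += step[:3]
        dt += step[3]
    return x, dt

# Four invented satellites roughly 23,000 km up, and a receiver near the surface.
sats = np.array([[15e6, 10e6, 20e6], [-12e6, 18e6, 19e6],
                 [20e6, -15e6, 16e6], [-5e6, -20e6, 21e6]])
true_pos = np.array([3.9e6, 3.0e6, 3.9e6])
true_bias = 1e-3  # a 1 ms receiver clock error
rho = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias
print(solve_fix(sats, rho))  # recovers true_pos and true_bias
```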
Galileo will ultimately consist of 30 satellites, six more than the US Global Positioning System.
By 2015, 18 satellites should be in place, which is sufficient for launching services to the public, followed by the rest in 2020, according to the European Space Agency.
It is claimed that the system will be accurate to within one meter. The US Global Positioning System, which became operational in 1995 and is currently being upgraded, is currently accurate to between three and eight meters.
In May, the European Commission said the cost by 2015 would be 5 billion euros (US$6.45 billion).
As a medium-sized launcher, Soyuz complements Europe’s heavyweight Ariane 5 and lightweight Vega rockets.
|
Emacs Lisp uses two kinds of storage for user-created Lisp objects: normal storage and pure storage. Normal storage is where all the new data created during an Emacs session are kept (see Garbage Collection). Pure storage is used for certain data in the preloaded standard Lisp files—data that should never change during actual use of Emacs.
Pure storage is allocated only while temacs is loading the
standard preloaded Lisp libraries. In the file emacs, it is
marked as read-only (on operating systems that permit this), so that
the memory space can be shared by all the Emacs jobs running on the
machine at once. Pure storage is not expandable; a fixed amount is
allocated when Emacs is compiled, and if that is not sufficient for
the preloaded libraries, temacs allocates dynamic memory for
the part that didn't fit. The resulting image will work, but garbage
collection (see Garbage Collection) is disabled in this situation,
causing a memory leak. Such an overflow normally won't happen unless
you try to preload additional libraries or add features to the
standard ones. Emacs will display a warning about the overflow when
it starts. If this happens, you should increase the compilation parameter
SYSTEM_PURESIZE_EXTRA in the file src/puresize.h and rebuild Emacs.
The function purecopy makes a copy of object in pure storage and returns it. It copies a string by simply making a new string with the same characters, but without text properties, in pure storage. It recursively copies the contents of vectors and cons cells. It does not make copies of other objects such as symbols, but just returns them unchanged. It signals an error if asked to copy markers.
This function is a no-op except while Emacs is being built and dumped; it is usually called only in preloaded Lisp files.
The value of the variable pure-bytes-used is the number of bytes of pure storage allocated so far. Typically, in a dumped Emacs, this number is very close to the total amount of pure storage available—if it were not, we would preallocate less.
The variable purify-flag determines whether defun should make a copy of the function definition in pure storage. If it is non-nil, then the function definition is copied into pure storage.
This flag is t while loading all of the basic functions for building Emacs initially (allowing those functions to be shareable and non-collectible). Dumping Emacs as an executable always writes nil in this variable, regardless of the value it actually has before and after dumping.
You should not change this flag in a running Emacs.
|
35 - The Philosopher's Toolkit: Aristotle's Logical Works
Peter discusses Aristotle’s pioneering work in logic, and looks at related issues like the ten categories and the famous “sea battle” argument for determinism.
• J. Hintikka, Time and Necessity. Studies in Aristotle's Theory of Modality (Oxford:1973).
• W. Leszl, “Aristotle's Logical Works and His Conception of Logic,” Topoi 23 (2004), 71–100.
• R. Smith, "Logic," in J. Barnes (ed.), The Cambridge Companion to Aristotle (Cambridge: 1995), 27-65.
• S. Waterlow, Passage and Possibility (Oxford: 1982).
On the "sea battle" problem:
• G.E.M. Anscombe, “Aristotle and the Sea Battle,” in J.M.E. Moravcsik (ed.), Aristotle: a Collection of Critical Essays, (1967), reprinted from Mind 65 (1956).
• D. Frede, “The Sea-Battle Reconsidered: a Defence of the Traditional Interpretation,” Oxford Studies in Ancient Philosophy 3 (1985).
• J. Hintikka, “The Once and Future Sea Fight: Aristotle’s Discussion of Future Contingents in de Interpretatione 9,” in his Time and Necessity (see above).
|
scintillation counter
scintillation counter, radiation detector that is triggered by a flash of light (or scintillation) produced when ionizing radiation traverses certain solid or liquid substances (phosphors), among which are thallium-activated sodium iodide, zinc sulfide, and organic compounds such as anthracene incorporated into solid plastics or liquid solvents. The light flashes are converted into electric pulses by a photoelectric alloy of cesium and antimony, amplified about a million times by a photomultiplier tube, and finally counted. Sensitive to X rays, gamma rays, and charged particles, scintillation counters permit high-speed counting of particles and measurement of the energy of incident radiation.
|
Digital Audio Networking Demystified
The OSI model helps bring order to the chaos of various digital audio network options.
Networking has been a source of frustration and confusion for pro AV professionals for decades. Fortunately, the International Organization for Standardization, more commonly referred to as ISO, created a framework in the early 1980s called the Open Systems Interconnection (OSI) Reference Model, a seven-layer framework that defines network functions, to help simplify matters.
Providing a common understanding of how to communicate to each layer, the OSI model (Fig. 1) is basically the foundation of what makes data networking work. Although it's not important for AV professionals to know the intricate details of each layer, it is vital to at least have a grasp of the purpose of each layer as well as general knowledge of the common protocols in each one. Let's take a look at some key points.
The Seven Layers
Starting from the bottom up, the seven layers of the OSI Reference Model are Physical, Data Link, Network, Transport, Session, Presentation, and Application. The Physical layer is just that — the hardware's physical connection and its electrical characteristics. The Data Link layer is the logical connection, defining the type of network. For example, the Data Link layer defines whether it is an Ethernet or Asynchronous Transfer Mode (ATM) network. There is also more than one data network transport protocol. The Data Link layer is divided into two sub-layers: the Media Access Control (MAC) and the Logical Link Control (above the MAC as you move up the OSI Reference Model).
Fig. 1. The seven layers of the Open Systems Interconnection (OSI) Reference Model for network functions.
Here is one concrete example of how the OSI model helps us understand networking technologies. Some people assume that any device with a CAT-5 cable connected to it is an Ethernet device. But it is Ethernet's Physical layer that defines an electrical specification and physical connection — CAT-5 terminated with an RJ-45 connector just happens to be one of them. For a technology to fully qualify as an Ethernet standard, it requires full implementation of both the Physical and Data Link layers.
The Network layer — the layer at which network routers operate — “packetizes” the data and provides routing information. The common protocol for this layer is the Internet Protocol (IP).
Layer four is the Transport layer. Keep in mind that this layer has a different meaning in the OSI Reference Model compared to how we use the term “transport” for moving audio around. The Transport layer provides protocols to determine the delivery method. The most popular layer four protocol is Transmission Control Protocol (TCP). Many discuss TCP/IP as one protocol, but actually they are two separate protocols on two different layers. TCP/IP is usually used as the data transport for file transfers or audio control applications.
Fig. 2. Comparison of four digital audio technologies using the OSI model as a framework.
TCP provides a scheme in which the receiver sends an acknowledgement message for each packet received from the sending device. If it senses that it is missing a packet of information, it will send a message back to the sender to resend it. This feature is great for applications that are not time-dependent, but it is not useful in real-time applications like audio and video.
Streaming media technologies most common on the Web use another method called User Datagram Protocol (UDP), which simply streams the packets. The sender never knows if it actually arrives or not. Professional audio applications have not used UDP because they are typically Physical layer or Data Link layer technologies — not Transport layer. However, a newcomer to professional audio networking, Australia-based Audinate, has recently become the first professional audio networking technology to use UDP/IP technology over Ethernet with its product called Dante.
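For readers unfamiliar with UDP, here is a minimal Python sketch of the idea; it is not Dante's or any vendor's actual wire format, and the destination address, port, and packet layout are assumptions made for illustration. The point is that a UDP sender simply ships timestamped packets and never waits for acknowledgements, so a lost packet is skipped rather than delaying all the audio behind it.

```python
# Minimal UDP "audio" sender sketch (illustrative packet layout, not a real protocol).
import socket
import struct
import time

DEST = ("192.0.2.10", 5004)   # assumed receiver address and port
SAMPLES_PER_PACKET = 64       # roughly 1.33 ms of one channel at 48 kHz
BYTES_PER_SAMPLE = 3          # 24-bit samples

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no connection, no ACKs

def send_packet(seq, payload):
    # A sequence number and capture timestamp let a receiver detect loss and
    # re-order packets, but nothing is ever resent.
    header = struct.pack("!IQ", seq, time.monotonic_ns())
    sock.sendto(header + payload, DEST)

# Example: send ten packets of silence.
for seq in range(10):
    send_packet(seq, bytes(SAMPLES_PER_PACKET * BYTES_PER_SAMPLE))
```

A TCP sender, by contrast, would hold back every later packet until a lost one had been retransmitted, which is exactly the behavior real-time audio cannot tolerate.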
The Session and Presentation layers are not commonly used in professional audio networks; therefore, they will not be covered in this article. Because these layers can be important to some integration projects, you may want to research the OSI model further to complete your understanding of this useful tool.
The purpose of the Application layer is to provide the interface tools that make networking useful. It is not used to move audio around the network. It controls, manages, and monitors audio devices on a network. Popular protocols are File Transfer Protocol (FTP), Telnet, Hypertext Transfer Protocol (HTTP), Domain Name System (DNS), and Virtual Private Network (VPN), to name just a few.
Now that you have a basic familiarity with the seven layers that make up the OSI model, let's dig a little deeper into the inner workings of a digital audio network.
Breaking Down Audio Networks
Audio networking can be broken into two main concepts: control and transport. Configuring, monitoring, and actual device control all fall into the control category and use several standard communication protocols. Intuitively, getting digital audio from here to there is the role of transport.
Control applications can be found in standard protocols of the Application layer. Application layer protocols that are found in audio are Telnet, HTTP, and Simple Network Management Protocol (SNMP). Telnet is short for TELetype NETwork and was one of the first Internet protocols. Telnet provides command-line style communication to a machine. One example of Telnet usage in audio is the Peavey MediaMatrix, which uses this technology, known as RATC, as a way to control MediaMatrix devices remotely.
SNMP is a protocol for monitoring devices on a network. There are several professional audio and video manufacturers that support this protocol, which provides a method for managing the status or health of devices on a network. SNMP is a key technology in Network Operation Center (NOC) monitoring. It is an Application layer protocol that communicates to devices on the network through UDP/IP protocols, which can be communicated over a variety of data transport technologies.
Control systems can be manufacturer-specific, such as Harman Pro's HiQnet, QSC Audio's QSControl, or third party such as Crestron's CresNet, where the control software communicates to audio devices through TCP/IP. In many cases, TCP/IP-based control can run on the same network as the audio signal transport, and some technologies (such as CobraNet and Dante) are designed to allow data traffic to coexist with audio traffic.
The organizing and managing of audio bits is the job of the audio Transport. This is usually done by the audio protocol. Aviom, CobraNet, and EtherSound are protocols that organize bits for transport on the network. The transport can be divided into two categories: logical and physical.
Purely physical layer technologies, such as Aviom, use hardware to organize and move digital bits. More often than not, a proprietary chip is used to organize and manage them. Ethernet-based technologies packetize the audio and send it to the Data Link and Physical layers to be transported on Ethernet devices. Ethernet is both a logical and physical technology that packetizes or “frames” the audio in the Data Link layer and sends it to the Physical layer to be moved to another device on the network. Ethernet's Physical layer also has a Physical layer chip, referred to as the PHY chip, which can be purchased from several manufacturers.
Comparing Digital Audio Systems
The more familiar you are with the OSI model, the easier it will be to understand the similarities and differences of the various digital audio systems. For many people, there is a tendency to gloss over the OSI model and just talk about networking-branded protocols. However, understanding the OSI model will bring clarity to your understanding of digital audio networking (Fig. 2).
Due to the integration of pro AV systems, true networking schemes are vitally important. A distinction must be made between audio networking and digital audio transports. Audio networks are defined as those meeting the commonly used standard protocols, where at least the Physical and Data Link layer technologies and standard network appliances (such as hubs and switches) can be used. There are several technologies that meet this requirement using IEEE 1394 (Firewire), Ethernet, and ATM technologies, to name a few. However, because Ethernet is widely deployed in applications ranging from large enterprises to the home, this will be the technology of focus. All other technologies that do not meet this definition will be considered digital audio transport systems, and not a digital audio network.
There are at least 15 schemes for digital audio transport systems and audio networking. Three of the four technologies presented here have been selected because of their wide acceptance in the industry, based on the number of manufacturers that support them.
Let's compare four CAT-5/Ethernet technologies: Aviom, EtherSound, CobraNet, and Dante. This is not to be considered a “shoot-out” between technologies but rather a discussion to gain understanding of some of the many digital system options available to the AV professional.
As previously noted, Aviom is a Physical layer–only technology based on the classifications outlined above. It does use an Ethernet PHY chip, but doesn't meet the electrical characteristics of Ethernet. Therefore, it cannot be connected to standard Ethernet hubs or switches. Aviom uses a proprietary chip to organize multiple channels of audio bits to be transported throughout a system, and it falls in the classification of a digital audio transport system.
EtherSound and CobraNet are both 802.3 Ethernet– compliant technologies that can be used on standard data Ethernet switches. There is some debate as to whether EtherSound technology can be considered a true Ethernet technology because it requires a dedicated network. EtherSound uses a proprietary scheme for network control, and CobraNet uses standard data networking methods. The key difference for both the AV and data professional is that EtherSound uses a dedicated network, and CobraNet does not. There are other differences that may be considered before choosing between CobraNet and EtherSound, but both are considered to be layer two (Data Link) technologies.
Dante uses Ethernet, but it is considered a layer four technology (Transport). It uses UDP for audio transport and IP for audio routing on an Ethernet transport, commonly referred to as UDP/IP over Ethernet.
At this point, you may be asking yourself: why does the audio industry have so many technologies? Why can't there be one standard like there is in the data industry?
The answer to the first question relates to time. Audio requires synchronous delivery of bits. Early Ethernet networks weren't much concerned with time. Ethernet is asynchronous, meaning there isn't a concern about when and how data arrives as long as it gets there. Therefore, putting digital audio on a data network requires a way to add a timing mechanism. Time is an issue in another sense, in that your options depend on the technology or market knowledge available at the time you develop your solution. When and how you develop your solution leads to the question of a single industry standard.
Many people don't realize that the data industry does in fact have more than one standard: Ethernet, ATM, FiberChannel, and SONET. Each layer of the OSI model has numerous protocols for different purposes. The key is that developers follow the OSI model as a framework for network functions and rules for communicating between them. If the developer wants to use Ethernet, he or she is required to have this technology follow the rules for communicating to the Data Link layer, as required by the Ethernet standard.
Because one of the key issues for audio involves time, it's important to use it wisely.
There are two types of time that we need to be concerned with in networking: clock time and latency. Clock time in this context is a timing mechanism that is broken down into measurable units, such as milliseconds. In digital audio systems, latency is the time between when audio, or a bit of audio, enters a system and when it comes out the other side. Latency has many causes, but arguably the root cause in audio networks is the design of the timing mechanism. In addition, there is a tradeoff between the timing method and bandwidth. A general rule of thumb is that as the resolution of the timing mechanism increases, so does the bandwidth required from the network.
Ethernet, being an asynchronous technology, requires a timing method to be added to support the synchronous nature of audio. The concepts and methodology of clocking networks for audio are key differences among the various technologies.
CobraNet uses a time mechanism called a beat packet. This packet is sent out in 1.33 millisecond intervals and communicates with CobraNet devices. Therefore, the latency of a CobraNet audio network can't be less than 1.33 milliseconds. CobraNet was introduced in 1995 when large-scale DSP-based digital systems started replacing analog designs in the market. Because the “sound system in a box” was new, there was great scrutiny of these systems. A delay or latency in some time-critical applications was noticed, considered to be a challenge of using digital systems. However, many believe that latency is an overly exaggerated issue in most applications where digital audio systems are deployed. In fact, this topic could be an article unto itself.
A little history of digital systems and networking will provide some insight into why there are several networking technologies available today. In the late '90s, there were two “critical” concerns in the digital audio industry: Year 2000 compliance (Y2K) and latency. To many audio pros, using audio networks like CobraNet seemed impossible because of the delay — at that time approximately 5 milliseconds, or in video terms, less time than a frame of video.
Enter EtherSound, introduced in 2001, which addressed the issue of latency by providing an Ethernet networking scheme with low latency and a better bit depth and higher sampling rate than CobraNet. The market timing and concern over latency gave EtherSound an excellent entry point. But since reducing latency down to 124 microseconds limits available bandwidth for data traffic, a dedicated network is required for a 100-MB EtherSound network. Later, to meet market demands for lower latency, CobraNet introduced variable latency, with 1.33 milliseconds being the minimum. For the Ethernet technologies discussed thus far, there is a relationship between bit depth and sample rate and the clocking system.
Audio is not the only industry with a need for real-time clocking schemes. Communications, military, and industrial applications also require multiple devices to be connected together on a network and function in real-time. A group was formed from these markets, and they took on the issue of real-time clocking while leveraging the widely deployed Ethernet technology. The outcome was the IEEE 1588 standard for a real-time clocking system for Ethernet networks in 2002.
As a late entry to the networking party, Audinate's Dante comes to the market with the advantage of using new technologies like IEEE 1588 to solve many of the current challenges in networking audio. Using this clocking technology in Ethernet allows Dante to provide sample-accurate timing and synchronization while achieving latency as low as 34 microseconds. Coming to the market later also has the benefit of Gigabit networking being widely supported, which provides the increased bandwidth required for ultra-low latency. It should be noted here that EtherSound does have a Gigabit version, and CobraNet does work on Gigabit infrastructure with added benefits, but it is currently a Fast Ethernet technology.
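To put those latency figures in perspective, here is a small back-of-the-envelope Python calculation converting them into audio-sample counts; the 48 kHz sample rate is an assumption chosen for the example, not a requirement of any of these technologies.

```python
# Back-of-the-envelope conversion of the quoted network latencies into sample
# counts at an assumed 48 kHz audio sample rate.
SAMPLE_RATE_HZ = 48_000

latencies_s = {
    "CobraNet (1.33 ms beat interval)": 1.33e-3,
    "EtherSound (124 microseconds)": 124e-6,
    "Dante (34 microseconds)": 34e-6,
}

for name, latency in latencies_s.items():
    samples = latency * SAMPLE_RATE_HZ
    print(f"{name}: {latency * 1e3:.3f} ms, about {samples:.1f} samples at 48 kHz")
```

At 48 kHz, CobraNet's 1.33-millisecond floor corresponds to roughly 64 samples of delay, EtherSound's 124 microseconds to about 6 samples, and Dante's 34 microseconds to under 2 samples, which helps explain why the lowest-latency schemes lean on Gigabit bandwidth.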
Dante provides a flexible solution to many of the current tradeoffs that favor one system over another due to design requirements of latency versus bandwidth, because Dante can support different latencies, bit depths, and sample rates in the same system. For example, this allows a user to assign low latency and higher bandwidth to in-ear monitoring while at the same time using a higher-latency assignment in areas where latency is less of a concern (such as front of house), thereby reducing the overall network bandwidth requirement.
The developers of CobraNet and Dante are both working toward advancing software so that AV professionals and end users can configure, route audio, and manage audio devices on a network. The goal is to make audio networks "plug-and-play" for those who don't want to know anything about networking technologies. One of the advances to note is called "device discovery," where the software finds all of the audio devices on the network so you don't have to configure them in advance. The software also has advanced features for those who want to dive into the details of their audio system.
Advances in digital audio systems and networking technologies will continue to evolve to meet market applications and their specific requirements. Aviom's initial focus was to create a personal monitoring system, and it developed a digital audio transport to better serve this application. Aviom's low-latency transport provided a solution to the market that made it the perfect transport for many live applications. CobraNet provides the AV professional with a solution to integrate audio, video, and data systems on an enterprise switched network. EtherSound came to the market by providing a low-latency audio transport using standard Ethernet 802.3 technology. Dante comes to the market after significant change and growth, leveraging Gigabit networking and new technologies like IEEE 1588 to solve many of the challenges of using Ethernet in real-time systems.
Networking audio and video can seem chaotic, but gaining an understanding of the OSI model helps bring order to the chaos. It not only provides an understanding of the various types of technology, but it also provides a common language to communicate for both AV and data professionals. Keeping it simple by using the OSI model as the foundation and breaking audio networking down into two functional parts (control and transport) will help you determine which networking technology will best suit your particular application.
Brent Harshbarger is the founder of m3tools located in Atlanta. He can be reached at [email protected].
|
"From the creators of Animachines, a playful way for kids to learn actions, animals, and opposites."
Opposites are everywhere in our world. A day at the park will reveal busy kids and active animals doing opposite actions. There, on the slide: Sam goes up as Hiroko zips down. And there, in the tree: one squirrel climbs up and another scurries down. Not far away, Daddy is quiet but Simon is LOUD, while a mother bird is silent as she feeds her chirping babies.
In Kids Do, Animals Too, ten pairs explain opposites to young children. Each pair features people and a different animal, including:
- fast, slow -- dogs
- in, out -- mice
- up, down -- squirrels
- on, off -- frogs
- ahead, behind -- ants
- quiet, loud -- birds
- under, over -- spiders
- wet, dry -- ducks
- toward, away -- butterflies
- asleep, awake -- squirrels, bats
The softly colored artwork features familiar park settings, and includes look-and-find details that cleverly connect each active scene. Meanwhile, the aptly crafted language keeps the action moving and the learning fun.
|
Here we present a chronological tour of dialysis from the beginning.
All photos by Jim Curtis; descriptions courtesy of Baxter.
The first practical artificial kidney was developed during World War II by the Dutch physician Willem Kolff. The Kolff kidney used a 20-meter long tube of cellophane sausage casing as a dialyzing membrane. The tube was wrapped around a slatted wooden drum. Powered by an electric motor, the drum revolved in a tank filled with dialyzing solution. The patient’s blood was drawn through the cellophane tubing by gravity as the drum revolved. Toxic molecules in the blood diffused through the tubing into the dialyzing solution. Complete dialysis took about six hours. The Kolff kidney effectively removed toxins from the blood, but because it operated at low pressure, it was unable to remove excess fluid from the patient’s blood. Modern dialysis machines are designed to filter out excess fluid while cleansing the blood of wastes.
Blood was drained from the patient into a sterile container. Anticlotting drugs were added, and the filled container was hung on a post above the artificial kidney and connected to the cellulose acetate tubing that was wound around the wooden drum. A motor turned the drum, pulling the blood through the tubing by gravity.
The tank underneath the drum was filled with dialyzing fluid. As the blood-filled tubing passed through this fluid, waste products from the blood diffused through the tubing into the dialyzing fluid. The cleansed blood collected in a second sterile container at the other end of the machine. When all of the blood had passed through the machine, this second container was raised to drain the blood back into the patient.
George Thorn, MD, of the Peter Bent Brigham Hospital in Boston, MA, invited Willem Kolff, MD, to meet with Carl Walter, MD, and John Merrill, MD, to redesign and modify the original Kolff Rotating Drum Kidney. The artificial kidney was to be used to support the first proposed transplant program in the United States. The device was built by Edward Olson, an engineer, who would produce over forty of these devices, which were shipped all over the world.
Cellulose acetate tubular membrane, the same type of membrane that is used as sausage casing, was wrapped around the drum and connected to latex tubing that would be attached to the patient’s bloodstream. The drum would be rotating in the dialyzing fluid bath that is located under the drum.
The patient’s blood was propelled through the device by the “Archimedes screw principle” and a pulsatile pump. A split coupling was developed to connect the tubing to the membrane, a component necessary to prevent the tubing and membrane from twisting. This connection is at the inlet and outlet of the rotating drum.
The membrane surface area could be adjusted by increasing or decreasing the number of wraps of tubing. The Plexiglas™ hood was designed to control the temperature of the blood. The cost of this device was $5,600 in 1950.
Murphy WP Jr., Swan RC Jr., Walter CW, Weller JM, Merrill JP. Use of an artificial kidney. III. Current procedures in clinical hemodialysis. J Lab Clin Med. 1952 Sep; 40(3): 436-44.
Leonard Skeggs, PhD, and Jack Leonards, MD, developed the first parallel flow artificial kidney at Case Western Reserve in Cleveland, OH. The artificial kidney was designed to have a low resistance to blood flow and to have an adjustable surface area.
Two sheets of membrane are sandwiched between two rubber pads in order to reduce the blood volume and to ensure uniform distribution of blood across the membrane to maximize efficiency. Multiple layers were utilized. The device required a great deal of time to construct and it often leaked. This was corrected by the use of bone wax to stop the leak.
The device had a very low resistance to blood flow and it could be used without a blood pump. If more than one of these units were used at a time, a blood pump was required. Skeggs was able to remove water from the blood in the artificial kidney by creating a siphon on the effluent of the dialyzing fluid. This appears to be the first reference to negative pressure dialysis.
This technology was later adapted by Leonard Skeggs to do blood chemistries. It was called the SMA 12-60 Autoanalyzer.
This artificial kidney was developed to reduce the amount of blood outside of the body and to eliminate the need for pumping the blood through the device.
Guarino used cellulose acetate tubing. The dialyzing fluid was directed inside the tubing, and the blood, entering the device from the top, cascaded down the membrane. The metal tubing inside the membrane gave support to the membrane.
The artificial kidney had a very low blood volume, but it had limited use because there was concern regarding the possibility of the dialyzing fluid leaking into the blood.
Von Garrelts had constructed a dialyzer in 1948 by wrapping a cellulose acetate membrane around a core. The layers of membrane were separated by rods. It was very bulky and weighed over 100 pounds.
William Inouye, MD, took this concept and miniaturized it by wrapping the cellulose acetate tubing around a beaker and separating the layers with fiberglass screening. He placed this “coil” in a Presto Pressure Cooker in order to enclose it and control the temperature. In addition, he made openings in the pot for the dialyzing fluid. With the use of a vacuum on the dialysate leaving the pot, he was able to draw the excess water out of the patient’s blood. A blood pump was required to overcome resistance within the device.
This device was used clinically and when it was used in a closed circuit, the exact amount of fluid removed could be determined.
Inouye WY, Engelberg J. A simplified artificial dialyzer and ultrafilter. Surg Forum. Proceedings of the Forum Sessions, Thirty-ninth Clinical Congress of the American College of Surgeons, Chicago, Illinois, October, 1953; 4: 438-42.
|
Protractors - Innovation
While only one patent model for a protractor survives in the Smithsonian collections—from an inventor with a colorful personal history—several of the other objects also provide examples of technical innovation. For instance, some are manufactured versions of patented inventions. Others were named for the person with whom they were associated, even if that engineer or craftsman laid no claim to designing that protractor.
"Protractors - Innovation" showing 1 items.
- This brass semicircular protractor is divided by single degrees and marked by tens from 10° to 90° to 10°. It is attached with metal screws to a set of brass parallel rules. Brass S-shaped hinges connect the rules to each other. The bottom left screw on the parallel rules does not attach to the bottom piece. A rectangular brass arm is screwed to the center of the protractor. A thin brass piece screwed to the arm is marked with a small arrow for pointing to the angle markings. The protractor is stored in a wooden case, which also contains a pair of metal dividers (5-1/4" long).
- The base of the protractor is signed: L. Dod, Newark. Lebbeus Dod (1739–1816) manufactured mathematical instruments in New Jersey and is credited with inventing the "parallel rule protractor." He served as a captain of artillery during the Revolutionary War, mainly by making muskets. His three sons, Stephen (1770–1855), Abner (1772–1847), and Daniel (1778–1823), were also noted instrument and clock makers. The family was most associated with Mendham, N.J. (where a historic marker on N.J. Route 24 indicates Dod's house), but Dod is known to have also lived at various times in Newark.
- ID number MA*310890 is a similar protractor and parallel rule.
- References: Bethuel Lewis Dodd and John Robertson Burnet, "Biographical Sketch of Lebbeus Dod," in Genealogies of the Male Descendants of Daniel Dod . . . 1646–1863 (Newark, N.J., 1864), 144–147; Alexander Farnham, "More Information About New Jersey Toolmakers," The Tool Shed, no. 120 (February 2002), http://www.craftsofnj.org/Newjerseytools/Alex%20Farnham%20more%20Jeraey%20Tools/Alex%20Farnham.htm; Deborah J. Warner, “Surveyor's Compass,” National Museum of American History Physical Sciences Collection: Surveying and Geodesy, http://americanhistory.si.edu/collections/surveying/object.cfm?recordnumber=747113; Peggy A. Kidwell, "American Parallel Rules: Invention on the Fringes of Industry," Rittenhouse 10, no. 39 (1996): 90–96.
- date made: late 1700s
- maker: Dod, Lebbeus
- Data Source: National Museum of American History, Kenneth E. Behring Center
|
Simple technique appears to be safe and effective, review suggests
By Robert Preidt
MONDAY, Oct. 15 (HealthDay News) -- A technique called the "mother's kiss" is a safe and effective way to remove foreign objects from the nostrils of young children, according to British researchers.
In the mother's kiss, a child's mother or other trusted adult covers the child's mouth with their mouth to form a seal, blocks the clear nostril with their finger, and then blows into the child's mouth. The pressure from the breath may expel the object in the blocked nostril.
Before using it, the adult should explain the technique to the child so that he or she is not frightened. If the first attempt is unsuccessful, the technique can be tried several times, according to a review published in the current issue of the CMAJ (Canadian Medical Association Journal).
For their report, researchers in Australia and the United Kingdom examined eight case studies in which the mother's kiss was used on children aged 1 to 8 years.
"The mother's kiss appears to be a safe and effective technique for first-line treatment in the removal of a foreign body from the nasal cavity," Dr. Stephanie Cook, of the Buxted Medical Centre in England, and colleagues concluded. "In addition, it may prevent the need for general anesthesia in some cases."
Further research is needed to compare various positive-pressure techniques and test how effective they are in different situations where objects are in various locations and have spent different lengths of time in the nasal passages, the researchers noted in a journal news release.
The U.S. National Library of Medicine has more about foreign objects in the nose.
SOURCE: CMAJ (Canadian Medical Association Journal), news release, Oct. 15, 2012
|
The Atlas of Climate Change: Mapping the World's Greatest Challenge
University of California Press, 2007 - Science - 112 pages
Today's headlines and recent events reflect the gravity of climate change. Heat waves, droughts, and floods are bringing death to vulnerable populations, destroying livelihoods, and driving people from their homes.
Rigorous in its science and insightful in its message, this atlas examines the causes of climate change and considers its possible impact on subsistence, water resources, ecosystems, biodiversity, health, coastal megacities, and cultural treasures. It reviews historical contributions to greenhouse gas levels, progress in meeting international commitments, and local efforts to meet the challenge of climate change.
With more than 50 full-color maps and graphics, this is an essential resource for policy makers, environmentalists, students, and everyone concerned with this pressing subject.
The Atlas covers a wide range of topics, including:
* Warning signs
* Future scenarios
* Vulnerable populations
* Renewable energy
* Emissions reduction
* Personal and public action
Copub: Myriad Editions
|
"One death is a tragedy. A million deaths is a statistic." -Joseph Stalin
When figuring out how people will respond to a foreign tragedy, it comes down to three things: location, location, location.
And TV cameras, too. The September 11, 2001 homicide attacks killed about 3,000 people, yet the event has had more impact on American politics and foreign policy than anything since World War II. And to the great extent that American foreign policy impacts the rest of the world, it had a huge impact on international affairs as well.
While 3,000 is a pretty big death toll for a single incident, there have been other wars and attacks with greater loss of life that had a relatively minuscule influence on American or international affairs. Why? Because those attacks didn't occur in the heart of New York City. The international response would've been significantly less if the attack had been launched in Kathmandu, Bogota or Algiers (in countries with homegrown terrorist problems).
The Asian tsunami of 2004 had a devastating effect, costing an estimated 283,000 lives and displacing over a million people. It generated an international response that was probably unprecedented in scale. As someone who regularly reads articles on underfunded international crisis appeals, I was heartened by the response to the tsunami. That it hit easily accessible coastal regions, including many tourist areas, made it easier for TV crews to get images. That Europeans and Americans were amongst the victims, if a tiny fraction, ensured that it got coverage in the western media.
But if I told you there was a conflict that has cost almost 15 times as many lives as the tsunami, could you name that crisis? If I told you there was a crisis that, in mortality terms, was the equivalent of three 9/11s every week for the last 7 years, would you know which one I'm talking about?
I bet few westerners could, even though it's by far the deadliest conflict of the last 60 years.
The war in the Democratic Republic of the Congo (formerly Zaire) is killing an estimated 38,000 people each month, according to the British medical journal The Lancet. And if not for the involvement of humanitarian non-governmental organizations and UN relief agencies, the toll would be much higher. Most of the deaths are not caused by violence but by malnutrition and preventable diseases after the collapse of health services, the study said, notes the BBC. Since the war began in 1998, some 4m people have died, making it the world's most deadly war since 1945, it said.
A peace deal has ended most of the fighting but armed gangs continue to roam the east, killing and looting.
The political process in the DRC is slowly inching in the right direction. Voters in the country recently approved a new constitution, to replace the one imposed on it by the outgoing Belgian colonialists. EU officials praised the referendum as free and fair, probably the first truly open poll in the country's history. Elections are scheduled for June of this year.
However, instability reigns in much of the country, particularly the east, and central government has never been strong throughout the entirety of this gigantic country. There are 17,000 UN peacekeepers doing the best they can, but the country is the size of Western Europe. (By contrast, the Americans and British have ten times as many troops in Iraq, a country that's less than 1/5 the size of the DRC. And we know how many problems they're having there.)
And this shows why war should ALWAYS be a last resort. Most of the deaths have not been directly caused by war (bullet wounds, landmines, etc). Most of the deaths have been caused by factors provoked by war's instability and destruction. The destruction of all infrastructure like roads and medical clinics. The inability to get to sources of clean water. The fear of leaving the house to tend the fields or go to the market.
38,000 people a month. If you get pissed off at Howard Dean or Pat Robertson, spare a little outrage for this.
And maybe a few bucks.
WANNA HELP? TAKE YOUR PICK
-Doctors Without Borders
-World Food Program
-Catholic Relief Services
|
Sedimentary rock covers 70% of the Earth's surface. Erosion is constantly changing the face of the Earth. Weathering agents (wind, water, and ice) break rock into smaller pieces that flow down waterways until they settle to the bottom permanently. These sediments (pebbles, sand, clay, and gravel) pile up and form new layers. After hundreds or thousands of years, these layers become pressed together to form sedimentary rock.
Sedimentary rock can form in two different ways. When layer after layer of sediment forms it puts pressure on the lower layers which then form into a solid piece of rock. The other way is called cementing. Certain minerals in the water interact to form a bond between rocks. This process is similar to making modern cement. Any animal carcasses or organisms that are caught in the layers of sediment will eventually turn into fossils. Sedimentary rock is the source of quite a few of our dinosaur findings.
There are four common types of sedimentary rock: sandstone, limestone, shale, and conglomerate. Each is formed in a different way from different materials. Sandstone is formed when grains of sand are pressed together. Sandstone may be the most common type of rock on the planet. Limestone is formed by the tiny pieces of shell that have been cemented together over the years. Conglomerate rock consists of sand and pebbles that have been cemented together. Shale forms under still waters like those found in bogs or swamps. The mud and clay at the bottom is pressed together to form it.
Sedimentary rock has the following general characteristics:
- it is classified by texture and composition
- it often contains fossils
- it occasionally reacts with acid
- it has layers that can be flat or curved
- it is usually composed of material that is cemented or pressed together
- it comes in a great variety of colors
- its particle size varies
- it has pores between pieces
- it can show cross bedding, worm holes, mud cracks, and raindrop impressions
This is only meant to be a brief introduction to sedimentary rock. There are many more in-depth articles and entire books that have been written on the subject. Here is a link to a very interesting introduction to rocks. Here on Universe Today there is a great article on how sedimentary rocks show very old signs of life. Astronomy Cast has a good episode on the Earth’s formation.
|
by George Heymann | @techeadlines
Google has started a new page on Google Plus to share its vision of what its augmented reality glasses could be. The company is soliciting suggestions from users on what they would like to see from Project Glass.
“We think technology should work for you—to be there when you need it and get out of your way when you don’t.
A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment. We’re sharing this information now because we want to start a conversation and learn from your valuable input. So we took a few design photos to show what this technology could look like and created a video to demonstrate what it might enable you to do.”
Video after the break.
Filed under: Android, General technology, Google, Media, Services, Augmented reality, eyeglasses, feature, featured, glasses, Google, Google Plus, Google Project Gass, google virtual reality glasses, New York Times, Project Glass, technology, turn by turn directions, UI, User interface, video chat, Virtual reality, Virtual reality glasses
January 18, 2012 • 11:07 am
$3 million grant from the Bill & Melinda Gates Foundation will fund development
by Eric Klopfer
MIT Education Arcade
With a new $3 million grant from the Bill & Melinda Gates Foundation, the MIT Education Arcade is about to design, build and research a massively multiplayer online game (MMOG) to help high school students learn math and biology.
In contrast to the way that Science, Technology, Engineering and Math (STEM) are currently taught in secondary schools — which often results in students becoming disengaged and disinterested in the subjects at an early age — educational games such as the one to be developed give students the chance to explore STEM topics in a way that deepens their knowledge while also developing 21st-century skills.
Filed under: Gaming, Services, Software, 3 million dollar grant, Associate Professor, Augmented reality, Bill & Melinda Gates Foundation, Common Core, Common Core standards, Education, education arcade, education program, Eric Klopfer, Filament Games, game to help teach math and science, High school, Math, mit, MIT education arcade, MMOG, Next generation science, StarLogo TNG, STEM, Washington, Washington D.C.
by George Heymann
No sooner had Blackberry made its official Blackberry Bold 9900 announcement than T-Mobile tweeted that it would be carrying the device. The 9900 will be T-Mobile’s first 4G-capable Blackberry. It is rumored to be available on T-Mobile in the June/July timeframe.
The Blackberry Bold 9900/9930 will be a touch screen device with a 1.2 GHz processor, 8 GB of onboard memory, expandable to 32GB, HSPA+ 14.4 capable, 5 megapixel camera with flash, 720p HD video recording, dual-band WiFi, a built-in compass (magnetometer) and Near Field Communication (NFC) technology featuring the new Blackberry 7 OS.
Press release after the break
Filed under: Blackberry, Hardware, 4G, Augmented reality, BlackBerry, Blackberry Bold 9900, Blackberry Bold 9930, Evolution-Data Optimized, Evolved HSPA, Hertz, High-definition video, HSPA+, Research In Motion, T-Mobile, Touchscreen, Wi-Fi
|
Culinary arts is the art of preparing and cooking foods. The word "culinary" is defined as something related to, or connected with, cooking. A culinarian is a person working in the culinary arts. A culinarian working in restaurants is commonly known as a cook or a chef. Culinary artists are responsible for skilfully preparing meals that are as pleasing to the palate as to the eye. They are required to have a knowledge of the science of food and an understanding of diet and nutrition. They work primarily in restaurants, delicatessens, hospitals and other institutions. Kitchen conditions vary depending on the type of business: restaurant, nursing home, etc. The table arts, or the art of having food, can also be called "culinary arts".
Careers in culinary arts
Related careers
Below is a list of the wide variety of culinary arts occupations.
- Consulting and Design Specialists – Work with restaurant owners in developing menus, the layout and design of dining rooms, and service protocols.
- Dining Room Service – Manage restaurants, cafeterias, clubs, etc. Diplomas and degree programs are offered in restaurant management by colleges around the world.
- Food and Beverage Controller – Purchase and source ingredients in large hotels as well as manage the stores and stock control.
- Entrepreneurship – Develop and invest in businesses, such as bakeries, restaurants, or specialty foods (such as chocolates, cheese, etc.).
- Food and Beverage Managers – Manage all food and beverage outlets in hotels and other large establishments.
- Food Stylists and Photographers – Work with magazines, books, catalogs and other media to make food visually appealing.
- Food Writers and Food Critics – Communicate with the public on food trends, chefs and restaurants through newspapers, magazines, blogs, and books. Notables in this field include Julia Child, Craig Claiborne and James Beard.
- Research and Development Kitchens – Develop new products for commercial manufacturers and may also work in test kitchens for publications, restaurant chains, grocery chains, or others.
- Sales – Introduce chefs and business owners to new products and equipment relevant to food production and service.
- Instructors – Teach aspects of culinary arts in high school, vocational schools, colleges, recreational programs, and for specialty businesses (for example, the professional and recreational courses in baking at King Arthur Flour).
Occupational outlook
The occupational outlook for chefs, restaurant managers, dietitians, and nutritionists is fairly good, with "as fast as the average" growth. Increasingly, a college education with formal qualifications is required for success in this field. The culinary industry continues to be male-dominated, with the latest statistics showing only 19% of all 'chefs and head cooks' being female.
Notable culinary colleges around the world
- JaganNath Institute of Management Sciences, Rohini, Delhi, India
- College of Tourism & Hotel Management, Lahore, Punjab, Pakistan
- Culinary Academy of India, Hyderabad, Andhra Pradesh, India
- ITM School of Culinary Arts, Mumbai, Maharashtra, India
- Welcomgroup Graduate School of Hotel Administration, Manipal, Karnataka, India
- Institute of Technical Education (College West) – School of Hospitality, Singapore
- ITM (Institute of Technology and Management) – Institute of Hotel Management, Bangalore, Karnataka, India
- Apicius International School of Hospitality, Florence, Italy
- Le Cordon Bleu, Paris, France
- École des trois gourmandes, Paris, France
- HRC Culinary Academy, Bulgaria
- Institut Paul Bocuse, Ecully, France
- Mutfak Sanatlari Akademisi, Istanbul, Turkey
- School of Culinary Arts and Food Technology, DIT, Dublin, Ireland
- Scuola di Arte Culinaria Cordon Bleu, Florence, Italy
- Westminster Kingsway College (London)
- University of West London (London)
- School of Restaurant and Culinary Arts, Umeå University (Sweden)
- Camosun College (Victoria, BC)
- Canadore College (North Bay, ON)
- The Culinary Institute of Canada (Charlottetown, PE)
- Georgian College (Owen Sound, ON)
- George Brown College (Toronto, ON)
- Humber College (Toronto, ON)
- Institut de tourisme et d'hôtellerie du Québec (Montreal, QC)
- Niagara Culinary Institute (Niagara College, Niagara-on-the-Lake, ON)
- Northwest Culinary Academy of Vancouver (Vancouver, BC)
- Nova Scotia Community College (Nova Scotia)
- Pacific Institute of Culinary Arts (Vancouver, BC)
- Vancouver Community College (Vancouver, BC)
- Culinary Institute of Vancouver Island (Nanaimo, BC)
- Sault College (Sault Ste. Marie, ON)
- Baltimore International College, Baltimore, Maryland
- California Culinary Academy, San Francisco, California
- California School of Culinary Arts, Pasadena, California
- California State, Pomona, California
- California State University Hospitality Management Education Initiative
- Chattahoochee Technical College in Marietta, Georgia
- Cooking and Hospitality Institute of Chicago
- Coosa Valley Technical College, Rome, Georgia
- Culinard, the Culinary Institute of Virginia College
- Cypress Community College Hotel, Restaurant Management, & Culinary Arts Program in Anaheim
- Classic Cooking Academy, Scottsdale, Arizona
- Center for Kosher Culinary Arts, Brooklyn, New York
- Culinary Institute of America in Hyde Park, New York
- Culinary Institute of America at Greystone in St. Helena, California
- The Culinary Institute of Charleston, South Carolina
- L'Ecole Culinaire in Saint Louis, Missouri and Memphis, Tennessee
- Glendale Community College (California)
- International Culinary Center campuses in NY and CA
- Institute for the Culinary Arts at Metropolitan Community College, Omaha, Nebraska
- Johnson & Wales University, College of Culinary Arts
- Kendall College in Chicago, Illinois
- Lincoln College of Technology
- Manchester Community College in Connecticut
- New England Culinary Institute in Vermont
- Orlando Culinary Academy
- Pennsylvania Culinary Institute
- The Restaurant School at Walnut Hill College, Philadelphia, Pennsylvania,
- Scottsdale Culinary Institute
- Secchia Institute for Culinary Education: Grand Rapids Community College, Grand Rapids, MI
- The Southeast Culinary and Hospitality College in Bristol, Virginia
- Sullivan University Louisville, Kentucky
- Los Angeles Trade–Technical College
- Texas Culinary Academy
- Central New Mexico Community College, Albuquerque, NM
- AUT University (Auckland University of Technology)
- MIT (Manukau Institute of Technology)
- Wintec, Waikato Institute of Technology
References
- McBride, Kate, ed. The Professional Chef / The Culinary Institute of America. 8th ed. Hoboken, NJ: John Wiley & Sons, Inc., 2006.
Further reading
- Beal, Eileen. Choosing a career in the restaurant industry. New York: Rosen Pub. Group, 1997.
- Institute for Research. Careers and jobs in the restaurant business: jobs, management, ownership. Chicago: The Institute, 1977.
|
With the arrival of cold weather, the Occupational Safety and Health Administration is reminding employers to take necessary precautions to protect workers from the serious, and sometimes fatal, effects of carbon monoxide exposure.
Recently, a worker in a New England warehouse was found unconscious and seizing, suffering from carbon monoxide poisoning. Several other workers at the site also became sick. All of the windows and doors were closed to conserve heat, there was no exhaust ventilation in the facility, and very high levels of carbon monoxide were measured at the site.
Every year, workers die from carbon monoxide poisoning, usually while using fuel-burning equipment and tools in buildings or semi-enclosed spaces without adequate ventilation. This can be especially true during the winter months when employees use this type of equipment in indoor spaces that have been sealed tightly to block out cold temperatures and wind. Symptoms of carbon monoxide exposure can include everything from headaches, dizziness and drowsiness to nausea, vomiting or tightness across the chest. Severe carbon monoxide poisoning can cause neurological damage, coma and death.
Sources of carbon monoxide can include anything that uses combustion to operate, such as gas generators, power tools, compressors, pumps, welding equipment, space heaters and furnaces.
To reduce the risk of carbon monoxide poisoning in the workplace, employers should install an effective ventilation system, avoid the use of fuel-burning equipment in enclosed or partially-enclosed spaces, use carbon monoxide detectors in areas where the hazard is a concern and take other precautions outlined in OSHA's Carbon Monoxide Fact Sheet. For additional information on carbon monoxide poisoning and preventing exposure in the workplace, see OSHA's Carbon Monoxide Poisoning Quick Cards (in English and Spanish).
Under the Occupational Safety and Health Act of 1970, employers are responsible for providing safe and healthful workplaces for their employees. OSHA's role is to ensure these conditions for America's working men and women by setting and enforcing standards, and providing training, education and assistance. For more information, visit www.osha.gov.
|
Sitting around a table at Meyers Center BOCES in Saratoga, students, teaching assistants, and teachers were busy crocheting. They weren’t making afghans or shawls; rather, they were turning plastic into possibility. Little did they know they were also making history.
Profiles of 17 human rights defenders from around the globe, with links to accompanying lesson plans.
Browse the curriculum. Download the complete 158-page guide or individual lesson plans.
Speak Truth to Power: Voices from Beyond the Dark brings the voices of human rights defenders into your classroom.
Find out how you can support your students in their efforts to defend human rights.
The Speak Truth to Power project will enter the video documentary world with a contest challenging middle and high school students in New York to create a film based on the experience of courageous human rights defenders around the world.
It is with great sadness that the family of Professor Wangari Maathai announces her passing away on 25th September, 2011, at the Nairobi Hospital, after a prolonged and bravely borne struggle with cancer.
New to the STTP site this month is a lesson plan based on the work of Congressman John Lewis, who has dedicated his life to protecting human rights, securing civil liberties, and building what he described as “The Beloved Community” in America.
|
The Ancient Forests of North America are extremely diverse. They include the boreal forest belt stretching between Newfoundland and Alaska, the coastal temperate rainforest of Alaska and Western Canada, and the myriad of residual pockets of temperate forest surviving in more remote regions.
Together, these forests store huge amounts of carbon, helping to stabilise the climate. They also provide a refuge for large mammals such as the grizzly bear, puma and grey wolf, which once ranged widely across the continent.
In Canada it is estimated that ancient forest provides habitat for about two-thirds of the country's 140,000 species of plants, animals and microorganisms. Many of these species are yet to be studied by science.
The Ancient Forests of North America also provide livelihoods for thousands of indigenous people, such as the Eyak and Chugach people of Southcentral Alaska, and the Hupa and Yurok of Northern California.
Of Canada's one million indigenous people (First Nation, Inuit and Métis), almost 80 percent live in reserves and communities in boreal or temperate forests, where historically the forest provided their food and shelter, and shaped their way of life.
Through the Trees - The truth behind logging in Canada (PDF)
On the Greenpeace Canada website:
Interactive map of Canada's Boreal forest (Flash)
Fun animation that graphically illustrates the problem (Flash)
Defending America's Ancient Forests
|
This week’s illustration post focuses on perhaps the most popular illustrated genre of the Renaissance: the emblem book. Emblem books were a genre developed in the early 16th century as digestible and curious works which combined an image, a motto and an explanatory text. This genre remained popular on the continent well into the 17th century, although, strangely, it never found any real footing in the U.K. The illustrations were most commonly woodcuts and the mottoes Latin; however, Greek and vernacular mottoes were common as well.
The intention of most emblem books was to deliver a moral lesson through text and image, but often the connection between these two elements is obscure. Many projects have developed over the last decade to provide digital repositories for these books, and they are worth exploring (Emblematica Online [University of Illinois and Herzog August Bibliothek], the University of Glasgow and the Emblem Project Utrecht are good examples).
Although St Andrews 17th century holdings remain largely unexplored, I did come across a wonderful example of a Spanish emblem book published at the height of the genre’s popularity. This week’s post features the emblems from Sebastián de Covarrubias y Orozco’s Emblemas morales (1610). This work was published under the direction of Don Francisco Gómez de Sandoval, 1st Duke of Lerma, shortly after Covarrubias had recovered from a serious illness. This is perhaps Covarrubias’s most popular publication on emblems, but he is most well-known for his Tesoro de la lengua castellana o española (1611).
From the beginning of this book it is quite obvious that the author’s roots, both as a canon in the Catholic Church and as a keen lexicographer, are influential. However, Covarrubias does stray off into the weirder and more esoteric world of the emblem books of the day: dragons, serpents, snake-eating deer and oversized tops (above, which look like the ship from Flight of the Navigator to this blogger). Covarrubias also draws on everyday scenes of fishermen and farmers toiling in the field to root this emblem book in the real world.
I’ve sampled here some of my favourites from this book; however, a full scan is available at Emblematica Online. This work was not Covarrubias’s first foray into the world of emblem books: in 1589 he published a work, also entitled Emblemas morales, which provides almost 100 pages’ worth of text about emblems and their origins and then provides 100 examples afterward (none of which are repeated in his 1610 work). This work was republished in 1604 with the same text but with new woodcuts.
Covarrubias’s 1610 Emblemas morales, however, is a completely new work, featuring new emblems and mottoes with shorter verse explanations.
|
The latest weapon unleashed to battle California's growing energy demand comes in the form of free software. The State of California Energy Commission's Public Interest Energy Research (PIER) program is responsible for the development of a free software package dubbed the Sensor Placement and Orientation Tool (SPOT). The purpose of this software is to help designers establish correct photosensor placement relative to a given daylighting and electric lighting design.
Daylighting systems, which use natural lighting to supplement electric lighting, are sensitive to photosensor placement and performance. However, until now, there have been no easy-to-use tools to help designers predict performance and determine optimum sensor positioning. Using an Excel worksheet interface, SPOT accounts for variables such as room geometry, surface reflectances, solar orientation, electric lighting layout, and window design to help determine the best location for photosensors. It also helps designers comply with the daylighting requirements in California's Title 24 energy code, which calls for separate controls for daylit areas and offers substantial energy budget credits for automatic daylighting controls.
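SPOT's actual calculations are much richer than anything shown here, but the core placement criterion it supports can be sketched conceptually: among candidate ceiling positions, prefer the one whose photosensor signal tracks daylight at the workplane most closely across changing sky conditions. The Python sketch below is an illustration of that idea only; the candidate positions, signal model and numbers are invented assumptions, not SPOT's model or data.

```python
# Conceptual sketch of a photosensor-placement criterion: pick the candidate
# ceiling position whose reading best tracks daylight at the workplane.
# All positions, gains and noise levels below are invented for illustration;
# SPOT's real model (geometry, reflectances, solar orientation) is far richer.
import numpy as np

rng = np.random.default_rng(1)
hours = np.linspace(6, 18, 25)  # one day of sky conditions, 6:00 to 18:00
workplane_daylight = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 500  # lux

candidates = {
    # position: simulated sensor signal = gain * daylight + measurement noise
    "near window": workplane_daylight * 2.0 + rng.normal(0, 40, hours.size),
    "mid room":    workplane_daylight * 1.0 + rng.normal(0, 15, hours.size),
    "back corner": workplane_daylight * 0.2 + rng.normal(0, 30, hours.size),
}

def tracking_score(sensor_signal: np.ndarray) -> float:
    """Correlation between the sensor reading and workplane daylight."""
    return float(np.corrcoef(sensor_signal, workplane_daylight)[0, 1])

best = max(candidates, key=lambda name: tracking_score(candidates[name]))
for name, signal in candidates.items():
    print(f"{name:12s} correlation = {tracking_score(signal):.2f}")
print("suggested placement:", best)
```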
This software may be downloaded for free at www.archenergy.com/SPOT/index.html.
|
The Handel and Haydn Society is a chorus and period instrument orchestra in the city of Boston, Massachusetts. Founded in 1815, it is one of the oldest performing arts organizations in the USA. Most widely known for its performances of George Frideric Handel's Messiah, the group gave its American premiere in 1818 and has performed the piece annually since 1854.
The Handel and Haydn Society was founded as an oratorio society in Boston on April 20, 1815, by Gottlieb Graupner, Thomas Smith Webb, Amasa Winchester, and Matthew S. Parker, a group of Boston merchants and musicians who were eager to improve the performance of choral music in a city that, at the time, offered very little music of any kind. The name of the Society reflects the founders' wish to bring Boston audiences the best of the old (G.F. Handel) and the best of the new (Haydn) in concerts of the highest artistic quality. The first performance by the Society was held on Christmas night in 1815 at King's Chapel, and included a chorus of 90 men and 10 women.
From its earliest years, the Handel and Haydn Society established a tradition of innovation, performing the American premieres of G.F. Handel's Messiah in 1818, Haydn's The Creation in 1819, Verdi's Requiem in 1878, Amy Beach's Mass in 1892, and numerous other works by G.F. Handel, Mozart, J.S. Bach, and others.
The Society was also an early promoter of composer Lowell Mason, publishing his first collection of hymns and later electing him as the group's President. Mason's music was extremely influential and much of it is still performed today. He is best known for composing the music for the popular carol, Joy to the World. Mason was also instrumental in establishing music education in the USA.
Throughout the 19th and 20th centuries, Handel and Haydn staged music festivals to commemorate its own anniversaries and such significant events as the end of the Civil War. The Society organized America's first great music festival in 1857, and in later years gave benefit concerts to aid the Union Army, victims of the Chicago fire in 1871, and Russian Jewish refugees in 1882. Over the years, the Handel and Haydn Society has performed for such luminaries as President James Monroe, Grand Duke Alexis of Russia, Admiral Dewey, and Queen Elizabeth II.
By the mid 20th century, the Handel and Haydn Society had begun to move toward vocal and instrumental authenticity. In 1967, an acknowledged expert in Baroque performance practice, Thomas Dunn, became the Society's Artistic Director and transformed the group's large amateur chorus into one of approximately 30 professional singers. In 1986, Christopher Hogwood succeeded Thomas Dunn as Artistic Director and added period-instrument performances and a new verve to the high choral standards of the Society. In October 1986, Handel and Haydn presented its first period instrument orchestra concert under Christopher Hogwood's baton, and by the 1989-1990 season all of the Society's concerts were performed on period instruments. The Society has remained committed to historically informed performance following the end of Christopher Hogwood's tenure as Artistic Director in the spring of 2001.
Handel and Haydn Society announced the appointment of Harry Christophers as Artistic Director on September 26, 2008. Harry Christophers, a regular guest conductor of the Society, began his tenure as Artistic Director with the 2009-2010 season and is the organization's thirteenth artistic leader since its founding in 1815. The initial term of Harry Christophers' contract with the Society extends through the 2011-2012 season.
Harry Christophers has conducted the Handel and Haydn Society each season since his first appearance in September 2006, when he led a sold-out performance in the Esterházy Palace at the Haydn Festival in Eisenstadt, Austria. Held in the same location where Haydn lived and worked for nearly 40 years, this Austrian appearance marked the Society's first in Europe in its then 191-year history. Harry Christophers returned to conduct the Society in Boston in a critically acclaimed performance of G.F. Handel's Messiah in December 2007, followed by an appearance at Symphony Hall in January 2008. Founder and Music Director of the renowned UK-based choir and period-instrument orchestra, The Sixteen, he is also in demand as a guest conductor for leading orchestras and opera companies worldwide and in the USA.
Welsh conductor Grant Llewellyn joined Handel and Haydn in the 2001-2002 season as Music Director. Grant Llewellyn did not have a background in period-instrument performance prior to joining the Society, but has won wide acclaim from critics and musicians for his energetic and compelling conducting. He has been noted for his charming personality and for his ability to produce exceptional performances from the Society's musicians.
During his tenure as Music Director, the Society produced several recordings that have met with considerable commercial success, including Peace and All Is Bright, which both appeared on Billboard Magazine's Classical Top 10 chart. Handel and Haydn Society was also awarded its first Grammy Award for a collaboration with the San Francisco choral ensemble Chanticleer for the 2003 recording of Sir John Tavener's Lamentations and Praises.
The Society also entered into a multi-year relationship with Chinese director Chen Shi-Zheng starting in 2003. This has yielded fully-staged productions of Monteverdi's Vespers (in 2003) and Orfeo (in 2006) that Chen sees as the start of a cycle of Monteverdi's surviving operas and his Vespers. The 2006 Orfeo was co-produced by the English National Opera. Chen also directed a production of Purcell's Dido and Aeneas in 2005 for Handel and Haydn. Grant Llewellyn concluded his tenure in 2006.
In July 2007, the ensemble made a historic appearance at London's Royal Albert Hall as part of the BBC Proms concert series, presenting Haydn's oratorio Die Jahreszeiten (The Seasons), with Sir Roger Norrington conducting.
|
Capital and oldest city of the kingdom of Navarra, Spain. Next to Tudela, it possessed the most important Jewish community. The Jewry was situated in the Navarreria, the oldest quarter of the city. When Navarra came under the guardianship of Philip the Fair, and the Pamplonians refused to pay him homage, the Jewry was destroyed by the French troops, the houses were plundered, and many Jews were killed (1277). In 1280, upon the complaint of the Jews, the city was directed to restore to them the confiscated properties and to assign to them other ground for building purposes. In 1319 the city council, in conjunction with the bishop, to whom the Jews were tributary, had resolved, in compliance with the wish of King Charles I., to rebuild the Jewry; but this was not done until 1336.
The new Jewry was near the Puente de la Magdalena, and was surrounded with strong walls to guard it against invasion. In the Jewry was the Alcaceria, where the Jews carried on considerable traffic in silk goods, while in a separate street were stores in which they sold jewelry, shoes, etc. Some of the Jews were artisans, and were employed by the royal court; others practised medicine. The physician Samuel, in recognition of his services as surgeon to the English knight Thomas Trivet, was presented by King Charles in 1389 with several houses situated in the Jewry and which had formerly been in the possession of Bonafos and his son Sento, two jugglers. In 1433 the physician Maestre Jacob Aboazar, who had his house near the Great Synagogue, accompanied the queen on a journey abroad. Contemporary with him was the physician Juce (Joseph).
In 1375 the Jews of Pamplona numbered about 220 families, and paid a yearly tax of 2,592 pounds to the king alone. They had, as in Estella and Tudela, their independent magistracy, consisting of two presidents and twenty representatives. Gradually the taxes became so burdensome that they could no longer be borne. In 1407 King Charles III. issued an order that the movable property of the Jews should be sold at auction, and the most notable members placed under arrest, unless they paid the tax due to him. To escape these frequent vexations many of the Jews resolved to emigrate; and a part of the Jewry was thus left uninhabited. No sooner had Leonora ascended the throne as coregent (1469) than she issued an order to the city magistrate to require the Jews to repair the dilapidated houses.
The policy of Ferdinand and Isabella triumphed in the Cortes of Pamplona in 1496. Two years later the Jews were expelled from Navarra. Many emigrated; and those who were unable to leave the city embraced Christianity. Ḥayyim Galipapa was rabbi of Pamplona in the fourteenth century; and the scholar Samuel b. Moses Abbas was a resident of the city.
- Kayserling, Gesch. der Juden in Spanien, i. 34, 43. 73, 93, 105 et seq.;
- Rios, Hist. ii. 452, iii. 200;
- Jacobs, Sources, s.v. Pamplona.
|
Most exercise-related injuries have the same basic cause - the overstressing of muscles, tendons, ligaments, bones, and other tissue. With sufficient precautions and care, risks can be minimized. Warming up slowly and cooling down properly can help prevent many stress injuries. To be effective, your warm-up and cool-down exercises should use the same muscles as your main exercise. For example, if you jog, begin by walking for several minutes, then jog slowly, before breaking into a full stride. Do this before and after your regular exercise. Every athlete should include a 15-minute warm up and cool down program as part of the workout. This will increase flexibility, reduce muscle soreness, and improve overall performance. Other good principles to follow during exercise are: know your body's limitations and warning signals; drink plenty of water; and never combine heavy eating with heavy exercising. For more information on the benefits of warming up and cooling down, consult a physician.
|
A group of researchers have developed a way to identify pirated movies by reducing the original to a signature genetic code. The system can match even videos that have been altered or had their colors changed to the source, an area where many video piracy mechanisms fall short.
Drs. Alex and Michael Bronstein and Professor Ron Kimmel have come up with a way to isolate a certain subset of data from video files that serves a role analogous to that of a fingerprint at a crime scene. While the creators haven't published research on this exact project, in order to guard the proprietary technology, it works by applying a series of grids over the film to reduce it to a series of numbers.
Once the film has been systematically reduced, copyright holders can take the "DNA signature" of the video and scan sites that host pirated videos for it. According to the three researchers, the signature should be able to find correct matches even if the videos' borders have been changed, commercials have been added, or scenes have been edited out, which is a capability that sites that patrol for piracy, like YouTube, currently lack.
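The actual "video DNA" computation is proprietary and unpublished, so the sketch below is only a generic illustration of the grid-and-match idea described above: each frame is reduced to a coarse grid of numbers, the per-frame signatures are stacked into a clip fingerprint, and a sliding comparison scores how well two fingerprints align. The 8x8 grid, median thresholding and bit-matching score are assumptions made for the sake of the example, not the Bronstein/Kimmel algorithm.

```python
# Toy grid-based video fingerprinting (an illustration of the general idea,
# NOT the proprietary "video DNA" method described in the article).
import numpy as np

GRID = 8  # assumed coarse grid size; the real system's parameters are unknown

def frame_signature(frame: np.ndarray) -> np.ndarray:
    """Reduce one grayscale frame (2-D array) to a GRID x GRID bit pattern."""
    h, w = frame.shape
    # Average the pixels inside each grid cell.
    cells = frame[: GRID * (h // GRID), : GRID * (w // GRID)].reshape(
        GRID, h // GRID, GRID, w // GRID
    ).mean(axis=(1, 3))
    # Thresholding against the median keeps the signature stable under
    # brightness and color shifts, one of the robustness goals mentioned above.
    return (cells > np.median(cells)).astype(np.uint8).ravel()

def video_signature(frames) -> np.ndarray:
    """Stack per-frame signatures into one fingerprint for the whole clip."""
    return np.array([frame_signature(f) for f in frames])

def match_score(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Best fraction of matching bits over all alignments of the shorter clip."""
    short, long_ = (sig_a, sig_b) if len(sig_a) <= len(sig_b) else (sig_b, sig_a)
    best = 0.0
    for offset in range(len(long_) - len(short) + 1):
        window = long_[offset : offset + len(short)]
        best = max(best, float((window == short).mean()))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = [rng.random((240, 320)) for _ in range(50)]
    # A "pirated" copy: trimmed clip, cropped borders, shifted brightness.
    pirated = [f[10:-10, 16:-16] * 0.8 + 0.1 for f in original[5:45]]
    print(match_score(video_signature(original), video_signature(pirated)))
```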
There are no details on the limitations of the system, such as video length, style, or quality. "We have a fully working prototype and have established a company that commercializes it," Dr. Bronstein told Ars.
While the website is no more revealing about how the video DNA matching works, Bronstein adds that they've already had a few companies interested in licensing the technology.
|
In the era of Facebook and Twitter, it's clear we love talking about ourselves—indeed, the topic makes up 40% of our everyday conversation. Now scientists can explain why: Doing so stimulates the brain the same way sex, food, and cash do. Researchers scanned subjects' brains and found that parts linked to those pleasures lit up when subjects chatted about themselves.
Researchers also offered study subjects up to 4 cents per question to talk about topics other than themselves, like President Obama. Turns out "people were even willing to forgo money in order to talk about themselves," says a researcher. Subjects would part with 17% to 25% of the potential cash for an opportunity for self-disclosure, the Wall Street Journal reports.
|
by Nate Jones, Vertebrate Ecology Lab
(still in the Bering Sea) … Of course the bad weather I’ve been writing about was nothing compared to what happens on the Bering during the months of February or March, and the Gold Rush fishes regularly during that time of year, so I had complete faith in the seaworthiness of the ship and the judgment and skill of the crew. I took comfort in that thought, and stumbled down to my bunk for what became a grueling 72 hours of bumps, rolls, and queasy stomachs. During this stormy time the crew exchanged watches at the helm, keeping the ship pointed into the fury.
We all hoped for the best, but by the time the seas had calmed to (a more manageable?) 8-10’, the hungry ocean had damaged and ripped off much of our scientific equipment, snapping several ¼” steel bolts and ripping welds clean apart!
The Gold Rush itself weathered this storm in fine shape (wish we could say the same of our scientific equipment!), and there were no major injuries to anyone on board. It really was quite a minor event in the context of the Bering Sea; just another blowy, bumpy day or two out on the water.
But, it impressed me and I couldn’t help contemplating darker scenarios – what happens when there is a true emergency? What if someone had been swept overboard, or, worse yet, what if the ship itself had been damaged or taken on water and started to go down? Such things do happen, although not as frequently now as they have in the past (coast guard regulations and improvements in technology and crew training have contributed to much increased safety).
In my next post I’ll put up some images from training exercises that are routinely undertaken to help prepare crew and passengers (scientists) for emergencies at sea…
|
Teachers, register your class or environmental club in our annual Solar Oven Challenge! Registration begins in the fall, and projects must be completed in the spring to be eligible for prizes and certificates.
Who can participate?
GreenLearning's Solar Oven Challenge is open to all Canadian classes. Past challenges have included participants from grade 3 through to grade 12. Older students often build solar ovens as part of the heat unit in their Science courses. Other students learn about solar energy as a project in an eco-class or recycling club.
How do you register?
1. Registration is now open to Canadian teachers. To register, send an email to Gordon Harrison at GreenLearning. Include your name, school, school address and phone number, and the grade level of the students who will be participating.
2. After you register, you will receive the Solar Oven Challenge Teacher's Guide with solar oven construction plans. Also see re-energy.ca for construction plans, student backgrounders, and related links on solar cooking and other forms of renewable energy. At re-energy.ca, you can also see submissions, photos and recipes from participants in past Solar Oven Challenges.
3. Build, test and bake with solar ovens!
4. Email us photos and descriptions of your creations by the deadline (usually the first week of June).
5. See your recipes and photos showcased at re-energy.ca. Winners will be listed there and in GreenLearning News.
|
Rurik Dynasty
Rurik Dynasty, princes of Kievan Rus and, later, Muscovy who, according to tradition, were descendants of the Varangian prince Rurik, who had been invited by the people of Novgorod to rule that city (c. 862); the Rurik princes maintained their control over Kievan Rus and, later, Muscovy until 1598.
Rurik’s successor Oleg (d. 912) conquered Kiev (c. 882) and established control of the trade route extending from Novgorod, along the Dnieper River, to the Black Sea. Igor (allegedly Rurik’s son; reigned 912–945) and his successors—his wife, St. Olga (regent 945–969), and their son Svyatoslav (reigned 945–972)—further extended their territories; Svyatoslav’s son Vladimir I (St. Vladimir; reigned c. 980–1015) consolidated the dynasty’s rule.
Vladimir compiled the first Kievan Rus law code and introduced Christianity into the country. He also organized the Kievan Rus lands into a cohesive confederation by distributing the major cities among his sons; the eldest was to be grand prince of Kiev, and the brothers were to succeed each other, moving up the hierarchy of cities toward Kiev, filling vacancies left by the advancement or death of an elder brother. The youngest brother was to be succeeded as grand prince by his eldest nephew whose father had been a grand prince. This succession pattern was generally followed through the reigns of Svyatopolk (1015–19); Yaroslav the Wise (1019–54); his sons Izyaslav (1054–68; 1069–73; and 1077–78), Svyatoslav (1073–76), and Vsevolod (1078–93); and Svyatopolk II (son of Izyaslav; reigned 1093–1113).
The successions were accomplished, however, amid continual civil wars. In addition to the princes’ unwillingness to adhere to the pattern and readiness to seize their positions by force instead, the system was upset whenever a city rejected the prince designated to rule it. It was also undermined by the tendency of the princes to settle in regions they ruled rather than move from city to city to become the prince of Kiev.
In 1097 all the princes of Kievan Rus met at Lyubech (northwest of Chernigov) and decided to divide their lands into patrimonial estates. The succession for grand prince, however, continued to be based on the generation pattern; thus, Vladimir Monomakh succeeded his cousin Svyatopolk II as grand prince of Kiev. During his reign (1113–25) Vladimir tried to restore unity to the lands of Kievan Rus; and his sons (Mstislav, reigned 1125–32; Yaropolk, 1132–39; Vyacheslav, 1139; and Yury Dolgoruky, 1149–57) succeeded him eventually, though not without some troubles in the 1140s.
Nevertheless, distinct branches of the dynasty established their own rule in the major centres of the country outside Kiev—Halicz, Novgorod, and Suzdal. The princes of these regions vied with each other for control of Kiev; but when Andrew Bogolyubsky of Suzdal finally conquered and sacked the city (1169), he returned to Vladimir (a city in the Suzdal principality) and transferred the seat of the grand prince to Vladimir. Andrew Bogolyubsky’s brother Vsevolod III succeeded him as grand prince of Vladimir (reigned 1176–1212); Vsevolod was followed by his sons Yury (1212–38), Yaroslav (1238–46), and Svyatoslav (1246–47) and his grandson Andrew (1247–52).
Alexander Nevsky (1252–63) succeeded his brother Andrew; and Alexander’s brothers and sons succeeded him. Furthering the tendency toward fragmentation, however, none moved to Vladimir but remained in their regional seats and secured their local princely houses. Thus, Alexander’s brother Yaroslav (grand prince of Vladimir, 1264–71) founded the house of Tver, and Alexander’s son Daniel founded the house of Moscow.
After the Mongol invasion (1240) the Russian princes were obliged to seek a patent from the Mongol khan in order to rule as grand prince. Rivalry for the patent, as well as for leadership in the grand principality of Vladimir, developed among the princely houses, particularly those of Tver and Moscow. Gradually, the princes of Moscow became dominant, forming the grand principality of Moscow (Muscovy), which they ruled until their male line died out in 1598.
|
By A. M. Sullivan
CHAPTER IX. (continued)
From the Atlas and Cyclopedia of Ireland (1900)
It was to the rugged and desolate Hebrides that Columba turned his face when he accepted the terrible penance of Molaise. He bade farewell to his relatives, and, with a few monks who insisted on accompanying him whithersoever he might go, launched his frail currochs from the northern shore. They landed first, or rather were carried by wind and stream, upon the little isle of Oronsay, close by Islay; and here for a moment they thought their future abode was to be. But when Columba, with the early morning, ascended the highest ground on the island, to take what he thought would be a harmless look toward the land of his heart, lo! on the dim horizon a faint blue ridge—the distant hills of Antrim! He averts his head and flies downward to the strand! Here they cannot stay, if his vow is to be kept. They betake them once more to the currochs, and steering further northward, eventually land upon Iona, thenceforth, till time shall be no more, to be famed as the sacred isle of Columba! Here landing, he ascended the loftiest of the hills upon the isle, and "gazing into the distance, found no longer any trace of Ireland upon the horizon." In Iona accordingly he resolved to make his home. The spot from whence St. Columba made this sorrowful survey is still called by the islesmen in the Gaelic tongue, Carn-cul-ri-Erinn, or the Cairn of Farewell—literally, The back turned on Ireland.
Writers without number have traced the glories of Iona. Here rose, as if by miracle, a city of churches; the isle became one vast monastery, and soon much too small for the crowds that still pressed thither. Then from the parent isle there went forth to the surrounding shores, and all over the mainland, off-shoot establishments and missionary colonies (all under the authority of Columba), until in time the Gospel light was ablaze on the hills of Albyn; and the names of St. Columba and Iona were on every tongue from Rome to the utmost limits of Europe!
"This man, whom we have seen so passionate, so irritable, so warlike and vindictive, became little by little the most gentle, the humblest, the most tender of friends and fathers. It was he, the great head of the Caledonian Church, who, kneeling before the strangers who came to Iona, or before the monks returning from their work, took off their shoes, washed their feet, and after having washed them, respectfully kissed them. But charity was still stronger than humility in that transfigured soul. No necessity, spiritual or temporal, found him indifferent. He devoted himself to the solace of all infirmities, all misery and pain, weeping often over those who did not weep for themselves.
"The work of transcription remained until his last day the occupation of his old age, as it had been the passion of his youth; it had such an attraction for him, and seemed to him so essential to a knowledge of the truth that, as we have already said, three hundred copies of the Holy Gospels, copied by his own hand, have been attributed to him."
But still Columba carried with him in his heart the great grief that made life for him a lengthened penance. "Far from having any prevision of the glory of Iona, his soul," says Montalembert, "was still swayed by a sentiment which never abandoned him—regret for his lost country. All his life he retained for Ireland the passionate tenderness of an exile, a love which displayed itself in the songs which have been preserved to us, and which date perhaps from the first moment of his exile. . . . 'Death in faultless Ireland is better than life without end in Albyn.' After this cry of despair follow strains more plaintive and submissive."
"But it was not only in these elegies, repeated and perhaps retouched by Irish bards and monks, but at each instant of his life, in season and out of season, that this love and passionate longing for his native country burst forth in words and musings; the narratives of his most trustworthy biographers are full of it. The most severe penance which he could have imagined for the guiltiest sinners who came to confess to him, was to impose upon them the same fate which he had voluntarily inflicted on himself—never to set foot again upon Irish soil! But when, instead of forbidding to sinners all access to that beloved isle, he had to smother his envy of those who hard the right and happiness to go there at their pleasure, he dared scarcely trust himself to name its name; and when speaking to his guests, or to the monks who were to return to Ireland, he would only say to them, 'you will return to the country that you love.' "
"We are now," said Dr. Johnson, " treading that illustrious island which was once the luminary of the Caledonian regions; whence savage clans and roving barbarians derived the benefits of knowledge and the blessings of religion....Far from me and from my friends be such frigid philosophy as may conduct us indifferent and unmoved over any ground which has been dignified by wisdom, bravery, or virtue. That man is little to be envied whose patriotism would not gain force upon the plain of Marathon, or whose piety would not grow warmer among the ruins of Iona."—Boswell's "Tour to the Hebrides."
From a sad, comfortless childhood Giles Truelove developed into a reclusive and uncommunicative man whose sole passion was books. For so long they were the only meaning to his existence. But when fate eventually intervened to have the outside world intrude upon his life, he began to discover emotions that he never knew he had.
A story for the genuine booklover, penned by an Irish bookseller under the pseudonym of Ralph St. John Featherstonehaugh.
FREE download 23rd - 27th May
|
Diabetic Ketoacidosis Symptoms
The body's cells need two things to function. The blood stream delivers both oxygen and glucose to the front door of the cell. The oxygen is invited in, but the glucose needs a key to open the door. The insulin molecule is that key. When we eat, the body senses the level of glucose in the blood stream and secretes just the right amount of insulin from the pancreas so that cells and the body can function.
People with diabetes don't have the luxury of that auto-sensing. They need to balance the amount of glucose intake with the amount of insulin that needs to be injected. Not enough insulin and the glucose levels in the blood stream start to rise; too much insulin, and they plummet.
The consequences of hypoglycemia (hypo=low, glycemia=glucose in the blood) are easy to understand. No energy source, no function - and the first organ to go is the brain. It needs glucose to function and without it, the brain shuts down quickly. Confusion, lethargy, and coma occur quickly. It's interesting that brain cells don't need insulin to open their doors to glucose, so when people develop coma from low blood sugar, they waken almost instantaneously upon treatment. Blood sugar is one of the first things checked at the scene of a comatose patient, because it's so easy to fix and very embarrassing for an EMT to miss.
|
History of Rapa Nui
Hotu Matu'a was the first and highest-ranking leader of Easter Island; he is believed to have brought his people to the island on 2 boats more than 1,000 (perhaps even 1,700) years ago.
Western literature refers to Hotu Matu'a as "Rapa Nui's first king"; although it is known that there were no real kings in Polynesia, but rather tribal rulers, we continue to use this term. Hotu Matu'a was considered the "Ariki Mau" by the locals, meaning something like a "major leader" or "highest ruler".
The settlement of the island
We can affirm with certainty that Easter Island has been inhabited for over 1,200 years. But specialists still debate when the first settlers, led by the legendary Hotu Matu'a, arrived.
Specialists consider that the island was colonized sometime between AD 300 and 800. Pollen analysis, DNA analysis and studies of local legends point to various periods within this interval. Of course, some people might have arrived later and others earlier, but it is generally accepted by the general public that the island was uninhabited before AD 300, despite the fact that there is scientific evidence that Easter Island was inhabited before Hotu Matu'a's arrival.
According to the legends, the Ariki Mau, Hotu Matu'a, arrived from an island or group of islands called Hiva. Linguistic analysis of the Rapa Nui language suggests that the place of origin was the Marquesas Islands.
Legends say that a person called Hau-Maka (Haumaka) had a dream in which his spirit travelled to an island located far away in order to look for new land for the ruler Hotu Matu'a.
Hau-Maka's dream trip took him to Mata Ki Te Rangi, meaning "Eyes that Look to the Sky", an island located in the center of the Earth. This piece of land was called "Te Pito 'o Te Kainga", meaning "center of the Earth".
After Hau-Maka woke up, he told Hotu Matu'a, the supreme leader, about his dream, and the leader ordered 7 men to travel to the island. So they did, and they returned to Hiva with the news that, indeed, there was new land far away. Following this discovery, Hotu Matu'a traveled with 2 boats of settlers and colonized what we call today "Easter Island".
Several hundred years ago a bloody conflict broke out on Easter Island. This is attributed to a variety of factors: remoteness, overpopulation, deforestation and tribal rivalry.
Easter Island is one of the most isolated islands in the world. Even today, if you fly on a modern airplane from Santiago, Chile, it takes 5-6 hours to get there. Imagine how difficult the journey was about 1,500 years ago!
It is believed that this island was formed by ancient volcanic eruptions.
Roggeveen, the Dutch discoverer of the island, estimated 2,000-3,000 inhabitants in 1722. But specialists who have analyzed the bones and the legends have come to the conclusion that between the 1500s and the 1700s there could have been as many as 10,000-15,000 people living on the island.
Overpopulation could have been the primary reason why the locals started fighting each other. This is believed to have led to the splitting of the population into several tribes and families. Some think there were 2 tribes fighting; others believe there were multiple families.
During the fights, many moai statues and ahu platforms were destroyed and magnificent statues pulled down. Perhaps it was revenge against the god(s)? Or just anger at the constructors? Or perhaps anger towards the ancestors who had cut down so many trees in order to move the statues?
The tribal wars even led to cannibalism.
During Roggeveen's visit it was noticeable that life on the island had degenerated due to deforestation and the depletion of the island's natural resources.
Today, Easter Island has very few trees. This is because the locals used up the wood for firewood and for boat and house construction, and certainly cut down large numbers of trees irresponsibly to build tools to move the moai and put them into place.
Once there were forests of palm trees on Rapa Nui; now the only palm trees that exist were planted, as were all the other trees, which were brought here from other islands and the Americas.
The disappearance of the forests coincided with the conflict on the island. There was not enough wood to make fishing boats, so the islanders could forget about going out to fish and also about leaving the island! The disappearance of wood also led to a decrease in the number of birds, which could no longer build nests. The locals found themselves stuck for good on what they believed to be the "Center of the Earth".
The discovery of Easter Island
On Sunday, April 5th, 1722, the first Europeans arrived at the island, called by locals "Te Pito 'o Te Henua".
Because it was discovered on Easter, it was named "Easter Island".
The discoverer was Jacob Roggeveen, a Dutch captain.
The name we hear so often, "Rapa Nui", is a newer one, given to the island by Polynesians in the mid-1800s.
The oldest name known for this island is "Te Pito 'o Te Henua".
Over 800 statues remain on the island today. When Roggeveen discovered Easter Island, these were in pretty good shape, many still in place; afterwards many fell. It is generally believed that there were revolts and conflicts and that the islanders pulled them down. There are even theories pointing to the possibility of tsunami waves, which could have demolished moai statues; for example, it is strongly suggested that the site of Ahu Tongariki was destroyed by such a force coming from the ocean.
Recovery from the conflicts, colonization and more tragedy
Following the drastic decrease in population caused by the tribal violence and famine, Rapa Nui recovered only by the mid-1800s, when about 4,000 people lived there. But in the 1800s and the 1900s, more and more Europeans and South Americans arrived on Easter Island, which became part of Chile in 1888.
Tragically, many Rapa Nui people were forcibly deported to Peru and Chile, and many others died of diseases brought in by the white man.
All this almost led to the extermination of the whole population. In 1877 only 111 Rapa Nui people remained on the island.
Later, the island's population took a positive turn and many Polynesians, Amerindians and white men from Chile and Peru came to settle here.
Today tourism, fishing and some agriculture account for the main economic resources of the island. In fact, tourism, which has so far helped the island, may be its biggest threat as more and more people flock to this tiny triangular land on a weekly basis.
|
In a commentary in the February issue of the journal Nature, a team of scientists from University of California, San Francisco has suggested that sugar should be regulated like alcohol and cigarettes.
In an article entitled ‘The toxic truth about sugar’, the UCSF group suggests that like alcohol and tobacco, sugar is a toxic, addictive substance that should be highly regulated with taxes, laws on where and to whom it can be advertised, and even age-restricted sales.
In response, the American Beverage Association issued the following statement:
"The authors of this commentary attempt to address the critical global health issue of non-communicable diseases such as heart disease and diabetes. However, in doing so, their comparison of sugar to alcohol and tobacco is simply without scientific merit. Moreover, an isolated focus on a single ingredient such as sugar or fructose to address health issues noted by the World Health Organization to be caused by multiple factors, including tobacco use, harmful alcohol use, an unhealthy diet and lack of physical activity, is an oversimplification.
“There is no evidence that focusing solely on reducing sugar intake would have any meaningful public health impact. Importantly, we know that the body of scientific evidence does not support that sugar, in any of its various forms - including fructose, is a unique cause of chronic health conditions such as obesity, diabetes, hypertension, cardiovascular disease or metabolic syndrome."
Source: American Beverage Association
|
Blocking production of a pyruvate kinase splice-variant shows therapeutic promise
Cold Spring Harbor, N.Y.
– Cancer cells grow fast. That’s an essential characteristic of what makes them cancer cells. They’ve crashed through all the cell-cycle checkpoints and are continuously growing and dividing, far outstripping our normal cells. To do this they need to speed up their metabolism.
CSHL Professor Adrian Krainer and his team have found a way to target the cancer cell's metabolic process and, in doing so, specifically kill cancer cells.
Nearly 90 years ago the German chemist and Nobel laureate Otto Warburg proposed that cancer’s prime cause was a change in cell metabolism – i.e., in cells’ production and consumption of energy. In particular, cancer cells have a stubborn propensity to avoid using glucose as a source of energy. This is known as the Warburg Effect.
While metabolic changes are an important feature in the transformation of normal cells into cancer cells, they are not now thought to be cancer’s primary cause. Despite this, metabolic changes remain an attractive target for cancer therapy, as Krainer and colleagues show in a paper published online today in Open Biology, the open-access journal of Great Britain’s Royal Society.
Image caption: This image compares glioblastoma cells untreated or treated with antisense oligonucleotides (ASOs) that modulate splicing of PK-M. The cells are visible under light microscopy in the left column, and the DNA in their nuclei shows up with the blue dye DAPI in the second column. PK-M2 is visualized with a red stain in the third column, with the merged images in the fourth column. The second and third rows show cells that have been treated with ASOs; the red signal is nearly all gone, indicating that there is less PK-M2 and that the ASOs have worked. Image courtesy of Zhenxun Wang and Adrian Krainer.
One difference between metabolism in cancer and normal cells is the switch in cancer to the production of a different version, or isoform, of a protein produced from the pyruvate kinase-M (PK-M) gene. The protein version produced in normal cells is known as PK-M1, while the one produced by cancer cells is known as PK-M2.
PK-M2 is highly expressed in a broad range of cancer cells. It enables the cancer cell to consume far more glucose than normal, while using little of it for energy. Instead, the rest is used to make more material with which to build more cancer cells.
PK-M1 and PK-M2 are produced in a mutually exclusive manner -- one-at-a-time, from the same gene, by a mechanism known as alternative splicing. When a gene’s DNA is being copied into the messenger molecule known as mRNA, the intermediate template for making proteins, a cellular machine called the spliceosome cuts and pastes different pieces out of and into that mRNA molecule.
The non-essential parts that are edited out are known as introns, while the final protein-coding mRNA consists of a string of parts pasted together known as exons. The exon included in the PK-M1 coding sequence is known as exon 9, while in PK-M2 it is replaced by exon 10. In this way alternative splicing provides the cell with the ability to make multiple proteins from a single gene.
Krainer, an authority on alternative splicing, previously published research on the protein regulators that facilitate the splicing mechanism for PK-M. His team showed that expression of PK-M2 is favored in cancer cells by these proteins, which act to repress splicing for the PK-M1 isoform. In the study published today the team explains that it decided to target the splicing of PK-M using a technology called antisense, rather than target the proteins that regulate the splicing mechanism.
Using a panel of antisense oligonucleotides (ASOs), small bits of modified DNA designed to bind to mRNA targets, they screened for new splicing regulatory elements in the PK-M gene. The idea was that one or more ASOs would bind to a region of the RNA essential for splicing in exon 10 and reveal that site by preventing splicing of exon 10 from occurring.
Indeed, this is what happened. “We found we can force cancer cells to make the normal isoform, PK-M1,” sums up Krainer. In fact, a group of potent ASOs was found that bound to a previously unknown enhancer element in exon 10, i.e., an element that promotes expression of the PK-M2 isoform, thereby preventing its recognition by splicing-regulatory proteins. This initiated a switch that favored the PK-M1 isoform.
When they then deliberately targeted the PK-M2 isoform for repression in cells derived from a glioblastoma, a deadly brain cancer, all the cells died. They succumbed through what is known as programmed cell death or apoptosis -- a process whereby the cell shuts down its own machinery and chops up its own DNA in committing a form of cellular suicide.
As to why the cells die when PK-M2 is repressed: the team found it was not due to the concomitant increase in PK-M1 (the cells survived even when extra PK-M1 was introduced). Rather, it was the loss of the PK-M2 isoform that was associated with the death of the cancer cells. How this works is still unclear, but it is a subject of investigation in the Krainer laboratory.
The next step will be to take their ASO reagents into mouse models of cancer to see if they behave the same way there. While there are some technical and methodological obstacles to overcome, Krainer is optimistic.
“PK-M2 is preferentially expressed in cancer cells, a general feature of all types of cancer -- it’s a key switch in their metabolism,” he says. Thus targeting the alternative splicing mechanism of PK-M2 using ASOs has the potential to be a cancer therapeutic with many applications.
The paper can be obtained online at the following link: Zhenxun Wang, Hyun Yong Jeon, Frank Rigo, C. Frank Bennett and Adrian R. Krainer. 2012 Manipulation of PK-M mutually exclusive alternative splicing by antisense oligonucleotides. Open Biology 2: 120133. http://rsob.royalsocietypublishing.org/content/2/10/120133.full
The research described in this release was supported by the National Cancer Institute grant CA13106, the St. Giles Foundation, and a National Science Scholarship from the Agency for Science, Technology and Research, Singapore.
About Cold Spring Harbor Laboratory
Founded in 1890, Cold Spring Harbor Laboratory (CSHL) has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. CSHL is ranked number one in the world by Thomson Reuters for impact of its research in molecular biology and genetics. The Laboratory has been home to eight Nobel Prize winners. Today, CSHL's multidisciplinary scientific community is more than 360 scientists strong and its Meetings & Courses program hosts more than 12,500 scientists from around the world each year to its Long Island campus and its China center. Tens of thousands more benefit from the research, reviews, and ideas published in journals and books distributed internationally by CSHL Press. The Laboratory's education arm also includes a graduate school and programs for undergraduates as well as middle and high school students and teachers. CSHL is a private, not-for-profit institution on the north shore of Long Island. For more information, visit www.cshl.edu.
Written by: Edward Brydon, Science Writer
|
Some folks laugh at the notion of Uncle Sam reaching his hand literally into our backyards and regulating almost every drop of water. But, a bill in Congress would do just that. And if it passes, not just farmers and ranchers would be affected, but all landowners.
The Clean Water Restoration Act, or S. 787, gives the government the right to extend its reach to any body of water from farm ponds, to storm water retention basins, to roadside ditches, to desert washes, even to streets and gutters. The legislation leaves no water unregulated and could even impact standing rainwater in a dry area.
Private property owners beware.
While it has "restoration" in its title, it does anything but. The Clean Water Restoration Act is not a restoration of the Clean Water Act at all. It is a means for activists to remove any bounds from the scope of Clean Water Act jurisdiction to extend the government's regulatory reach. But, what the activists won't tell you is that the Clean Water Act is working, and has been for the last 36 years.
Put simply, this legislation would replace the term "navigable waters" in the Clean Water Act with "all interstate and intrastate waters." Farm Bureau supports the protection of U.S. navigable waters, as well as rivers and streams that flow to navigable waters -- all of which are already protected under current law. But, if the Clean Water Act is applied to all waters, farmers and ranchers would be significantly impacted due to the number of farming activities that would require permits.
Under this new law, areas that contain water only during a rain would be subject to full federal regulation. Further, not only would many areas not previously regulated require federal permits, those permits would be subject to challenge in federal court, delaying or halting these activities and resulting in a huge impact on rural economies.
Farmers and ranchers do a good job taking care of the land. As I often say, they are America's first environmentalists. They use modern conservation practices to protect our nation's water supplies. Many times these efforts are put in place voluntarily because farmers are driven by a strong stewardship ethic.
However, the restoration bill largely disregards the positive conservation role farmers and ranchers are playing. It replaces good works with strict rules. Rather than restore the Clean Water Act, it just brings a new truckload of restrictions for the people who do most to protect our water.
The Clean Water Restoration Act is regulatory overkill. It is written to give the federal government control of structures such as drainage ditches, which are only wet after rainfall. Taking these changes one step further, it would likely give federal regulators the ability to control everyday farming activities in adjacent fields.
Hard-working farm families can't afford, nor do they deserve, Uncle Sam's hand reaching into their backyards, their fields or even their puddles of rainwater.
Bob Stallman is president of the American Farm Bureau Federation.
|
The following chronology looks back at the problem of xenophobia since South Africa’s first democratic elections in 1994.
The Zulu-based Inkatha Freedom Party (IFP) threatens to take “physical action” if the government fails to respond to the perceived crisis of undocumented migrants in South Africa.
IFP leader and Minister of Home Affairs Mangosutho Buthelezi says in his first speech to parliament: “If we as South Africans are going to compete for scarce resources with millions of aliens who are pouring into South Africa, then we can bid goodbye to our Reconstruction and Development Programme.”
In December gangs of South Africans try to evict perceived “illegals” from Alexandra township, blaming them for increased crime, sexual attacks and unemployment. The campaign, lasting several weeks, is known as “Buyelekhaya” (Go back home).
A report by the Southern African Bishops’ Conference concludes: “There is no doubt that there is a very high level of xenophobia in our country … One of the main problems is that a variety of people have been lumped together under the title of ‘illegal immigrants’, and the whole situation of demonising immigrants is feeding the xenophobia phenomenon.”
Defence Minister Joe Modise links the issue of undocumented migration to increased crime in a newspaper interview.
In a speech to parliament, Home Affairs Minister Buthelezi claims “illegal aliens” cost South African taxpayers “billions of rands” each year.
A study co-authored by the Human Sciences Research Council and the Institute for Security Studies reports that 65 percent of South Africans support forced repatriation of undocumented migrants. White South Africans are found to be most hostile to migrants, with 93 percent expressing negative attitudes.
Local hawkers in central Johannesburg attack their foreign counterparts. The chairperson of the Inner Johannesburg Hawkers Committee is quoted as saying: “We are prepared to push them out of the city, come what may. My group is not prepared to let our government inherit a garbage city because of these leeches.”
A Southern African Migration Project (SAMP) survey of migrants in Lesotho, Mozambique and Zimbabwe shows that very few would wish to settle in South Africa. A related study of migrant entrepreneurs in Johannesburg finds that these street traders create an average of three jobs per business.
Three non-South Africans are killed by a mob on a train travelling between Pretoria and Johannesburg in what is described as a xenophobic attack.
In December The Roll Back Xenophobia Campaign is launched by a partnership of the South African Human Rights Commission (SAHRC), the National Consortium on Refugee Affairs and the United Nations High Commissioner for Refugees (UNHCR).
The Department of Home Affairs reports that the majority of deportations are of Mozambicans (141,506), followed by Zimbabweans (28,548).
A report by the SAHRC notes that xenophobia underpins police action against foreigners. People are apprehended for being “too dark” or “walking like a black foreigner”. Police also regularly destroy documents of black non-South Africans.
Sudanese refugee James Diop is seriously injured after being thrown from a train in Pretoria by a group of armed men. Kenyan Roy Ndeti and his room mate are shot in their home. Both incidents are described as xenophobic attacks.
In Operation Crackdown, a joint police and army sweep, over 7,000 people are arrested on suspicion of being illegal immigrants. In contrast, only 14 people are arrested for serious crimes.
A SAHRC report on the Lindela deportation centre, a holding facility for undocumented migrants, lists a series of abuses at the facility, including assault and the systematic denial of basic rights. The report notes that 20 percent of detainees claimed South African citizenship or that they were in the country legally.
According to the 2001 census, out of South Africa’s population of 45 million, just under one million foreigners are legally resident in the country. However, the Department of Home Affairs estimates there are more than seven million undocumented migrants.
Protests erupt at Lindela over claims of beatings and inmate deaths, coinciding with hearings into xenophobia by SAHRC and parliament’s portfolio committee on foreign affairs.
Cape Town’s Somali community claim that 40 traders have been the victims of targeted killings between August and September.
Somali-owned businesses in the informal settlement of Diepsloot, outside Johannesburg, are repeatedly torched.
In March UNHCR notes its concern over the increase in the number of xenophobic attacks on Somalis. The Somali community claims 400 people have been killed in the past decade.
In May more than 20 people are arrested after shops belonging to Somalis and other foreign nationals are torched during anti-government protests in Khutsong township, a small mining town about 50km southwest of Johannesburg. According to the International Organisation for Migration, 177,514 Zimbabweans deported from South Africa have passed through its reception centre across the border in Beitbridge since its opening in May 2006.
In March human rights organisations condemn a spate of xenophobic attacks around Pretoria that leave at least four people dead and hundreds homeless.
Sources include: IRIN, Human Rights Watch, SAMP, SAHRC, Centre for the Study of Violence and Reconciliation
|
Bipolar disorder, also known as manic-depressive illness, is a brain disorder that causes unusual shifts in mood, energy, activity levels, and the ability to carry out day-to-day tasks. Symptoms of bipolar disorder are severe. They are different from the normal ups and downs that everyone goes through from time to time. Bipolar disorder symptoms can result in damaged relationships, poor job or school performance, and even suicide. But bipolar disorder can be treated, and people with this illness can lead full and productive lives. Bipolar disorder often develops in a person's late teens or early adult years. At least half of all cases start before age 25. Some people have their first symptoms during childhood, while others may develop symptoms late in life. Bipolar disorder is not easy to spot when it starts. The symptoms may seem like separate problems, not recognized as parts of a larger problem. Some people suffer for years before they are properly diagnosed and treated. Like diabetes or heart disease, bipolar disorder is a long-term illness that must be carefully managed throughout a person's life.
|
Girl Scout Preserves Florida's Wildlife
March 23, 2012 – Miami, Fla. – Senior Girl Scout Caitlin Kaloostian earned the Gold Award, the highest award a Girl Scout can receive, by completing a new butterfly garden and wildlife-themed mural for the Florida Fish and Wildlife Conservation Commission (FWC).
The FWC works with community and youth organizations to demonstrate the importance of safeguarding Florida’s natural resources and to encourage the next generation of conservationists. To further this mission, Kaloostian held fundraisers such as garage sales and solicited help from fellow high school students to raise the funds necessary to complete the project. With the help of her troop, Girl Scout Troop 305, Kaloostian painted a mural at the FWC’s Division of Law Enforcement Office featuring native wildlife, including a Florida panther, a manatee, an alligator, fish and many other animals. The butterfly garden uses native plants to attract butterflies and birds. Both the mural and the butterfly garden will help to preserve Florida’s wildlife for the future.
|
Commercially pressed CDs and CD-R or CD-RW discs are fundamentally different technologies, which is why a commercial CD will continue to be readable long after a CD-R has become unusable.
A CD drive uses a focused laser beam that is reflected from the media surface in the CD disc. The beam is reflected onto a sensor that detects changes in the amount of energy that is reflected. The original (commercial) process used perforated aluminum as the media surface. When you use the term "pressed" you are using an old vinyl record term, but the production process is pretty much the same. There is a "master" disk that is put into a press which is filled with polycarbonate. The master disk has little pins sticking up everywhere there is to be a hole in the aluminum. The disk is cooled, and liquid aluminum is spun onto it. This results in an aluminum layer with holes in it.
When the disc is played, the laser reflects strongly from the shiny aluminum or less strongly (or not at all) from the holes. The reflection/non-reflection is translated into the ones and zeros of the binary data stored on the disk.
Over time the aluminum can oxidize or there can be other changes in the plastic and other materials that make the disc unusable. These are long term effects and the ultimate statistical life of a commercial CD is often debated, without conclusion, by the experts.
The CD-R and CD-RW do not use an aluminum media surface. Instead, they use a dye. When the disc is written, a high-powered laser causes spots on the disk to turn dark (hence the term "burning"). When played back, the sensor in the player sees the difference in reflectivity of the dark and not-so-dark spots as the binary data.
Unfortunately, because the dye is a light-sensitive chemical, over time it will fade. This can happen from the heat of the reading laser, from ambient light, and from chemical degradation in the dye and support media.
CD-R/RW media is safe for backup, and for creating alternate media (copying music files to play in your car so that if they are damaged from heat or wear out you can make another one, and preserve your originals elsewhere), and similar purposes. However, they are not safe for archival storage because they are not stable enough for that purpose.
Side note: when burning CD's for use in a car, for best results get "music CD's" which are designed for that application, or slow your burning speed down to 12x or 16x to get a darker spot from your high speed burner. The car will read the disc more reliably.
Insofar as tape storage is concerned, tape is also not a good archival choice of media. It's generally better than CD-R, although I haven't seen any comparative studies.
Major data centers that use tape storage refresh the storage periodically. Their Tape Management System (TMS) remembers the date the tape was recorded, and will call it up to be copied periodically. The old tape is then erased and reused until it reaches end of life (sometimes a fixed usage or time interval, sometimes when the number of recoverable errors reaches a threshold), at which time it is scrapped.
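To make that refresh cycle concrete, here is a minimal sketch in Python of the kind of age-and-usage policy described above. It is not any particular vendor's TMS; the three-year refresh interval and the 200-mount retirement limit are assumptions chosen purely for illustration.

from dataclasses import dataclass
from datetime import date, timedelta

REFRESH_AFTER = timedelta(days=3 * 365)   # assumed refresh interval (~3 years)
MAX_MOUNTS = 200                          # assumed end-of-life mount count

@dataclass
class Tape:
    label: str
    recorded: date   # date the tape was last written
    mounts: int      # how many times it has been used

def plan_refresh(tapes, today=None):
    """Decide, per tape, whether to keep it, recopy it, or scrap it."""
    today = today or date.today()
    actions = {}
    for tape in tapes:
        if tape.mounts >= MAX_MOUNTS:
            actions[tape.label] = "scrap"
        elif today - tape.recorded >= REFRESH_AFTER:
            actions[tape.label] = "recopy to fresh tape, then erase and reuse"
        else:
            actions[tape.label] = "keep"
    return actions

if __name__ == "__main__":
    library = [Tape("T0001", date(2001, 5, 1), 42),
               Tape("T0002", date(2004, 9, 12), 215),
               Tape("T0003", date(2006, 1, 3), 10)]
    for label, action in plan_refresh(library).items():
        print(label, "->", action)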
The whole issue of long-term archiving is complex, and goes beyond media. For media, if a data center stored its files on a 9-track magnetic tape twenty years ago, how would it retrieve that data today (you cannot find working 9-track drives)? What if it had used an early Magneto-Optical (MO) drive? Small businesses have trouble when their tape drive fails, and they can't buy another drive in that old format.
File formats are another problem. I have word processing documents that deceased family members created years ago. I no longer have word processing software that will import some of those formats. I can (sometimes) extract the raw text and then try to reformat it in a current program, but if I don't have a printed original I don't know how it was intended to be formatted.
The only archival format that has stood the test of time is paper.
Submitted by: Kevin G. of Dallas, TX
Well, Carl, that so-called expert sure has stirred the waters and a LOT of people are wondering about the same question. However, your friendly Federal Government has studied the problem even longer.
To be specific, the National Archives and Records Administration, which is in charge of all of the government's record archiving, has no standard on media storage, and asked NIST (the National Institute of Standards and Technology) to write a new standard on media durability.
If you have never heard of NIST, you're not alone, as NIST is more of a background organization, but suffice it to say, they're the ones who create the standards, references, and accuracy tests for all industries, from DNA to time accuracy (in fact, NIST operates one of the Internet "clocks" you can calibrate your PC to). NIST DNA reference material improves forensic DNA test accuracy. NIST also invented closed captioning and many other technologies, but enough about NIST.
A gentleman by the name of Fred Byers spent a whole year testing various media, and wrote a guide for NIST, aimed at librarians who need to archive information, on how to care for optical media such as CD-Rs and DVD-Rs. In the guide, he basically stated that with proper handling (store in low humidity, no scratching, stored vertically, etc.) a DVD-R should last 30 years with no fear of losing any information. However, that is NOT an absolute number, as it is dependent on a LARGE NUMBER OF FACTORS, some of which are in your control, and some not:
Factors that affect disc life expectancy include the following:
- Type -- recordable media is more durable than rewritable media
- Manufacturing quality -- you get what you pay for
- Condition of the disc before recording -- obvious
- Quality of the disc recording -- garbage in, garbage out
- Handling and maintenance -- scratches are bad for any disc
- Environmental conditions -- humidity and temperature can warp the disc, ruining the reflective layer in the media; light, especially UV light, can destroy the dye used in recordable media
Let us discuss each factor in a bit more detail.
All types of media can be damaged through warpage (disc bending), scratches, and reflective layer breakdown due to oxygen leakage.
Recordable media, in addition, is susceptible to UV rays, which affects the dye used in the process.
Rewritable media, with phase-change recording, is even more susceptible to UV rays and temperature.
It is generally acknowledged that certain brands of media are better than others, and often the stuff on sale is not the stuff you may want to buy and keep around.
What you may not know is that there are only about 16 media manufacturers in the world. They make the media for all the brands that you see in the market, and some brands/factories are known to make high-grade media (i.e. they tested best for maintaining data integrity, even when the media was subjected to aging tests). While few independent labs have done comprehensive tests, a test in Europe a while back for CD-Rs revealed that Taiyo Yuden (Verbatim), Kodak (Kodak), and TDK (TDK) kept the most data intact.
Condition of the disc before recording
A disc should be brand new when used. While the shelf life of blank media is up to 5 years, why take chances? Buy them as you need them.
Handling and maintenance
Scratches are bad for any disc, as they break open the substrate layer and allow air to tarnish the silver reflective layer inside.
Scratches also can make information on the media unreadable by interrupting the laser's path.
Environmental conditions -- humidity and temperature can warp the media, and exposure to UV light can destroy the dye used in CD-R's and DVD-R's.
Hope that answers your questions.
Submitted by: Kasey C. of San Francisco, CA
In the '80s, the CD was introduced to the market and portrayed as "THE" solution to vinyl records.
A CD could be thrown in a mud pool, stepped on, scratched; nothing would harm the CD.
Now we all know that CDs have a shorter lifetime than their vinyl counterparts and are more susceptible to errors. This is also true for the CD as a medium for recording software.
The early CDs were recorded at a maximum of 640 MB. Mostly not even 640 MB, but something like 528 MB. This made them less susceptible to scratches.
But as CD technology was in constant evolution, overburning a CD to 800 MB and more became common practice. The DVD was also introduced, offering 4/9 GB on a wafer the size of a CD.
It is obvious that the tracks are becoming so small that the finest scratch, the smallest fault, can ruin the CD/DVD forever.
Answering your question, there is no miracle solution to keep CDs/DVDs from deteriorating through age. But with a little bit of care, you can have many years of pleasure from your recordings.
1. Buy only CD/DVD from a good brand.
Buying low priced CD/DVD will mostly result in very disappointing experiences.
2. Don't overburn a CD/DVD.
While the overburn technique is now widely accepted by most software, it is still not fully reliable and mostly not approved by the CD manufacturers.
3. Put every CD back in the jewel case after use, clean them as prescribed by the manufacturer, and avoid as much as possible touching the reading surface of the CD.
As a final remark, CDs/DVDs are nowadays not expensive, and if you can make a backup of them, make a backup and store it in a safe place.
I use an external hard disk of 250 GB (<300 CDs) to store a backup of the CDs/DVDs I have. The price of the hard disk is about $100.
Thanks to this hard disk, I have many times been able to rescue recordings which would otherwise have been lost forever.
Hope this helps,
Submitted by: Carlos
You have just discovered what most people don't discover until they actually lose data: commercial CDs and home-burned CDs are not the same. While a commercial premade CD will last a very long time if it is cared for properly, a home-burned CD will begin to deteriorate. The reason is that the home-burned version uses dyes to accomplish what the premade CD does by having it built into the disk. This is, of course, an oversimplified explanation, but it will suffice.
There are a few ways to maximize the amount of time a CD will last. First of all, buy good quality CDs to begin with. Stick with brand names that you are familiar with and have used successfully in the past.
Do not assume that just because a blank CD is made by a well-known company that it will be high quality.
Test them out by actually using them. One of the best ways to do this is to use them for your regular system backups. Be sure to actually restore from those backups periodically (easier if you have another computer handy that you can wipe out data on) or else use a backup program that allows you to mount the backup as a "virtual drive" and retrieve data from it.
This lets you know if there is a problem with a brand deteriorating unusually fast.
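If wiping a spare machine to test a full restore is impractical, a lighter-weight check along the same lines is to compare checksums of the burned copy against the originals. The following is a minimal Python sketch of that idea; the folder paths are placeholders, and it assumes the backup was written to the disc as ordinary files rather than a proprietary archive format.

import hashlib
from pathlib import Path

def sha256(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        while True:
            block = handle.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

def compare_trees(original_dir, backup_dir):
    """Report files that are missing from the backup or whose contents differ."""
    original_dir, backup_dir = Path(original_dir), Path(backup_dir)
    problems = []
    for source in original_dir.rglob("*"):
        if not source.is_file():
            continue
        copy = backup_dir / source.relative_to(original_dir)
        if not copy.exists():
            problems.append(f"missing from backup: {source}")
        elif sha256(source) != sha256(copy):
            problems.append(f"contents differ: {source}")
    return problems

if __name__ == "__main__":
    # Placeholder paths: point these at your own data and the mounted disc.
    for problem in compare_trees("C:/ImportantFiles", "D:/"):
        print(problem)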
Second, never use labels on CDs. I found this out the hard way. Labels cause the CD to deteriorate much more rapidly than it otherwise would. Certain inks used in pens have been reported to do the same, but I have never encountered this problem, so it shouldn't be too severe. Do be certain, however, that you are gentle when marking CDs. Use a felt tip and do not press hard.
Third, put a note somewhere on the CD that tells you when you made it. This lets you monitor how long it has been since the CD was burned. If the data is irreplaceable, burn it to a new CD every 2 years.
As for the recommendation to use magnetic tapes, that has its own set of problems. Magnetic tapes also deteriorate, and they are subject to some damage that CDs are immune to, notably damage from electrical or magnetic fields.
In short, CDs are good for long term storage-- but don't assume that "long term" means forever. Check them regularly and burn them to new media when problems develop or even before if you can't replace what's on them.
As for storage, that is pretty much common sense. Keep the CDs in a case or an envelope if they are not actually being used. Avoid temperature extremes and handle gently. I also recommend making two copies of every important CD. This practice just saved my data when I discovered that the labels on my CDs had wiped out some irreplaceable family photos. It costs twice as much, but if the data is important to you then it isn't really very expensive, is it?
Submitted by: Denise R. of Lebanon, Missouri
Hi Carl N,
Your question has been asked by a lot of people over the last 10 years. I've burned CDs in the hundreds over the years and only found 2 discs with missing information. The lifetime of CDs is not limited by one parameter only; several issues set the limit. One is the conditions of storage and how you handle the discs -- in other words, how careless or careful you are as the user. Then the material used in the CDs -- how cheap a blank CD did you buy? And lastly your burning equipment, that is, the laser diode.
When pressed CDs were introduced in the beginning of the '80s, lovers of vinyl records claimed that CDs would last for only 2 years. But as you have experienced, CDs from this period can still be played. I remember one report from about 1990 which claimed a lifetime of only 3-4 years. Looking into the report, it turned out that the storage condition was -30C (about -22F) and the reading/playing equipment had a worn laser diode. Most of us can only say: I don't store my CDs in the freezer, and today's laser diodes don't wear out as they used to.
Turning towards recordable CDs, the whole issue is a matter of having a whole bunch of clear holes placed in circles in a foil. The readability of these holes depends on a number of issues. How clear are the holes? Is the edge of the hole clear? Is the reflectivity of the materials sufficient? Is the laser still as effective as it was, or has the surface become matte? For the early CD-burners this was jeopardized by increasing burning speed, and some blank CDs had doubtful foil material. Adding to this, some CD-burners were even sold with writing speeds beyond their capability. Many blank CDs were rejected in this period due to bad burners rather than bad discs. It is my conclusion that this interim period has given us some doubtful discs.
You have to be careful with your CDs, and a little bit extra careful with your own burned ones. They don't like heat, bright light or bending, and writing on them with aggressive pens is also nasty. Pens with unknown chemicals especially may etch the CDs; it is just like burning, but this time controlled by the chemicals.
In case you want to increase the lifetime of your recordings, you may buy CDs which are claimed to have a 300+ year lifetime. These CDs are referred to as gold CDs. They have a special layer which includes some 24ct gold. The advantage of these CDs is the ability to create clear holes in them with reduced oxidation or corrosion over time. Amazing, almost unbelievable: 300 years. Just 100 years would be great for me. In 10-20 years everything will be transferred to a new media type anyway. I saw a report on the 300-year claim at
The price of these gold CDs is 10-20 times that of the usual ones.
It has been suggested that you use magnetic tape. But tapes do not last forever either. As a matter of fact, the sound quality decays over time; the frequency range is decreased by each use. This loss cannot be restored, unlike with digital media. Only digital storage keeps its audio frequency range over time and use. Like R&R, digital media is here to stay. You may call it CD, DVD, MPx, Blu-ray or whatever, but it's digital.
I believe that today's discs and equipment can provide a disc with sufficient lifetime for most of us, and may even restore your more doubtful discs from the early burning days with success. Even discs which register as 'No disc' may be restored by copying them today. In case you want to reassure yourself, let the PC verify the burned disc; this option is normally disabled by default.
What shall I do with my precious discs from the early days? My best recommendation would be to make a new copy while the old one is still readable. This is easy and cheap for most of us today, as having two drives in your PC is not uncommon. Lastly, the quality and lifetime of recorded discs today is likely to depend most on your own care.
Submitted by: Leif M. of Helsingor, Denmark
Regarding problems developing over time with recordable CD-R media, I've run into some of this myself, but I also have quite a few discs that were made back when the very first 1x CD burners were made available to the public, and they still read just fine for me.
I suspect that there are several factors involved here.
1. I'm certain there's a difference in quality between brands of CD-R media. A number of my really old CD-Rs that still read flawlessly today were Kodak branded, and were considered expensive "premium quality" discs at the time. They're even physically a little bit thicker than most other media I've handled. By contrast, some of the generic media I purchased because of the low price on 100-pack spindles has actually developed "bubbles" where you can see the dye that's sandwiched between the layers of plastic is disintegrating. (Of course it won't read if small spots are completely gone!) There were/are several different types of dye used for CD-R media, as well, and I wouldn't be surprised if it's turning out that some types have better longevity than others. For example, Verbatim was known for using their trademark blue-tinted dye, while others were shades of green or gold.
2. From what I've read and observed, handling makes a big difference too. Leaving your CD-R's exposed to sunlight (as folks tend to do with music CDs used in their cars or trucks) probably shaves years off of their lifespan. Putting them in some type of jewel-case or sleeve when not in use is a very good idea. Boxes of empty jewel-cases can be purchased fairly inexpensively at most office supply and electronics chain stores.
3. A CD-R holding computer data is inherently more "fragile" and subject to data loss than a CD-R recorded as an audio disc. The standard used for recording audio CDs incorporates quite a bit of error correction information to handle small scuffs and scratches on the media, but besides that, audio data is spread out over a much larger portion of the CD-R. If you have a .ZIP file stored on a CD-R, for example, a pinhole-sized mark someplace on the disc where that .ZIP file is stored can easily be enough to prevent the whole archive from extracting properly. By contrast, the same sized mark might only cause a very brief "stutter" at one point of a song on a music disc (or not pose a discernable problem at all, due to the error correction).
If your audio discs are already deteriorating to the point where players are rejecting them as "unreadable" or they're skipping badly, it sounds to me like things have gotten pretty bad. The only recommendation I'd have is to re-record your music to fresh, good-quality CD-R media and throw out the old ones - and in the future, make a habit of transferring your music to fresh discs every few years or so.
Luckily, in the case of computer software backups, they tend to become so outdated, you no longer really need to keep them by the time the media they're recorded on starts failing. But for those trying to preserve digital photos and the like, I'd recommend this same procedure. Make a fresh set of backups every so often and discard the old media - before it fails on you and you lose something priceless!
Submitted by: Tom W.
Burned CD media fails over time.
I am a practicing technician and this is not a new complaint. It is my firm belief that most consumers burn their media at the fastest speed possible for both their software and the media they use. This is fine but there may well be a trade-off in doing this.
What most consumers do not perhaps understand is that commercially produced CDs have actual pits pressed into them that represent the digital data of the original sound. A burned CD, on the other hand, is made by fabricating a photosensitive layer to mimic the pits found in pressed media.
I have found three major causes for this consumer's problem. They are as follows:
1. A slower burn makes a stronger image representation in the photosensitive layer of a burned CD. A faster burn, while successful, may not impress the photosensitive layer as effectively as a slow burn. Over time the burn fails as the photosensitive layer deteriorates.
2. Sunlight and other forms of intense light can affect a burned CD because they can cause a distortion in the burned media's photosensitive layer.
3. Scratches are by far more evident on burned media and more easily caused than on pressed media. Most consumers seem to ignore the manufacturer's warnings and suggestions. Handling the disc in a careful manner as advised by the manufacturer is the best policy here. I use a camera lens cloth to clean the surface of all my media. A camera lens cloth will not scratch the disc surface. Paper and regular household cloths will cause scratches.
Observe the above and I do believe you will have better results.
One more thing: always use the media recommended by the burner manufacturer. It is endorsed and guaranteed to work; many of the cheap no-name discs out there are just not up to par. It's just like the old cassette tape days.
Most audiophiles went for tapes like Maxell, JVC, Sony, etc., but as everyone knows there were a lot of bogus brands out there for the uninformed to purchase.
Submitted by: Peter K.
> I recently read an article by a data storage expert who claimed that
> burned CD-Rs and CD-RWs can be expected to last only two to five years
> and not a whole lot more. I personally have commercially pressed CDs
> from the 1980s that still play fine, but I have begun to notice that
> some of my burned CD-Rs are beginning to skip
You mention that there are basically two types of CDs: those that are created with all information in place and those you can buy and write on.
The first type is quite robust as the information has been "engraved" into the surface just below the reflector. The most critical part of such a CD is the reflector, most often a very thin layer of aluminum.
The second type of CD works a bit differently: there is a dye layer below the reflector, and the information is written onto the CD-R(W) by "burning" it, thereby locally changing the optical properties of the dye. The most critical part is the dye, besides the reflector as above. If the dye degrades, the CD easily becomes unreadable. The dye of a CD-RW is even more critical, as it must be "resettable" - another constraint.
> The expert suggests that for secure long-term storage, high -quality
> magnetic tape is the way to go.
This solution is quite expensive, as you need a tape drive and enough tape cartridges, but it has the advantage of a much larger storage capacity. If the manufacturers say their tape cartridges are reliable for a very long time, they have one advantage over CD-R: this type of storage device has been around long enough to prove it. CD-R has been on the market for no more than 10 years.
The best strategy for the private user is: Have a good archive strategy, save often and store the media carefully in a dry, dark, cool place. If you store every file more than once you have a better chance to retrieve it.
There are no real alternatives to CD-R. Use high-quality ones. Do not use any DVD variety, as their reliability is much lower. DVD may be used for an image backup of your boot drive so you can restore your present configuration in the months to come.
Submitted by: Alexander V.
Unlike pressed original CDs, burned CDs have a relatively short life span of between two and five years, depending on the quality of the CD. There are a few things you can do to extend the life of a burned CD, like keeping the disc in a cool, dark space, but not a whole lot more.
The problem is material degradation. Optical discs commonly used for burning, such as CD-R and CD-RW, have a recording surface consisting of a layer of dye that can be modified by heat to store data. The degradation process can result in the data "shifting" on the surface and thus becoming unreadable to the laser beam.
Many of the cheap burnable CDs available at discount stores have a life span of around two years. In fact, some of the better-quality discs offer a longer life span, of a maximum of five years. Distinguishing high-quality burnable CDs from low-quality discs is difficult, I think because few vendors use life span as a selling point.
I've had good luck with Verbatim media, and bad results with TDK. Playback with the TDK discs I used degraded steadily over time, in spite of very little use, and not much in the way of scratches or other blemishes on the disc. On the other hand, the Verbatim discs I've used have held up well over time, and under more use than the TDK ones I used.
Opinions vary on how to preserve data on digital storage media, such as optical CDs and DVDs. I have my own view: to overcome the preservation limitations of burnable CDs, I'm suggesting using magnetic tapes, which, as I have read, can have a life span of 30 to 100 years, depending on their quality. Even though magnetic tapes are also subject to degradation, they're still the superior storage medium.
But I want to point out that no storage medium lasts forever and, consequently, consumers and businesses alike need to have a migration plan to new storage technologies.
A good question to ask on this subject is: does burning speed make a difference in the quality of CDs? Someone told me that the burning speed makes a difference in the quality of the recordings. The lower it is, the deeper it burns and therefore the better the quality is. I have heard that some audio technicians decide to burn masters at 2x and copies at 4x because they get digital noise from higher burn rates. It might just depend on the burner quality and the burning program...
I hope you get the point of my explanation.
Submitted by: Sameer T.
Everyone who owns a business is constantly being enticed by the security and longevity of magnetic tape. And although I'm apt to agree on its durability, I don't use it to back up important data in my business. I have two problems with magnetic tape vs. CD or DVD. The first problem is hardware. Data backed up on a CD or DVD can be loaded into any computer with a drive capable of handling the media. The same can be said about tape backup; however, you are more likely to find a computer off the shelf with a compatible CD or DVD drive than with a magnetic tape drive. The other problem is the need for long-term storage.
As a business owner, I'm backing up my important data every one to three days. I've been using CD-RW media to do this for years. If a disk gets corrupt, you can reformat it using your burning software, then use it again. If you are concerned about your CD becoming corrupt, simply burn two or three. The cost of three CD or even DVD media is much more reasonable than the cost of one of those tapes. And if my server dies, I can buy any computer at any store, and load the data onto the new computer right away and I'm back in the game. I have several people trying to convince me of the benefit of a paper backup system. I find it easier (and cheaper) to have multiple electronic backups. My business server has a RAID 1 card and two hard drives which mirror each other. I have the CD backup, and I then take this data and save it to a secure partition on another machine in a separate location.
The likelihood of all of these systems failing at one time is very low. And if they do, I'm taking the day off, because that's just real bad luck. As far as long-term storage goes (like music), I've noticed that CD-RW can lose the data in the long haul, but I haven't had any problems with CD-R media. I have some that skip, but not for a reason I didn't know about. I buy the large spools of CD-R which don't come with jewel cases, so these disks get abused. If you know you are going to keep something for a really long time, I would make sure the disks you buy have jewel cases. And you can apply the multiple-disk system with this as well. The media is easy on the wallet, and the more backups you have, the lower the risk that you will actually lose the data.
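To illustrate the multiple-copies idea in script form, here is a minimal Python sketch that copies one backup archive to several destinations and confirms that each copy's checksum matches the source. The archive name and destination paths are placeholders only, not a description of any particular setup.

import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(1 << 20), b""):
            digest.update(block)
    return digest.hexdigest()

def replicate(archive, destinations):
    """Copy one archive to every destination and verify each copy."""
    archive = Path(archive)
    source_hash = sha256(archive)
    for destination in destinations:
        target = Path(destination) / archive.name
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(archive, target)
        status = "ok" if sha256(target) == source_hash else "MISMATCH"
        print(f"{target}: {status}")

if __name__ == "__main__":
    # Placeholder archive and destinations: a second disk, a network share, etc.
    replicate("backup-weekly.zip", ["E:/backups", "//fileserver/backups"])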
I don't think I answered all the questions, but that's my take.
Submitted by: Dave K.
|
'Old Earth Scientists'... I've never heard that before... You aren't suggesting that there are 'new earth scientists', are you?
Well, sort of. There are commonly two types of scientists: old earth (who believe that the earth is billions of years old) and young earth (who believe that the earth is around 6,000 years old).
As far as science is concerned, the big bang occurred between approx 14 - 18 billion years ago
As stated above, there are both old earth scientists and young earth scientists. Old earth scientists believe in the big bang theory and that the age of the earth is in the order of billions of years. Having said that, perhaps the above statement should read "As far as old earth scientists believe, the big bang occurred between approx 14 - 18 billion years ago." Furthermore, when you say "concerned" it makes the assumption that the big bang actually did happen. The big bang is a theory, and unless scientists can replicate it, it will forever remain a theory.
That's not a theory formed by 'old earth scientists'; that is calculated using every method we have at our disposal
Unfortunately your statement falls short from the beginning. Remember, the big bang theory is just that, a theory.
Measuring the expansion rate of the universe, measuring light from distant stars, etc. There are too many to mention.
The Bible also confirms that the universe is expanding. Isaiah 40:22 teaches that God “stretches out the heavens like a curtain, and spreads them out like a tent to dwell in.” This verse was written thousands of years before secular scientists accepted an expanding universe. It was only more recently that scientists changed their minds from the universe being constant to actually expanding.
There are a few theories floating around with respect to the apparent red shift of stellar objects. Old earth scientists believe it to be a result of bodies moving away from earth. As such, they have suggested that there should be no fully formed stellar bodies further away than about 8 billion light years. Astronomers have pointed telescopes into supposed redshift deserts (i.e. locations in space where there should be no fully formed bodies) and they found a sky full of fully formed galaxies.
Measuring light from distant stars relies on the assumption that light has always moved at a constant rate, which unfortunately has not been proven.
1. The moon moves away from the earth at around 4cm per year. If the earth was billions of years old, the moon could not be as close to the earth as it is.
That suggests that the moon has been in orbit around the earth for the whole 4.5 billion years... it hasn't
Unfortunately this is not what old earth scientists believe. They believe that the earth and moon have been around for over 4 billion years.
2. Oil deposits in the earth are under extreme pressure. If the earth was billions of years old, this pressure would have caused the oil to seep through the rock layers and eventually the pressure would all be gone - i.e. there would be no oil under pressure today
The oil deposits aren't 4.5 billion years old either... they are from rotting animal/vegetable sources from much later... millions of years, not billions
I should have written this statement differently, i.e. millions of years. The problem still stands, however, that if oil was around millions of years ago, then it could not be under pressure today.
3. The sun is shrinking at a rate of five feet per hour. This means that the sun would have been touching the earth a mere 11 million years ago (let alone billions of years ago)
No, that assumes a constant-state universe... the universe is very far from constant... it's expanding and has been since the beginning. Nobody has ever suggested that the earth - moon - sun position has been in existence, let alone constant, since the big bang.
Don't old earth scientists make assumptions also? If you look above, old earth scientists make the assumption that the speed of light is constant. Furthermore they still hold to the assumption that the earth, moon and sun have been around for over 4 billion years.
4. Helium is added to the atmosphere every day. Basically there is not enough helium in the atmosphere to support billions of years.
Helium hasn't been added for 4.5 billion years... again, the earth wouldn't have had an atmosphere until recently (recent relative to its 4.5 billion year age)
According to old earth scientists, the oxygen-enriched atmosphere (basically as we know it today) was formed around 2.7 billion years ago. The amount of helium contained within our atmosphere today is only enough to support thousands of years, certainly not billions.
5. Comets lose mass over time; there would be no comets left if the universe was billions of years old (because comets were apparently a by-product of the big bang)
That's misleading. The origin and time of origin of comets is not claimed to be the big bang. That's a straw man.
(I am guessing that a straw man is another way of saying clutching at straws?)
Again, with this one I should not have just skimmed over it but should have elaborated. Comets have long been good evidence due to their fragile nature and life expectancy. Comets are commonly huge chunks of ice traveling at tremendous speeds through space; when they come close to a star they begin to melt and so form a trail of moisture. This can't last forever and the comet will eventually disintegrate. Herein lies a problem for old earth scientists, because there should be no comets left - they should all have disintegrated by now (given the billions of years). And if we are talking about clutching at straws, here's a good one for you.
Old earth scientists have come up with another theory to try and explain why we still have comets today. So in comes the Oort Cloud. The Oort Cloud is a hypothetical spherical cloud of comets which may lie roughly 1 light year away from our sun. Apparently, these comets become dislodged from the Oort Cloud by the gravitational pull of passing stars and the Milky Way itself (due to it apparently being at the outer edges of our Milky Way). These comets are then free to move about and disintegrate (which is how we see comets today). Now, this Oort Cloud has not been detected or seen; it is another theory - it is just a hypothetical cloud to try and fit in with the mold of an old universe.
6. The earth's magnetic field decays by approximately 5% every century. This means that a mere 10,000 years ago, the earth's magnetic field would have been so strong that the heat it would have produced would have made life on earth impossible.
No doubt taken from Barnes's 1973 magnetic field argument. The decay rate he stated has been debunked and shown to be flawed.
How has it been debunked?
7. Fossilized dinosaur bones - these bones have been found and it is impossible for them to have lasted for millions of years.
Why not? They have.
The evidence available suggests an asteroid hit the earth approx 65 million years ago, leading to a catastrophic global event. There is a layer of iridium in the earth's stratigraphy that supports this theory.
Speaking of clutching at straws - "Why not? They have." This goes against what old earth scientists have been telling us for years! Blood cells decay at a much faster rate than the rate at which bones can fossilize. How then can you have a fossilized dinosaur bone which contains blood cells?
If we are talking about debunking theories or practices, radiocarbon dating techniques have terrible flaws and rely on many assumptions. Therefore how can you be sure that your 65 million years is accurate?
8. Salt is added every day to the Dead Sea by inflows. Since it has no outlet, the salt content continues to grow. The amount of salt contained within it is not enough to support billions of years.
The Dead Sea didn't spring into existence billions of years ago. It's a result of millions of years of constant change on the earth by volcanic, tectonic and atmospheric activity. The Dead Sea is a baby compared to the age of the earth
I would have thought that you would line up the forming of the seas as we know them now with the catastrophic global event that wiped out the dinosaurs. If not that, then what are you basing your idea on that the Dead Sea is a baby compared to the age of the earth? Are we talking thousands of years, hundreds of thousands, millions or perhaps billions?
9. The earth's population doubles every 50 years (approx). It would take around 4,000 years to reach the number of people that are on earth today (which lines up nicely with the worldwide flood of Noah's day). If we use this figure over millions of years, the earth could not contain that number of people.
That also matches the evolution model. The expansion in the earth's population is also linked to the expansion of civilisation... not just the existence of humans and their descendants.
Could you expand on which evolution model you're referring to?
10. Spiral galaxies appear this way due to their 'rotation'; this rotation would eventually cause them to straighten out, i.e. lose their spiral. There should be no spiral galaxies if the universe was actually billions of years old.
That again is a straw man. The big bang theory doesn't suggest spiral galaxies popping into existence at the moment of the big bang. They are formed over many millions of years.
Why not? The big bang suggests that everything else popped into existence at the moment of the big bang. If this is not the case, then how did they form?
The earth, the universe and everything in it were brought about in creation week. It was a divine event brought about by a supernatural creator.
No it wasn't (that which can be asserted without evidence can also be dismissed without evidence)
We have just been discussing a page full of evidence!
And faith.....
Would you build an electronic project based on faith? Would you cross the road by faith?
But you yourself are obviously a man of great faith. You believe that the universe and all it contains was brought about by a supposed big bang. To put it lightly: 'Nothing became something and the something exploded.'
Where did this matter come from in the first place? Doesn't the big bang go against the law of conservation of mass and energy?
If you are dismissing faith, then you must have proof of the big bang. You obviously weren't there when the supposed big bang took place, so therefore it would stand to reason that you can replicate the big bang - after all, we are dismissing faith here.
If I am sick I see a doctor, if I have trouble seeing I go to an optician, etc. Faith would not heal me or make me see. Rather, countless selfless individuals who over thousands of years have devoted their lives to bettering mankind.
Yes indeed! Isn't it interesting how even though we apparently all stemmed from a common singularity we are all unique and have our own special gifts and talents? If we look to God's word though, we find that we all have been given these unique gifts and talents - some to be doctors, some to be opticians, some to make super pong tables and some to be astronauts!
But back on topic, isn't there an underlying reason that you go to a doctor? You go specifically to a doctor because you have faith in him. If you didn't have faith in him and all his years of training then you would just go to anyone, wouldn't you?
It's just not the case at all. For a start, evolution doesn't need a set of ready-to-assemble parts lying around. It's a process beginning with the smallest building blocks at the chemical level and taking millions and millions of years to progress.
Fair enough. Let's walk through this one step at a time, starting from the beginning: how did the very first building block get here?
Also, a 747 (or an LED pong table) isn't carrying around obsolete parts of earlier, less successful aircraft in its frame like we are.
Could you list these supposed obsolete parts and explain why they are not required? (I think you'll find that every part of our body plays its own important role.)
You say that you have faith in fellow humans. Why is that? If we are just a result of random chemical reactions then why do you trust in them?
On that note, why does anyone have morals? Why do we have laws and rules? If we are the by-product of natural selection, in that it is survival of the fittest, who is to say that I can't go out and kill someone - after all, this is how we supposedly came to be!
Do you feel sorrow when a family member or close friend dies? I am guessing that you would, but hold on a second - why on earth would you get sad if this is simply what you are arguing for in motion? To expand, if we are brought about by the strongest cells living on and the weaker ones dying off, isn't it good that your family member or friend has died, because it means that the strong have survived and the weak are now dead? You should be sitting there giving high fives to everyone, shouting "Way to go, natural selection!"
And finally, why on earth would scientists use evidence from the past to predict the future? If the universe came about by disorder and random chemical reactions, then how on earth could we use this information to reliably predict the future? Uniformity does not make any sense in a universe created by random chance and disorder.
Of course this is not the case; we find that the universe's history is very much ordered because God designed it that way.
|
Cancer of the cervix (sir-vix) is one of the most common cancers in women. There seems to be a connection between cervical cancer and sexual activity at an early age, especially when multiple partners are involved. Cervical cancer grows without symptoms, which is why a yearly Pap smear is so important. A Pap smear can detect the presence of cancer cells at an early stage. When precancerous cells are found, usually called dysplasia (dis-PLAY-zha), they can be removed in the doctor's office using various procedures that burn or freeze the cells off the cervix. If the cancer has advanced, the recommended treatment usually includes a combination of chemotherapy, radiation and surgery, which will prevent a woman from bearing children. For more information about cervical cancer, contact your healthcare provider.
|
Race for survival
On the brink of extinction, Honu'ea struggle to the sea
October 23, 2008
A full moon floods the wide swath of sand that conceals Orion's nest. Big Beach is lit up so bright we can see where the first ones emerged, the football-sized divot on a small mound of sand cordoned off with yellow caution tape.
This is the third and final full moon to hit the still-gravid mound.
Sheryl King, a biologist with the Hawaii Wildlife Fund, sits at the head of a circle consisting of a dozen or so of us. She explains what we are to do should more baby honu'ea—hawksbill turtles—dig their way out on one of our shifts: make sure they head in the right direction—toward the sea. Keep cats, mongooses, and crabs away. If one flips over in a footprint, push up sand beneath it so it can right itself, but don't ever touch a hatchling.
We determine who stakes out when, then hit the hay, or rather the sand.
Hawksbill nests typically gestate for around 60 days, King said, but she adjusts a nest's "due date" according to various factors, among them temperature and shade.
King spotted Orion, the mother, depositing this particular clutch around 64 days prior to the first hatchlings' emergence. She was watching for nesting hawksbills on the dawn patrol, a U.S. Fish and Wildlife Service effort to spot nesting females as part of the Honu'ea Recovery Project. She estimated the nest would begin to hatch on October 11. She was only two days off.
The project takes place with help from several entities, including FWS, the Hawaii Department of Land and Natural Resources and the National Oceanic and Atmospheric Administration. The aim is to get the honu'ea population up to a more stable number.
Orion herself likely hatched very close to the spot where she dropped off her most recent batch.
"They tend to return to their natal beach," said HWF co-founder Hannah Bernard. "We don't really understand how they find their way."
Yet Orion doesn't stick around for very long after nesting. According to King, she spends most of her days off the coast of Oahu and comes to the vicinity of Big Beach every three to four years just to nest.
"I first tracked her, and named her, in 2001," King said. "We've tracked her with satellite transmitters so we have a good handle on her movements."
This is Orion's third or fourth nest this season. Two other nests were laid on island this year by an as yet unidentified female, which Bernard says is a good thing—one more nesting female adding to the species' extremely small gene pool.
This is one of only ten or so nesting areas archipelago-wide. There are three on Maui. Other sites include Kameahame Beach on the Big Island and a black sand beach at the mouth of Moloka'i's Halawa River. Ninety percent of honu'ea nesting occurs on the Ka'u Coast of the Big Island.
Nests contain an average of 140 eggs. But while a single hawksbill may lay nearly a thousand eggs in a given year, Hawaii's honu'ea aren't exactly thriving. King said that they have a one in 10,000 chance of making it to adulthood. Volunteers stake out the nest for 24 hours a day as the due date approaches to help ensure the hatchlings' instinctual seaward striving goes without predatory incident.
HWF volunteer coordinator Angie Hofmann compares the hatching of a sea turtle nest to childbirth. Everyone was antsy in the days leading up to the hatching. A handful of volunteers parked nest-side in beach chairs day and night, eyes locked on the mound for even the tiniest movement. One volunteer called it a "watched pot."
Only this one boils.
The first batch emerged at around 5am on Monday, October 13. Forty-eight hatchlings made their way to the water that morning, but Orion's nest was still far from empty.
Glimpsing these tiny hatchlings, bellies full of yolk, as they march toward the sea is an extraordinary sight on its own, but there is a particular sense of urgency for the little ones whose prolific mama chose to deposit them in the shade of a keawe tree at Big Beach.
Honu'ea are not the enormous green guys that bob up beside you when you're snorkeling at Black Rock or Molokini. Honu'ea are smaller—they grow to be up to 270 pounds, whereas the greens round out at 400. Honu'ea have a beak rather than a rounded snout—hence the Anglo name, hawksbill.
Most importantly, honu'ea are endangered under the U.S. Endangered Species Act and most people you ask will say they're critically endangered; greens are not.
Though their plight is severe and stems from the same source, green sea turtles are listed as threatened, which means that their numbers are much higher than those of the honu'ea.
Statewide, according to King and Bernard, there are fewer than 100 nesting female honu'ea. Fewer than ten of these will nest throughout the isles in any given year. Only five or six total dig their nests on Maui's coastline.
"That's critically low," Bernard said, adding that the entire Hawaii hawksbill population is extremely vulnerable. "The greater your numbers, the greater your resilience."
They cite anthropogenic—human—causes for the species' alarmingly low numbers: runoff, traffic, lights that disorient nesting turtles, introduced predators, habitat loss and more.
Hawksbills across the globe were once plundered for their shells, which were made into combs, jewelry and even guitar picks. In Japan, according to the 1999 Jay April documentary Red Turtle Rising, they were seen as a sign of longevity, and thus stuffed and hung on the walls in many homes. In Hawaii their shells were used to make dinnerware, jewelry and medicine, though a kapu (taboo) barred honu'ea meat from being consumed (they dine primarily on poisonous sponges, which makes their meat toxic). The tortoiseshell pattern that may or may not constitute your sunglass frames was inspired by the hawksbill. In 1973 real tortoiseshell was banned worldwide under the Convention on International Trade in Endangered Species (CITES).
Researchers have tracked the route of Orion (the mama turtle) and found she likes to hang out off Oahu but comes to Maui to nest.
It may be illegal to mess with them these days, but they're not exactly bouncing back.
That's why the 140 or so hatchlings here at Big Beach, barely larger than your big toe, need to make it to the ocean.
So far the turnout has been outstanding. The first night saw 48 turtles scamper into the tide. The next night more than 100 came out. Tonight we'll see the stragglers to the shore, if there are any.
The next day King will excavate the nest carefully with her hands for any that didn't make it out, dead or alive. Live hatchlings will be placed in the water after dusk. Eggshells will be counted and unhatched eggs will be sent to a NOAA lab in Honolulu for DNA testing.
My 1 to 2am shift comes and goes without a peep. I've been instructed to shine a red flashlight on the nest every few minutes, but the mound is frozen.
I fall asleep after my shift with few expectations.
At some bleary hour a voice startles me awake.
"There's a turtle!" King says as she passes my tent. "A turtle just hatched!"
It's barely a quarter past five in the morning. Volunteers climb out of sleeping bags and tents and flood the area around the nest. One hatchling moves slowly toward the sea in the moonlight, almost a silhouette at this dark hour. Its tracks look like tire tread from a mountain bike. We inch along behind it, awestruck.
After 20 minutes the turtle is at the edge of the sea. Although its flippers have just had a killer workout, the hatchling takes to the waves effortlessly after the lapping water swallows it whole.
Any number of things could have thrown off the hatchling and its siblings. Had this been a beach up the road they may have gone toward bright lights. They may have gone toward South Kihei Road and gotten smashed, which has happened before with nesting mothers; once in 1993 and once in 1996, thanks to speeding motorists. A feral cat (of which there are many) could have gotten to them. King says that even ghost crabs prey on sea-bound hatchlings, gouging out their eyes in a horrific display King herself has witnessed in the northwest Hawaiian Isles.
Hofmann said her major concern is the long-term impact of development on nesting. While Big Beach is a state park and thus can't be built upon, two proposed developments—Wailea 670 and the expansion of Makena Resort—could increase the volume of beachgoers that may, inadvertently or otherwise, disturb the nests.
"If they both get their way there'd be another city down here," she said.
The proposed development sites may be pretty far mauka of where the turtles nest, but storm runoff has an obvious impact on their ability to successfully hatch and make it to the sea, as does lighting.
Hofmann said that, given how close honu'ea are to extinction, developers should reconsider how they determine appropriateness when choosing a building site.
"The turtles have chosen this as their nesting place," she said.
While there are several well-documented hawksbill nesting sites statewide, there is no bureaucratic mechanism that can designate them as a critical habitat.
Bernard said that the only defense for sites with impending developments so far has been a lighting ordinance that the county adopted in 2007, which she said was watered-down.
"It's not the bill that we hoped for," she said, "but it's a start."
Just after six in the morning the camp gets jostled awake once again. Three more babies have come out, a volunteer says. I hop to my feet. The last ones to emerge on their own are making it to sea in the new daylight, each on a separate trajectory, seemingly unaware of one another but probably very aware of us.
We scare away the looming ghost crabs. We clear the path of debris, as the turtles' tiny flippers hoist them along the final stretch of sand.
It takes one honu'ea a few tries to take to the water; the oncoming surf pushes it off course. The other two swim off almost instantly.
Nobody knows where they're headed. They return to nearshore areas after about five to ten years, but the time in between is known as the lost years. One theory is that they attach themselves to little clumps of seaweed, floating wherever the current takes them. Those ready to nest, of course, eventually make it back to the beach of their birth using some mysterious sense that we don't yet understand. The hope is they'll stick around long enough for us to find out. MTW
For more information on how you can help hawksbills visit wildhawaii.org. To find out more about the role of honu'ea in Hawaiian history and culture check out the award-winning 1999 documentary Red Turtle Rising, directed by Jay April. The film is available for free on the Web at filmmaui.com and through the World Turtle Trust.
|
Lier Psychiatric Hospital (Lier Psykiatriske Sykehus or Lier Asyl in Norwegian) in Norway has a long history as an institution. The sickest people in society were stowed away here and went from being people to being test subjects in the pharmaceutical industry's search for new and better drugs. The massive buildings house the memory of a grim chapter in Norwegian psychiatric history that the authorities would rather forget.
UPDATE: When you have read this post you might be interested in reading my report one year later!
The buildings welcome you
Many of the patients never came out alive, and many died as a result of the reprehensible treatment. It was said that the treatment was carried out voluntarily, but in reality the patients had no self-determination and no opportunity to make their own decisions.
Must be creepy at night
There is little available information about the former activities at Lier Hospital. On this page (in Norwegian) you can read more about the experiments that were carried out at this Norwegian mental hospital in the postwar period from 1945 to 1975. It covers the use of LSD, electroshock, brain research funded by the U.S. Department of Defense and drug research sponsored by major pharmaceutical companies. It is perhaps not surprising that they try to forget this place and the events that took place here.
Chair in room
One of many rooms
Things that were left behind, including a bathtub
Lobotomies were also performed here. That's a procedure that involves driving a needle-like instrument through the eye socket and into the patient's head to cut the connection between the frontal lobes and the rest of the brain. Lobotomy was primarily used to treat schizophrenia, but also as a calming treatment for other disorders. The patients who survived were often quiet, but generally this surgery made the patients worse. Today lobotomy is considered barbaric and it is no longer practiced in Norway.
From a window
Lier Psychiatric Hospital, or Lier Asylum as it was originally called, was built in 1926 and had room for nearly 700 patients at its peak. In 1986, many of the buildings were closed and abandoned, and they still stand empty to this day. Some of the buildings are still in operation for psychiatric patients.
Exterior of the A building
Disinfection bathtub
These photos are from my visit there as a curious photographer. The place has clearly been ravaged by the youths, the homeless and the drug addicts who have infiltrated the buildings during their 23 years of abandonment. On net forums people have written at length about ghost stories and the creepy atmosphere. I was curious how I would experience the place myself, but I found it pretty quiet and peaceful. I went there during the day, so I understand that at night one would have to look far for a more sinister place. The floors were covered with broken glass and other debris.
View through window
A pile of electrical boxes or something
These days, money has been allocated to demolish the buildings; 15 million NOK is the price. Neighbors cheer, but history buffs, photographers and ghost-hunting kids think it's sad. This is the most visited, and just about the only, large urban exploration site in Norway.
I have read and recommend Ingvar Ambjørnsen's first novel, "23-Salen", which is about the year he worked as a nurse at Lier Psychiatric Hospital. The book provides insight into what life was like for patients and nurses in one of the worst wards.
The famous motorized wheelchair
Doorways and peeling paint
Top floor, view to the roof and empty windows
Disused stairs outside
|
Three-dimensional printing is being used to make metal parts for aircraft and space vehicles, as well as for industrial applications. Now NASA is building engine parts with this technique for its next-generation heavy-lift rocket.
The agency says that its Space Launch System (SLS) will deliver new abilities for science and human exploration outside Earth's orbit by carrying the Orion Multi-Purpose Crew Vehicle, plus cargo, equipment, and instruments for science experiments. It will also supply backup transportation to the International Space Station, and it will even go to Mars.
NASA is using 3D printing to build engine parts for its next-generation Space Launch System. Shown here is the first test piece produced on the M2 Cusing Machine at the Marshall Space Flight Center.
(Source: NASA Marshall Space Flight Center/Andy Hardin)
NASA's Marshall Space Flight Center is using a selective laser melting (SLM) process to produce intricate metal parts for the SLS rocket engines with powdered metals and the M2 Cusing machine, built by Concept Laser of Germany. NASA expects to save millions in manufacturing costs and reduce manufacturing time. SLM, a version of selective laser sintering, is known for its ability to create metal parts with complex geometries and precise mechanical properties.
The SLS will weigh 5.5 million pounds, stand 321 feet tall, and provide 8.4 million pounds of thrust at liftoff. Its propulsion system will include liquid hydrogen and liquid oxygen. Its first mission will launch Orion without a crew in 2017; the second will launch Orion with up to four astronauts in 2021. NASA's goal is to use SLM to manufacture parts that will be used on the first mission.
The rocket's development and operations costs will be reduced using tooling and manufacturing technology from programs such as the space shuttle. For example, the J-2X engine, an advanced version of J-2 Saturn engines, will be used as the SLS upper stage engine. Some SLM-produced engine parts will be structurally tested this year and used in J-2X hot-fire tests.
In a NASA video, Andy Hardin, engine integration hardware lead for the Marshall Space Flight Center SLS engines office, discusses the initial testing and building stages:
We do a lot of engineering builds first to make sure we have the process [worked] out. There's always weld problems that you have to deal with, and there's going to be problems with this that we will have to work out, too. But this has the potential to eliminate a lot of those problems, and it will have the potential to reduce the cost by as much as half in some cases on a lot of parts.
Since final parts won't be welded, they are structurally stronger and more reliable, which also makes for a safer vehicle.
Ken Cooper, advanced manufacturing team lead at the Marshall Space Flight Center, says in the video that the technique is especially useful for making very complex shapes that can't be built in other ways, or for simplifying the building of complex shapes. But geometry is not the deciding factor; whether the machine can do it or not is decided by the size of the part.
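Since a powder-bed machine fuses a part one thin layer at a time, part volume and height, rather than geometric complexity, dominate how long a build takes. The short sketch below is only a back-of-the-envelope estimator; the layer thickness, fusion rate, and recoat time are illustrative assumptions, not Concept Laser specifications or NASA figures.

```python
# Rough build-time estimate for a powder-bed laser melting (SLM) part.
# All parameters are illustrative assumptions, not vendor or NASA figures.

def estimate_build_hours(part_volume_cm3, part_height_mm,
                         layer_thickness_mm=0.03,      # assumed 30-micron powder layers
                         fusion_rate_cm3_per_hr=5.0,   # assumed laser melt rate
                         recoat_seconds=8.0):          # assumed powder-spreading time per layer
    """Return an approximate build time in hours."""
    layers = part_height_mm / layer_thickness_mm
    fusing_hours = part_volume_cm3 / fusion_rate_cm3_per_hr
    recoating_hours = layers * recoat_seconds / 3600.0
    return fusing_hours + recoating_hours

# Example: a 400 cm^3 engine component about 120 mm tall
print(f"{estimate_build_hours(400, 120):.1f} hours")   # about 88.9 hours with these assumptions
```

With numbers like these, adding intricacy to a part costs almost nothing, while doubling its volume roughly doubles the laser time, which is consistent with Cooper's point that size, not geometry, is the limiting factor.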
|
OAKLAND OUTDOORS: Oakland County residents can visit the ‘Arctic Circle’ right in Royal Oak
The date of the winter solstice on Dec. 21 draws near.
What better time than now to plan an adventuresome trek into the Arctic — a journey to the cryosphere, a land of ice, snow and frozen sea water. The Arctic is a landscape of mountains, fjords, tundra and beautiful glaciers that spawn crystal-colored icebergs. It is the land of Inuit hunters, polar bears and seals and is rich with mysteries that science is still working to unravel. Those wishing to reach the Arctic must travel north of the Arctic Circle.
And just where is the Arctic Circle?
The climatologists at the National Snow and Ice Data Center define the Arctic Circle as the imaginary line that marks the latitude at which the sun does not set on the day of the summer solstice and fails to rise on the day of the winter solstice, a day that is just around the corner. Arctic researchers describe the circle as the northern limit of tree growth. The circle is also defined as the 10 Degree Celsius Isotherm, the zone at which the average daily summer temperature fails to rise above 50 degrees Fahrenheit.
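For readers who want the arithmetic behind that definition: because the sun-never-sets/never-rises criterion depends only on Earth's axial tilt, the circle sits at 90 degrees minus the obliquity of the ecliptic, roughly 66.6 degrees north, and the 10 degree Celsius threshold is the same 50 degrees Fahrenheit quoted above. A minimal sketch, assuming the commonly cited present-day tilt of about 23.44 degrees:

```python
# Arctic Circle latitude from Earth's axial tilt, plus the 10 C isotherm in Fahrenheit.
AXIAL_TILT_DEG = 23.44                           # approximate present-day obliquity of the ecliptic

arctic_circle_latitude = 90.0 - AXIAL_TILT_DEG   # latitude where the sun just fails to set/rise
isotherm_fahrenheit = 10.0 * 9.0 / 5.0 + 32.0    # convert 10 degrees Celsius

print(f"Arctic Circle: about {arctic_circle_latitude:.2f} degrees N")   # ~66.56 N
print(f"10 C isotherm: {isotherm_fahrenheit:.0f} F")                    # 50 F
```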
Polar bears are hungry in this foreboding landscape where the mercury can plunge quickly to 60 degrees below zero and winds are ferocious.
Perhaps it is time to stop reading my words, bundle up the kids and hike into the Arctic. Why not today! Children will almost certainly see seals just yards away and, if the timing is right, go nose to nose with a mighty polar bear that may swim their way and give a glance that could be interpreted as a one-word question: “Tasty?”
And here is the rest of the story, a special trail tale that takes visitors to a one-of-a-kind place.
I am amazed at how many residents of Oakland County have yet to hike through an acrylic tunnel known as the Polar Passage — a public portal to the watery world beneath the land above the Arctic Circle.
The 70-foot long tunnel is the highlight of a 4.2-acre living exhibit at the Detroit Zoo, the Arctic Ring of Life. And a hike through that exhibit and underwater viewing tunnel brings encounters with three polar bears, a gray seal, two harbor seals, one harp seal and three arctic fox.
A visit to the Arctic Ring of Life is more than a fun hike. It opens our eyes to the life of the Inuit people and introduces visitors to a fragile world in danger from events that can no longer be denied — climate change, global warming and rising sea waters.
Upon entering the park, check a map for the location. It's easy to find and is rich with historical, cultural and natural information about the Inuit, the Arctic people. Before Europeans arrived, they had never even heard the word "Eskimo." The arrival of Europeans in the early 1800s was not good news for the Inuit. The pale-skinned strangers carried foreign diseases, and missionaries followed who sadly enticed the trusting Inuit to give up their own religion and become Christians. The native people were encouraged, and sometimes forced, to abandon their traditional lifestyles and live in the village. The jury may still be out, but some historians have claimed that the Inuit were eager to embrace the new ways to forgo the harsh reality of nomadic life.
The Inuit once existed almost exclusively on meat and fat, with only limited availability of seasonal plants. They hunted for survival and considered it disrespectful to hunt for sport. And as I dug deeper into their history to prepare for my tunnel trek, I discovered that the metal for their early tools was chipped flake by flake from large meteorites and then pounded into implements like harpoon points.
The fascinating relationship between the Inuit, their environment and creatures that dwell above the Arctic Circle is more deeply understood when visitors trek through the highlight of the exhibit, the polar passage tunnel. A polar bear on a tundra hill, built high to afford the bears a wide range of smells, may be sniffing the air for zoo visitors.
The tunnel has acrylic walls four inches thick, is 12 feet wide and eight feet high, and offers great views when a seal or polar bear swims by. About 294,000 gallons of saltwater surround visitors.
Some wonder why the polar bears do not eat the seals. They can’t. A Lexan wall separates the species.
Be sure to take the time to explore the Exploration Station of the Arctic Ring of Life. It contains many of the accoutrements of a working research station complete with telemetry equipment, computers and displays of snowshoes and parkas from the arctic. Portholes provide views of the seals and bears.
Visitors may want to do what I did — enter through the tunnel a second time and then read all the well placed interpretive signs above ground. Stories of the hunters and the hunted await and include myth-busting facts on mass suicides of lemmings that have been alleged to jump off cliffs into the sea. And be sure to let a child put his or her foot in the imprint of the polar bear track on the pathway. And, of course, save time to hike the rest of the zoo. Cold weather is a perfect time to explore minus the summer crowds.
Jonathan Schechter’s column appears on Sundays. Look for his Earth’s Almanac blog at www.earthsalmanac.blogspot.com Twitter: OaklandNature E-mail:[email protected].
For more information about the Detroit Zoo’s hours, exhibits and special events, visit detroitzoo.org. The zoo is located at 8450 West 10 Mile Road in Royal Oak. No additional fee to visit Arctic Ring of Life.
|
Student Health Checklist
Being a college student warrants a quick health check in order for you to remain healthy all school year. By following a few simple strategies, you can “rock on” in school, work and your social life.
Get involved in regular cardio exercise for 30 minutes five times a week, such as walking, swimming, running or any other activity that increases your heart rate.
De-stress with deep breathing and stretching exercises two to three times a week. Spring for a yoga DVD or get involved in a yoga class.
Get plenty of sleep!
This is very important in every aspect of your life!! As a college student, it is recommended that you log six to eight hours per night.
Take time out every half hour to stretch, walk around, or deep breathe if you are working at a computer.
Drink plenty of water.
Dehydration can make you more vulnerable to illness and infections, so it is important that you down plenty of non-alcoholic fluids. And if water isn't your thing, juice, tea, and other beverages will work as well.
Utilize your BFFs.
Having the right friends and someone to talk to and count on is extremely important for your mental health. Seek out groups and activities that will attract new friends who will be supportive of you, and vice versa.
Eat your fruits and veggies.
A good rule of thumb is to make sure that half of your plate is filled with fruits and vegetables, as these foods are bursting with nutrients that help keep infections and diseases at bay.
Fight the flu.
Get a flu shot to avoid being laid up for a week this year with fever and sickness. This vaccine is made available to each residential student and all Gallia County students in mid-October here at URG. As a college student, you are often in close quarters with roomies and classmates, so make sure you get that flu shot!!
Back off the alcohol.
Alcohol has empty calories and is a risk factor for accidents, injuries and regrettable risky behaviors. Once you turn 21, try sticking to the recommended daily limit of no more than two beers or glasses of wine for men and one for women.
Kick the bad habits!
Rub snuff, smoke cigarettes or do other drugs? STOP!! All of these can pose serious health threats, so start kicking those bad habits today. Talk with your healthcare provider for assistance or check out your local health department.
|
Journal Issue: Juvenile Justice Volume 18 Number 2 Fall 2008
The Prevalence of Mental Disorders among Adolescent Offenders
Two kinds of studies address questions about the social consequences of the links between mental disorders and delinquency. One type examines the degree of "overlap" between a community's population of youth with mental disorders and its population of youthful offenders. Knowing this overlap gives some notion of the risk of official delinquency for youth with mental disorders and the degree to which mental disorders of youth contribute to a community's overall delinquency. The second type of study examines the proportion of youth with mental disorders within juvenile justice facilities or programs. These studies provide information with which to formulate policy about treating and managing youth with mental disorders in juvenile justice custody.
It is important to recognize that these two types of research begin with very different populations, even though they both address the relation between mental disorder and delinquency. The first typically focuses on all delinquent youth in the community, while the second examines only delinquent youth placed in juvenile pretrial detention centers when they are arrested or in juvenile correctional facilities when they are adjudicated. This distinction is further complicated, as discussed later, by the fact that not all youth in juvenile justice facilities are necessarily delinquent.
Epidemiologic Studies of Mental Disorder and Delinquency
Some studies have identified a significant overlap between the populations of youth served by community mental health agencies and youth in contact with the community's juvenile court.23 These studies are few in number, but they have found that the risk of juvenile court involvement among a community's young mental health clients is substantial. For example, a study in one city found that adolescents in contact with the community's mental health system during a nine-month period were two to three times more likely to have a referral to the juvenile justice system during that period than were youth in the city's general population.24 Youth in contact with a mental health system's services, however, are not the sum of a community's youth with mental health needs because many receive no services. The results of the study above probably represent the proportion of more seriously disturbed youth who have juvenile justice contact. Even so, merely knowing that youth "have contact" with the juvenile justice system tells us little about their offenses or even whether they offended at all.
Very few studies have used samples that make it possible to identify both the proportion of delinquent youth in a community who have mental disorders and the proportion of youth with mental disorders who have been delinquent. The few that have, however, are large studies with careful designs.
One examined a community population (drawn from several cities) that identified youth with persistent serious delinquency (repeat offending) and youth with persistent mental health problems (manifested multiple times).25 About 30 percent of youth with persistent mental health problems were persistently delinquent. But among all persistently delinquent youth, only about 15 percent had persistent mental health problems.
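Those two figures can look contradictory at first glance, but they simply divide the same overlap by different denominators. The toy contingency numbers below are invented purely for illustration (they are not the study's data); they show how 30 percent in one direction and 15 percent in the other can both be true at once.

```python
# Hypothetical counts showing how the two overlap percentages use different denominators.
# These numbers are invented for illustration; they are not taken from the study.
persistent_mh_problems = 1_000            # youth with persistent mental health problems
persistently_delinquent = 2_000           # youth with persistent serious delinquency
overlap = 300                             # youth in both groups

share_of_mh_group = overlap / persistent_mh_problems           # 0.30
share_of_delinquent_group = overlap / persistently_delinquent  # 0.15

print(f"{share_of_mh_group:.0%} of the mental health group is persistently delinquent")
print(f"{share_of_delinquent_group:.0%} of the persistently delinquent group has persistent mental health problems")
```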
Another recent study examined the relation between mental disorders during adolescence and criminal behavior when those youth became adults.26 Delinquencies and adult criminal arrests were recorded for a sample of youth in a large geographic region aged nine through twenty-one. The youth were also assessed for mental disorders three times between the ages of nine and sixteen. A diagnosis at any one of these three points identified the youth as having a mental disorder "sometime during childhood or adolescence."
In this study, youth who were arrested between the ages of sixteen and twenty-one included a considerably greater share of youth who had had mental disorders in adolescence than those who were not arrested—for males, 51 percent as against 33 percent. This finding does not mean that 51 percent of the arrested group had mental disorders at the time of their arrest, but that they had had a mental disorder sometime in adolescence. It also does not mean that the majority of youth who had mental disorders in adolescence were arrested in adulthood. A different statistical procedure in this study, called "population attributable risk," addressed that question. It showed that the risk of adult arrest among individuals who had mental disorders at some time during adolescence was about 21 percent for women and 15 percent for men.
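Population attributable risk is a standard epidemiological quantity: the share of an outcome in the whole population that is statistically attributable to an exposure, here having had a mental disorder sometime in adolescence. One common formulation is PAR = Pe(RR - 1) / [1 + Pe(RR - 1)], where Pe is the prevalence of the exposure and RR is the relative risk of the outcome among the exposed. The sketch below implements that textbook formula with placeholder inputs; it is not a reconstruction of the study's 21 percent and 15 percent estimates, whose underlying inputs are not reported here.

```python
# Textbook population attributable risk (Levin's attributable fraction).
# Inputs are placeholders for illustration, not the study's actual parameters.

def population_attributable_risk(exposure_prevalence, relative_risk):
    """PAR = Pe*(RR - 1) / (1 + Pe*(RR - 1))."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Example: suppose 35% of adolescents had a disorder at some point in adolescence
# and their relative risk of adult arrest were 1.8; both values are hypothetical.
print(f"Attributable fraction: {population_attributable_risk(0.35, 1.8):.1%}")   # ~21.9%
```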
These few studies suggest the following conclusions, all of which need further confirmation. First, consistent with the clinical research reviewed earlier, youth who have mental disorders are at greater risk of engaging in offenses than youth without mental disorders. It is possible that treating their disorders would reduce that risk. But most youth with mental disorders do not engage in offenses that involve them in juvenile or criminal justice systems. Second, youth with mental disorders represent only a minority of all youth who engage in delinquent behavior, although the share is somewhat disproportionately greater than their prevalence in the general community. If those youth received treatment that reduced their delinquency, it is possible that overall rates of delinquency in the community would fall somewhat, but the majority of delinquencies are not related to mental disorders.
Third, rates of delinquency are higher among youth with certain types of emotional disorders— for example, depression or anxiety co-morbid with substance use disorders— and among youth with chronic and multiple disorders (seriously emotionally disturbed youth). Finally, a few studies have suggested that youth with mental disorders make up a somewhat greater proportion (although still a minority) of youth who were arrested for more serious and violent delinquencies or crimes.27
Mental Disorder in Juvenile Justice Settings
Research on the subset of delinquent youth who enter juvenile pretrial detention centers and correctional programs cannot tell us the relation between mental disorder and delinquency, because most youth who engage in delinquencies are not placed in secure juvenile justice programs. Such studies, however, are extremely important for public policy, because they identify the scope and nature of mental disorder among youth for whom the juvenile justice system has custodial responsibility.
Until recently the precise prevalence of mental disorders among youth in juvenile justice custody was unknown. Estimates varied widely from study to study, largely because of inadequate research methods or differences from one study site to another.28 In the past decade, however, well-designed studies executed in a variety of sites have provided a reliable and consistent picture. Those studies have found that among youth in various types of juvenile justice settings—for example, pretrial detention centers where youth are taken soon after arrest—about one-half to two-thirds meet criteria for one or more mental disorders.29 The prevalence of mental disorders is much higher in juvenile justice settings than it is among youth in the U.S. general population, which is about 15 to 25 percent.30
Across these studies, the rate is higher for girls than for boys.31 The overall prevalence rate does not vary greatly between younger and older adolescents or for youth with various ethnic and racial characteristics, although age and race differences are sometimes found for specific types of disorders and symptoms.32 As described in the earlier clinical review, about two-thirds of youth in juvenile justice custody who meet criteria for a mental disorder (that is, about one-third to one-half of youth in custody) meet criteria for more than one disorder.33
I will focus later on the implications of these statistics for the juvenile justice system's best response to mental disorders among youth in its custody. The high prevalence of mental disorder in juvenile justice facilities does not necessarily define the need for treatment. Some youth who meet criteria for mental disorders are experiencing their disorders temporarily and need only emergency services, while a smaller share—about one in ten—represents a core group of youth with chronic mental illness who can be expected to continue to need clinical services into adulthood.34 Some are functioning fairly well despite their symptoms, while others are barely able to function at all. And some have mental health needs, such as learning disabilities, that were not even included in the recent studies of prevalence among youth in juvenile justice settings.
Reasons for the High Prevalence of Mental Disorders in Juvenile Justice Programs
Why are mental disorders so prevalent among adolescent offenders in juvenile justice settings? Three perspectives—clinical, socio-legal, and inter-systemic—help to explain. They are not competing explanations. All probably play a role, and no evidence suggests that one is more important than the others.
From a clinical perspective, it is likely that the same symptoms of mental disorder that increase the risk of aggression also increase the likelihood that youth will be placed in secure juvenile justice facilities for any significant period of time. When police officers arrest youth, usually those youth are not placed in pretrial detention. Nor is detention reserved for the most serious offenders—in fact, youth arrested for very violent offenses typically do not make up the majority of youth in detention. Those youth who are detained more than a few hours are those who have been more unruly or unmanageable at the time of their arrest, which satisfies detention criteria regarding a risk that they will be endangered, or might endanger others, if not detained.
Youth with mental disorders frequently have symptoms involving impulsiveness, anger, and cognitive confusion that can make them less manageable and a greater risk to themselves or others, especially under the stress associated with their offense and arrest. Thus, among youth who are detained, a significant share is likely to have mental disorders that create unmanageable behavior—more so than for youth without mental disorders and more so than their peers with less severe mental disorders. This likelihood makes it no surprise that youth with mental disorders contribute disproportionately to detention populations.
From a socio-legal perspective, recent changes in laws applied to youths' delinquencies may have increased the likelihood that youth with mental disorders will enter the juvenile justice system. Before the 1990s, law enforcement officers, juvenile probation departments, prosecutors, and judges typically had some discretion regarding whether they would arrest or prosecute youth with mental disorders when they engaged in illegal behaviors, especially if those behaviors involved minor offenses committed by younger adolescents without offense histories. But a wave of serious juvenile violence during the late 1980s caused virtually all states to revise their juvenile justice statutes during the 1990s to rein in this discretion.35 Under the new laws, certain charges or offenses required legal responses based on the nature of the offense alone, not the characteristics or needs of the individual youth. Penalties more often involved custody in secure juvenile facilities, thus reducing the likelihood that youth could receive mental health services in the community after their adjudication. An unintended consequence of these changes in law, therefore, was an increase in the share of youth with mental disorders coming into the system rather than being diverted on the basis of the juvenile court's discretion.
A final, inter-systemic, explanation involves the dynamic relation between systems that serve youth. During the 1990s, most states saw a reduction in the availability of public mental health services for children, especially inpatient services.36 It is possible that less adequate treatment contributed to increased delinquencies among youth with mental disorders. But it is certain that many communities began using the juvenile justice system to try to fill the gap caused by decreased availability of mental health services.
This phenomenon was documented in media articles, the observations of juvenile justice personnel, and government reports beginning in the mid-1990s and continuing into the early 2000s.37 Some parents of children with serious mental disorders began urging police to arrest their children, knowing that courts could "order" mental health services that were becoming nearly impossible for parents to get on their own. Soon the local juvenile pretrial detention center was becoming the community's de facto mental health center that provided emergency mental health services or simply acted as a holding place for seriously disturbed youth who had nowhere to go.
In summary, these three factors—clinical, socio-legal, and inter-systemic—may together produce a prevalence of mental disorder in juvenile justice settings that does not represent the actual relation between adolescent mental disorder and delinquency. That high prevalence does, however, represent a demand on the juvenile justice system to respond to youth in custody who have mental disorders, and the demand is almost overwhelming. Some of those youth are in secure custody because they have committed serious crimes, others because the legal system has widened the door to juvenile justice processing, and many because their symptoms make them difficult to handle and they have no place else to go.
The problem requires a solution, and the multiple causes of the problem as well as the various types of youth involved suggest that the solution will be complex. What have clinicians and researchers learned that can help us determine the appropriate response?38
|
VLADNIC, Romania (AP) -- Their origins are a mystery. The most widely accepted theory is that the Csango people of Romania's remote eastern Carpathian mountains began settling around the 13th Century, dispatched by Hungarian rulers to defend the kingdom's easternmost frontier.
Of Roman Catholic faith and speaking a medieval Hungarian dialect, they still live in relative isolation, harvesting vegetables and nuts, some of which they exchange for oil, rice and other necessities brought in by village-to-village Romanian salesmen.
The Csangos fear that their culture and language are eroding. Since Romania joined the European Union in 2007, an entire generation of adults has been lured away by the prospect of jobs in countries like Italy and Spain.
In some homes, grandparents are looking after as many as 14 grandchildren.
A group of doctors from a Hungarian Catholic charity recently visited Vladnic and the nearby Csango villages of Ciucan and Tuta to give eye and dental tests.
For 14-year-old Tuta resident Annamaria Laura Lupo, the eye exam was long overdue. She said she's been having headaches and it was affecting her studies at school. Now, she and her mother are awaiting glasses that will be sent from Budapest.
Dentist Renata Jaky said there was a near total lack of oral hygiene. "When I ask the kids how many times they brush their teeth, the most usual answer is 'never,'" she said.
Children are key to the villages' survival and many start work before sunrise.
"I get up at five in the morning and start the day by milking the goat and taking the animals out to pasture. Only after that do I go to school," said Gyorgy Radavoi, a 14-year-old boy. "My mother died and my father works abroad all year. I'm happy when my granddad isn't drunk, we have food to eat and it's warm at home."
|
Publisher Council on Foreign Relations
Release Date Last Updated: May 21, 2013
Scope of the Challenge
Oceans are the source of life on earth. They shape the climate, feed the world, and cleanse the air we breathe. They are vital to our economic well being, ferrying roughly 90 percent of global commerce, housing submarine cables, and providing one-third of traditional hydrocarbon resources (as well as new forms of energy such as wave, wind, and tidal power). But the oceans are increasingly threatened by a dizzying array of dangers, from piracy to climate change. To be good stewards of the oceans, nations around the world need to embrace more effective multilateral governance in the economic, security, and environmental realms.
The world's seas have always been farmed from top to bottom. New technologies, however, are making old practices unsustainable. When commercial trawlers scrape the sea floor, they bulldoze entire ecosystems. Commercial ships keep to the surface but produce carbon-based emissions. And recent developments like offshore drilling and deep seabed mining are helping humans extract resources from unprecedented depths, albeit with questionable environmental impact. And as new transit routes open in the melting Arctic, this once-forgotten pole is emerging as a promising frontier for entrepreneurial businesses and governments.
But oceans are more than just sources of profit—they also serve as settings for transnational crime. Piracy, drug smuggling, and illegal immigration all occur in waters around the world. Even the most sophisticated ports struggle to screen cargo, containers, and crews without creating regulatory friction or choking legitimate commerce. In recent history, the United States has policed the global commons, but growing Indian and Chinese blue-water navies raise new questions about how an established security guarantor should accommodate rising—and increasingly assertive—naval powers.
And the oceans themselves are in danger of environmental catastrophe. They have become the world's garbage dump—if you travel to the heart of the Pacific Ocean, you'll find the North Pacific Gyre, where particles of plastic outweigh plankton six to one. Eighty percent of the world's fish stocks are depleted or on the verge of extinction, and when carbon dioxide is released into the atmosphere, much of it is absorbed by the world's oceans. The water, in response, warms and acidifies, destroying habitats like wetlands and coral reefs. Glacial melting in the polar regions raises global sea levels, which threatens not only marine ecosystems but also humans who live on or near a coast. Meanwhile, port-based megacities dump pollution in the ocean, exacerbating the degradation of the marine environment and the effects of climate change.
Threats to the ocean are inherently transnational, touching the shores of every part of the world. So far, the most comprehensive attempt to govern international waters produced the United Nations Convention on the Law of the Sea (UNCLOS). But U.S. refusal to join the convention, despite widespread bipartisan support, continues to limit its strength, creating a leadership vacuum in the maritime regime. Other states that have joined the treaty often ignore its guidelines or fail to coordinate policies across sovereign jurisdictions. Even if it were perfectly implemented, UNCLOS is now thirty years old and increasingly outdated.
Important initiatives—such as local fishery arrangements and the United Nations Environment Program Regional Seas Program—form a disjointed landscape that lacks legally-binding instruments to legitimize or enforce their work. The recent UN Conference on Sustainable Development ("Rio+20") in Rio de Janeiro, Brazil, convened over one hundred heads of state to assess progress and outline goals for a more sustainable "blue-green economy." However, the opportunity to set actionable targets to improve oceans security and biodiversity produced few concrete outcomes. As threats to the oceans become more pressing, nations around the world need to rally to create and implement an updated form of oceans governance.
Oceans Governance: Strengths and Weaknesses
Overall assessment: A fragmented system
In 1982, the United Nations Convention on the Law of the Sea (UNCLOS) established the fundamental legal principles for ocean governance. This convention, arguably the largest and most complex treaty ever negotiated, entered into force in 1994. Enshrined as a widely accepted corpus of international common law, UNCLOS clearly enumerates the rights, responsibilities, and jurisdictions of states in their use and management of the world's oceans. The treaty defines "exclusive economic zones" (EEZs), which is the coastal water and seabed—extending two hundred nautical miles from shore—over which a state has special rights over the use of marine resources; establishes the limits of a country's "territorial sea," or the sovereign territory of a state that extends twelve nautical miles from shore; and clarifies rules for transit through "international straits." It also addresses—with varying degrees of effectiveness—resource division, maritime traffic, and pollution regulation, as well as serves as the principal forum for dispute resolution on ocean-related issues. To date, 162 countries and the European Union have ratified UNCLOS.
UNCLOS is a remarkable achievement, but its resulting oceans governance regime suffers several serious limitations. First, the world's leading naval power, the United States, is not party to the convention, which presents obvious challenges to its effectiveness—as well as undermines U.S. sovereignty, national interests, and ability to exercise leadership over resource management and dispute resolution. Despite the myriad military, economic, and political benefits offered by UNCLOS, a small but vocal minority in the United States continues to block congressional ratification.
Second, UNCLOS is now thirty years old and, as a result, does not adequately address a number of emerging and increasingly important international issues, such as fishing on the high seas—a classic case of the tragedy of the commons—widespread maritime pollution, and transnational crime committed at sea.
Third, both UNCLOS and subsequent multilateral measures have weak surveillance, capacity-building, and enforcement mechanisms. Although various UN bodies support the instruments created by UNCLOS, they have no direct role in their implementation. Individual states are responsible for ensuring that the convention's rules are enforced, which presents obvious challenges in areas of overlapping or contested sovereignty, or effectively stateless parts of the world. The UN General Assembly plays a role in advancing the oceans agenda at the international level, but its recommendations are weak and further constrained by its lack of enforcement capability.
Organizations that operate in conjunction with UNCLOS—such as the International Maritime Organization (IMO), the International Tribunal on the Law of the Sea (ITLOS), and the International Seabed Authority (ISA)—play an important role to protect the oceans and strengthen oceans governance. The IMO has helped reduce ship pollution to historically low levels, although it can be slow to enact new policy on issues such as invasive species, which are dispersed around the world in ballast water. Furthermore, ITLOS only functions if member states are willing to submit their differences to its judgment, while the ISA labors in relative obscurity and operates under intense pressure from massive commercial entities.
Fourth, coastal states struggle to craft domestic policies that incorporate the many interconnected challenges faced by oceans, from transnational drug smuggling to protecting ravaged fish stocks to establishing proper regulatory measures for offshore oil and gas drilling. UNCLOS forms a solid platform on which to build additional policy architecture, but requires coastal states to first make comprehensive oceans strategy a priority—a goal that has remained elusive thus far.
Fifth, the system is horizontally fragmented and fails to harmonize domestic, regional, and international policies. Domestically, local, state, and federal maritime actors rarely coordinate their agendas and priorities. Among the handful of countries and regional organizations that have comprehensive ocean policies—including Australia, Canada, New Zealand, Japan, the European Union, and most recently the United States—few synchronize their activities with other countries. The international community, however, is attempting to organize the cluttered oceans governance landscape. The UN Environmental Programme Regional Seas Program works to promote interstate cooperation for marine and coastal management, albeit with varying degrees of success and formal codification. Likewise, in 2007 the European Union instituted a regional Integrated Maritime Policy (IMP) that addresses a range of environmental, social, and economic issues related to oceans, as well as promotes surveillance and information sharing. The IMP also works with neighboring partners to create an integrated oceans policy in places such as the Arctic, the Baltic, and the Mediterranean.
Lastly, there is no global evaluation framework to assess progress. No single institution is charged with monitoring and collecting national, regional, and global data on the full range of oceans-related issues, particularly on cross-cutting efforts. Periodic data collecting does take place in specific sectors, such as biodiversity conservation, fisheries issues, and marine pollution, but critical gaps remain. The Global Ocean Observing System is a promising portal for tracking marine and ocean developments, but it is significantly underfunded. Without concrete and reliable data, it is difficult to craft effective policies that address and mitigate emerging threats.
Despite efforts, oceans continue to deteriorate and a global leadership vacuum persists. Much work remains to modernize existing institutions and conventions to respond effectively to emerging threats, as well as to coordinate national actions within and across regions. The June 2012 United Nations Conference on Sustainable Development, also known as Rio+20, identified oceans (or the "blue economy") as one of the seven priority areas for sustainable development. Although experts and activists hoped for a new agreement to strengthen the sustainable management and protection of oceans and address modern maritime challenges such as conflicting sovereignty claims, international trade, and access to resources, Rio+20 produced few concrete results.
Maintaining freedom of the seas: Guaranteed by U.S. power, increasingly contested by emerging states
The United States polices every ocean in the world. The U.S. Navy is unmatched in its ability to provide strategic stability on, under, and above the world's waters. With almost three hundred active ships and nearly four thousand aircraft, its battle fleet tonnage is greater than that of the next thirteen largest navies combined. Despite recently proposed budget cuts to aircraft carriers, U.S. naval power continues to reign supreme.
The United States leverages its naval capabilities to ensure peace, stability, and freedom of access. As Great Britain ensured a Pax Britannica in the nineteenth century, the United States presides over relatively tranquil seas where global commerce is allowed to thrive. In 2007, the U.S. Navy released a strategy report that called for "cooperative relationships with more international partners" to promote "greater collective security, stability, and trust."
The United States pursues this strategy because it has not faced a credible competitor since the end of the Cold War. And, thus far, emerging powers have largely supported the U.S. armada to ensure that the oceans remain open to commerce. However, emerging powers with blue-water aspirations raise questions about how U.S. naval hegemony will accommodate new and assertive fleets in the coming decades. China, for instance, has been steadily building up its naval capabilities over the past decade as part of its "far sea defense" strategy. It unveiled its first aircraft carrier in 2010, and is investing heavily in submarines outfitted with ballistic missiles. At the same time, India has scaled up its military budget by 64 percent since 2001, and plans to spend nearly $45 billion over the next twenty years on its navy.
Tensions among rising powers and their neighbors could also prove problematic. For example, a two-month standoff between China and the Philippines over a disputed region of the South China Sea ended with both parties committing to a "peaceful resolution." China, Taiwan, Vietnam, Malaysia, Brunei, and the Philippines have competing territorial and jurisdictional claims to the South China Sea, particularly over rights to exploit its potentially vast oil and gas reserves. Control over strategic shipping lanes and freedom of navigation are also increasingly contested, especially between the United States and China.
Combating illicit trafficking: Porous, patchy enforcement
In addition to serving as a highway for legal commerce, oceans facilitate the trafficking of drugs, weapons, and humans, which is often masked by the flow of licit goods. Individual states are responsible for guarding their own coastlines, but often lack the will or capacity to do so. Developing countries, in particular, struggle to coordinate across jurisdictions and interdict illicit cargo. But developed states also face border security challenges. Despite its commitment to interdiction, the United States seizes less than 20 percent of the drugs that enter the country by maritime transport.
The United Nations attempts to combat the trafficking of drugs, weapons, and humans at sea. Through the Container Control Program, the UN Office on Drugs and Crime (UNODC) assists domestic law enforcement in five developing countries in establishing effective container controls to prevent maritime drug smuggling. The UNODC also oversees UN activity on human trafficking, guided by two protocols to the UN Convention on Transnational Organized Crime. Although UN activity provides important groundwork for preventing illicit maritime trafficking, it lacks monitoring and enforcement mechanisms and thus has a limited impact on the flow of illegal cargo into international ports. Greater political will, state capacity, and multilateral coordination will be required to curb illicit maritime trafficking.
New ad hoc multilateral arrangements are a promising model for antitrafficking initiatives. The International Ship and Port Facility Security Code, for instance, provides a uniform set of measures to enhance the security of ships and ports. The code helps member states control their ports and monitor both the people and cargo that travel through them. In addition, the U.S.-led Proliferation Security Initiative facilitates international cooperation to interdict ships on the high seas that may be carrying illicit weapons of mass destruction, ballistic missiles, and related technology. Finally, the Container Security Initiative (CSI), also spearheaded by the United States, attempts to prescreen all containers destined for U.S. ports and identify high-risk cargo (for more information, see section on commercial shipping).
One way to combat illicit trafficking is through enhanced regional arrangements, such as the Paris Memorandum of Understanding on Port State Control. This agreement provides a model for an effective regional inspections regime, examining at least 25 percent of ships that enter members' ports for violations of conventions on maritime safety. Vessels that violate conventions can be detained and repeat offenders can be banned from the memorandum's area. Although the agreement does not permit searching for illegal cargo, it does show how a regional inspections regime could be effective at stemming illegal trafficking.
Securing commercial shipping: Global supply chains at risk
Global shipping is incredibly lucrative, but its sheer scope and breadth present an array of security and safety challenges. The collective fleet consists of approximately 50,000 ships registered in more than 150 nations. With more than one million employees, this armada transports over eight billion tons of goods per year—roughly 90 percent of global trade. And the melting Arctic is opening previously impassable trade routes; in 2009, two German merchant vessels successfully traversed the Northeast Passage for the first time in recent history. But despite impressive innovations in the shipping industry, maritime accidents and attacks on ships still occur frequently, resulting in the loss of billions of dollars of cargo. Ensuring the safety and security of the global shipping fleet is essential to the stability of the world economy.
Internationally, the International Maritime Organization (IMO) provides security guidelines for ships through the Convention on the Safety of Life at Sea, which governs everything from construction to the number of fire extinguishers on board. The IMO also aims to prevent maritime accidents through international standards for navigation and navigation equipment, including satellite communications and locating devices. Although compliance with these conventions has been uneven, regional initiatives such as the Paris Memorandum of Understanding have helped ensure the safety of international shipping.
In addition, numerous IMO conventions govern the safety of container shipping, including the International Convention on Safe Containers, which creates uniform regulations for shipping containers, and the International Convention on Load Lines, which determines the volume of containers a ship can safely hold. However, these conventions do not provide comprehensive security solutions for maritime containers, and illegal cargo could be slipped into shipping containers during transit. Since 1992, the IMO has tried to prevent attacks on commercial shipping through the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation, which provides a legal framework for interdicting, detaining, and prosecuting terrorists, pirates, and other criminals on the high seas.
In reality, most enforcement efforts since the 9/11 attacks have focused on securing ports to prevent ships from being used as weapons, rather than on preventing attacks on the ships themselves. Reflecting this imperative, the IMO, with U.S. leadership, implemented the International Ship and Port Facility Security Code (ISPS) in 2004. This code helped set international standards for ship security, requiring ships to have security plans and officers. However, as with port security, the code is not obligatory and no clear process to audit or certify ISPS compliance has been established. Overall, a comprehensive regime for overseeing the safety of international shipping has not been created.
The United States attempts to address this vulnerability through the Container Security Initiative (CSI), which aims to prescreen all containers destined for the United States and to isolate those that pose a high security risk before they are in transit. The initiative, which operates in fifty-eight foreign ports, covers more than 86 percent of container cargo en route to the United States. Several international partners and organizations, including the European Union, the Group of Eight, and the World Customs Organization, have expressed interest in modeling their own security measures on the CSI. Despite these efforts, experts estimate that only 2 percent of containers destined for U.S. ports are actually inspected.
Confronting piracy: Resurgent scourge, collective response
After the number of attacks reached a record high in 2011, incidents of piracy dropped 28 percent in the first three months of 2012. Overall, the number of worldwide attacks decreased from 142 to 102, primarily due to international mobilization and enhanced naval patrols off the coast of Somalia. However, attacks intensified near Nigeria and Indonesia as pirates shifted routes in response to increased policing, raising fresh concerns over the shifting and expanding threat of piracy. In addition to the human toll, piracy has significant economic ramifications. According to a report by the nonprofit organization Oceans Beyond Piracy, Somali piracy cost the global economy nearly $7 billion in 2011. Sustained international coordination and cooperation are essential to preventing and prosecuting piracy.
Recognizing this imperative, countries from around the world have shown unprecedented cooperation in combating piracy, particularly near the Gulf of Aden. In August 2009, the North Atlantic Treaty Organization commenced Operation Ocean Shield in the Horn of Africa, where piracy increased close to 200 percent between 2007 and 2009. This effort built upon Operation Allied Protector and consisted of two standing maritime groups with contributions from allied nations. Although the efforts concentrate on protecting ships passing through the Gulf of Aden, they have also renewed focus on helping countries, specifically Somalia, prevent piracy and secure their ports. Meanwhile, the United States helped establish Combined Task Force 151 to coordinate the various maritime patrols in East Africa. Other countries, including Russia, India, China, Saudi Arabia, Malaysia, and South Korea, have also sent naval vessels to the region.
At the same time, regional organizations have stepped up antipiracy efforts. The Regional Cooperation Agreement on Combating Piracy and Armed Robbery against Ships in Asia was the first such initiative, and it has been largely successful in facilitating information sharing, cooperation between governments, and interdiction efforts. In May 2012, the European Union naval force launched its first air strike against Somali pirates' land bases, the first attack of its kind by outside actors.
Like individual countries, international institutions have condemned piracy and legitimized the use of force against pirates. In June 2008, the UN Security Council unanimously passed Resolution 1816, encouraging greater cooperation in deterring piracy and asking countries to provide assistance to Somalia to help ensure coastal security. This was followed by Resolution 1846, which allowed states to use "all necessary means" to fight piracy off the coast of Somalia. In Resolution 1851, the UN Security Council legitimized the use of force on land as well as at sea to the same end. Outside the UN, watchdogs such as the International Maritime Bureau, which collects information on pirate attacks and provides reports on the safety of shipping routes, have proven successful in increasing awareness, disseminating information, and facilitating antipiracy cooperation.
However, such cooperative efforts face several legal challenges. The United States has not ratified the UN Convention on the Law of the Sea (UNCLOS), which governs crimes, including piracy, in international waters. More broadly, the international legal regime continues to rely on individual countries to prosecute pirates, and governments have been reluctant to take on this burden. Accordingly, many pirates are apprehended, only to be quickly released. In addition, many large commercial vessels rely on private armed guards to prevent pirate attacks, but the legal foundations governing such a force are shaky at best.
National governments have redoubled efforts to bring pirates to justice as well. In 2010, the United States held its first piracy trial since the Civil War, soon followed by Germany's first such trial in over four hundred years. Other agreements have been established to try pirates in nearby countries like Kenya, such as the UNODC Trust Fund to Support the Initiatives of States to Counter Piracy off the Coast of Somalia, established in January 2010. Under the mandate of the Contact Group on Piracy off the Coast of Somalia, the fund aims to defray the costs borne by countries like Kenya, the Seychelles, and Somalia in prosecuting pirates, as well as to increase awareness within Somali society of the risks associated with piracy and criminal activity. Future efforts to combat piracy should continue to focus on enhancing regional cooperation and agreements, strengthening the international and domestic legal instruments necessary to prosecute pirates, and addressing the root causes of piracy.
Reducing marine pollution and climate change: Mixed progress
Pollution has degraded environments and ravaged biodiversity in every ocean. Much contamination stems from land-based pollutants, particularly along heavily developed coastal areas. The UN Environment Program (UNEP) Regional Seas Program has sponsored several initiatives to control pollution, modeled on a relatively successful program in the Mediterranean Sea. In 1995, states established the Global Program of Action for the Protection of the Marine Environment from Land-Based Activities, which identifies sources of land-based pollution and helps states establish priorities for action. It has been successful in raising awareness about land-based pollution and offering technical assistance to regional implementing bodies, which are so often starved for resources. More recently, 193 UN member states approved the Nagoya Protocol on biodiversity, which aims to halve the marine extinction rate by 2020 and extend protection to 10 percent of the world's oceans.
Shipping vessels are also a major source of marine pollution. Shipping is the most environmentally friendly way to transport bulk cargoes, but regulating maritime pollution remains complicated because of its inherently transnational nature. Shipping is generally governed by the International Maritime Organization (IMO), which regulates maritime pollution through the International Convention for the Prevention of Pollution from Ships (MARPOL). States are responsible for implementing and enforcing MARPOL among their own fleets to curb the most pernicious forms of maritime pollution, including oil spills, air pollutants such as sulfur oxides (SOx) and nitrogen oxides (NOx), and greenhouse gas emissions. Port cities bear the brunt of air pollution, which devastates local air quality because most ships burn bunker fuel, the dirtiest form of crude oil. The IMO's Marine Environmental Protection Committee has taken important steps to reduce SOx and NOx emissions by amending the MARPOL guidelines to reduce particulate matter from ships. Despite such efforts, a 2010 study from the Organisation for Economic Co-operation and Development (OECD) found that international shipping still accounts for nearly 3 percent of all greenhouse gas emissions.
The IMO has achieved noteworthy success in reducing oil spilled into the marine environment. Despite a global shipping boom, oil spills are at an all-time low. The achievements of the IMO have been further strengthened by commitments by the Group of Eight to cooperate on oil pollution through an action plan that specifically targets pollution prevention for tankers. The IMO should strive to replicate this success in its efforts to reduce shipping emissions.
Climate change is also exacerbating environmental damage. In June 2009, global oceans reached their highest recorded average temperature: 17 degrees Celsius. As the world warms, oceans absorb increased levels of carbon dioxide, which acidifies the water and destroys wetlands, mangroves, and coral reefs—ecosystems that support millions of species of plants and animals. According to recent studies, ocean acidity could increase by more than 150 percent by 2050 if counteracting measures are not taken immediately. Moreover, melting ice raises sea levels, eroding beaches, flooding communities, and increasing the salinity of freshwater bodies. The tiny island nation of the Maldives, the lowest-lying country in the world, could be completely flooded if sea levels continue to rise at the same rate.
Individual states are responsible for managing changes in their own marine climates, but multilateral efforts to mitigate the effect of climate change on the oceans have picked up pace. In particular, the UNEP Regional Seas Program encourages countries sharing common bodies of water to coordinate and implement sound environmental policies, and promotes a regional approach to address climate change.
Sustainable fisheries policies on the high seas: An ecological disaster
States have the legal right to regulate fishing in their exclusive economic zones (EEZs), which extend two hundred nautical miles from shore—and sometimes beyond, in the case of extended continental shelves. But outside the EEZs are the high seas, which do not fall under any one country's jurisdiction. Freedom of the high seas is critical to the free flow of global commerce, but spells disaster for international fisheries in a textbook case of the tragedy of the commons. For years, large-scale fishing vessels harvested fish as fast as possible with little regard for the environmental costs, destroying 90 percent of the ocean's biomass in less than a century. Overall, fisheries suffer from two sets of challenges: ineffective enforcement capacity and lack of market-based governance solutions to remedy perverse incentives to overfish.
Although there are numerous international and multilateral mechanisms for fisheries management, the system is marred by critical gaps and weaknesses exploited by illegal fishing vessels. Articles 117 and 118 of the UN Convention on the Law of the Sea (UNCLOS) enumerate the specific fisheries responsibilities of state parties, placing the onus on national governments to form policies and regional agreements that ensure responsible management and conservation of fish stocks in their respective areas. UNCLOS was further strengthened by the UN Fish Stocks Agreement (FSA), which called for a precautionary approach toward highly migratory and straddling fish stocks that move freely in and out of the high seas. Seventy-eight countries have joined the FSA thus far, and a review conference in May 2010 was hailed as a success due to the passage of Port State Measures (PSMs) to combat illegal, unreported, and unregulated (IUU) fishing. Yet fish stocks have continued to stagnate or decline to dangerously low levels, and the PSMs have largely failed to prevent IUU operations.
Regional fishery bodies (RFBs) are charged with implementation and monitoring. The RFBs provide guidelines and advice on a variety of issues related to fishing, including total allowable catch, by-catch, vessel monitoring systems, areas or seasons closed to fishing, and the recording and reporting of fishery statistics. However, only a portion of these bodies have a mandate to manage and enforce their recommendations, and some RFBs allow members to unilaterally dismiss unfavorable decisions. Additionally, RFBs are not comprehensive in their membership and, for the most part, their rules do not apply to vessels belonging to states outside the body.
Even when regional bodies make a binding decision on a high-seas case, implementation hinges on state will and capacity. In 2003, the UN General Assembly established a fund to assist developing countries with their obligations to implement the Fish Stocks Agreement through RFBs. The overall value of the fund remains small, however, and countries' compliance is often constrained by resource scarcity. This results in spotty enforcement, which allows vessels to violate international standards with impunity, particularly off the coasts of weak states. Migratory species like bluefin tuna are especially vulnerable because they are not confined by jurisdictional boundaries and have high commercial value.
Some of the RFBs with management oversight, such as the Commission for the Conservation of Antarctic Marine Living Resources and the South East Atlantic Fisheries Organization, have been relatively effective in curbing overfishing. They have developed oversight systems and specific measures to target deep-water trawl fishing and illegal, unreported, and unregulated fishing on the high seas. Many regional cooperative arrangements, however, continue to suffer from weak regulatory authority. At the same time, some regions, such as the central and southwest Atlantic Ocean, lack RFBs altogether. Some have suggested filling the void with market-based solutions like catch shares, which could theoretically shift incentives toward stewardship. Catch shares (also known as limited access privilege programs) reward innovation and help fisheries maximize efficiency by dedicating a share of the allowable catch to an individual fisherman, community, fishery association, or state. Each year before the fishing season begins, commercial fishermen would know how much fish they are allowed to catch. They would then be allowed to buy and sell shares to maximize profit. By incorporating free-market principles, fisheries could reach a natural equilibrium at a sustainable level. According to research, more sustainable catch-share policies could increase the value of the fishing industry by more than $36 billion. Although allocating the shares at the domestic—much less international—level remains problematic, the idea reflects the kind of policy work required to better manage the global commons.
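To make the mechanics concrete, the sketch below (written in Python purely for illustration) models a hypothetical catch-share scheme: a regulator fixes a total allowable catch, fractional shares of it are allocated to fishers, and shares can be traded before the season opens. Every name and number here is an assumption invented for this example; it does not describe any real fishery, program, or dataset.

# Minimal sketch of a catch-share (individual transferable quota) scheme.
# All fishers, shares, and catch figures are hypothetical and illustrative only.
from dataclasses import dataclass

@dataclass
class Fisher:
    name: str
    share: float  # fraction of the total allowable catch (TAC) held

def allocate_quota(tac_tons: float, fleet: list) -> dict:
    """Translate each fisher's share into a tonnage quota for the season."""
    return {f.name: round(f.share * tac_tons, 1) for f in fleet}

def trade_share(seller: Fisher, buyer: Fisher, fraction: float) -> None:
    """Transfer part of the seller's share to the buyer (price negotiation omitted)."""
    fraction = min(fraction, seller.share)
    seller.share -= fraction
    buyer.share += fraction

if __name__ == "__main__":
    tac = 10_000.0  # regulator sets a sustainable total allowable catch, in tons
    fleet = [Fisher("A", 0.5), Fisher("B", 0.3), Fisher("C", 0.2)]
    print(allocate_quota(tac, fleet))     # quotas are known before the season opens
    trade_share(fleet[1], fleet[2], 0.1)  # B sells a share worth 10% of the TAC to C
    print(allocate_quota(tac, fleet))     # quotas after trading

The point of the design is that the regulator fixes the total catch in advance on biological grounds, while the market determines who lands it; that is the incentive shift toward stewardship described above.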
Managing the Arctic: At a crossroads
Arctic ice is melting at unprecedented rates. At this pace, experts estimate that the Arctic could be seasonally ice free by 2040, and possibly much earlier. As the ice recedes and exposes valuable new resources, multilateral coordination will become even more important among states (and indigenous groups) jockeying for position in the region.
The melting ice is opening up potentially lucrative new sea routes and stores of natural resources. Since September 2009, cargo ships have been able to traverse the fabled Northwest and Northeast Passages, which are significantly shorter than traditional routes around the capes or through the canals. Widening sea routes also mean that fishing fleets can travel north in search of virgin fishing stocks, and that cruise ships can carry tourists chasing a last glimpse of the disappearing ice. At the same time, untapped resources such as oil, natural gas, rare earth minerals, and massive renewable wind, tidal, and geothermal energy hold enormous potential. In a preliminary estimate, the U.S. Geological Survey reported that the Arctic could hold 22 percent of the world's undiscovered hydrocarbon resources, including 90 billion barrels of oil and 1,670 trillion cubic feet of natural gas. Beyond oil and gas, the Arctic has valuable mineral commodities such as zinc, nickel, and coal.
But new opportunities in the Arctic also portend new competition among states. In August 2007, Russia symbolically planted a flag on the Arctic floor, staking a claim to large chunks of Arctic land. Other Arctic powers including the United States, Canada, Norway, and Denmark have also laid geographical claims. The European Union crafted a new Arctic policy, and China sent an icebreaker on three separate Arctic expeditions. Each country stands poised to grab new treasure in this increasingly important geostrategic region.
The UN Convention on the Law of the Sea (UNCLOS) is a solid foundation on which to build and coordinate national Arctic policies, especially articles 76 and 234, which govern the limits of the outer continental shelf (OCS) and regulate activities in ice-covered waters, respectively. However, there remains a formidable list of nagging sovereignty disputes that will require creative bilateral and multilateral resolutions. The Arctic Council, a multilateral forum comprising eight Arctic nations, has recently grown in international prominence, signing a legally binding treaty on search and rescue missions in May 2011 and drawing high-level policymakers to its meetings. While these are significant first steps, the forum has yet to address other issues such as overlapping OCS claims, contested maritime boundaries, and the legal status of the Northwest Passage and the Northern Sea Route.
U.S. Ocean Governance Issues
The United States has championed many of the most important international maritime organizations over the past fifty years. It helped shape the decades-long process of negotiating the United Nations Convention on the Law of the Sea (UNCLOS) and has played a leading role in many UNCLOS-related bodies, including the International Maritime Organization. It has also served as a driving force behind regional fisheries organizations and Coast Guard forums. Domestically, the United States has intermittently been at the vanguard of ocean policy, with milestones such as the 1969 Stratton Commission report, multiple conservation acts in the 1970s, the Joint Ocean Commission Initiative, and, most recently, catch limits on all federally managed fish species. The U.S.-based Woods Hole Oceanographic Institution and the Monterey Bay Aquarium Research Institute have long been leaders in marine science worldwide. And from a geopolitical perspective, the U.S. Navy secures the world's oceans and fosters an environment where global commerce can thrive.
Yet the United States lags behind on important issues, most notably in its reluctance to ratify UNCLOS. And until recently, the United States did not have a coherent national oceans policy. To address this gap, U.S. president Barack Obama created the Ocean Policy Task Force in 2009 to coordinate maritime issues across local, state, and federal levels, and to provide a strategic vision for how oceans should be managed in the United States. The task force led to the creation of a National Ocean Council, which is responsible for "developing strategic action plans to achieve nine priority objectives that address some of the most pressing challenges facing the ocean, our coasts, and Great Lakes." Although it has yet to make serious gains, this comprehensive oceans policy framework could help lay the groundwork for coordinating U.S. ocean governance and harmonizing it with international efforts.
Should the United States ratify the UN Convention on the Law of the Sea?
Yes: The UN Convention on the Law of the Sea (UNCLOS), which created the governance framework that manages nearly three-quarters of the earth's surface, has been signed and ratified by 162 countries and the European Union. But the United States remains among only a handful of countries to have signed but not yet ratified the treaty—even though it already treats many of the provisions as customary international law. Leaders on both sides of the political aisle as well as environmental, conservation, business, industry, and security groups have endorsed ratification in order to preserve national security interests and reap its myriad benefits, such as securing rights for U.S. commercial and naval ships and boosting the competitiveness of U.S. companies in seafaring activities. Notably, all of the uniformed services—and especially the U.S. Navy—strongly support UNCLOS because its provisions would only serve to strengthen U.S. military efforts. By remaining a nonparty, the United States lacks the credibility to promote its own interests in critical decision-making forums as well as bring complaints to an international dispute resolution body.
No: Opponents argue that ratifying the treaty would cede sovereignty to an ineffective United Nations and constrain U.S. military and commercial activities. In particular, critics object to specific provisions including taxes on activities on outer continental shelves; binding dispute settlements; judicial activism by the Law of the Sea Tribunal, especially with regard to land-based sources of pollution; and the perceived ability of UNCLOS to curtail U.S. intelligence-gathering activities. Lastly, critics argue that because UNCLOS is already treated as customary international law, the United States has little to gain from formal accession.
Should the United States lead an initiative to expand the Container Security Initiative globally?
Yes: Some experts say the only way to secure a global economic system is to implement a global security solution. The U.S.-led Container Security Initiative (CSI) helps ensure that high-risk containers are identified and isolated before they reach their destination. Fifty-eight foreign ports already participate in the initiative, and many other countries have expressed interest in modeling their own security measures on the CSI. The World Customs Organization called on its members to develop programs based on the CSI, and the European Union agreed to expand the initiative across its territory. With its robust operational experience, the United States is well positioned to provide the technical expertise to ensure the integrity of the container system.
No: Opponents maintain that the United States can hardly commit its tax dollars abroad to a global security system when it has failed to secure its own imports. To date, more than $800 million and considerable diplomatic energy have been invested in CSI to expand the program to fifty-eight international ports, where agents are stationed to screen high-risk containers. Given the scale of world trade (the United States imports more than 10 million containers annually), only a handful of high-risk boxes can be targeted for inspection. Despite the huge expenditures and years of work to expand this program since September 11, 2001, only about 86 percent of the cargo that enters the United States transits through foreign ports covered under CSI, and of that, only about 1 percent is actually inspected (at a cost to the U.S. taxpayer of more than $1,000 per container). Despite congressional mandates to screen all incoming containers, critics say that costs make implementing this mandate virtually impossible. The limited resources the United States has available, they argue, should be invested in protecting imports bound specifically for its shores.
Should the United States be doing more to address the drastic decline in the world's fisheries?
Yes: Advocates say that the further demise of global fish stocks, beyond being a moral burden, undermines the commercial and national security interests of the United States. The depletion of fish stocks is driven in large part by the prevalence of illegal, unreported, and unregulated (IUU) fishing and by the overcapitalization of the global commercial fishing fleet through domestic subsidies. To protect domestic commercial fisheries and the competitiveness of U.S. exports in the international seafood market, the United States should enhance efforts by the National Oceanic and Atmospheric Administration to manage, enforce, and coordinate technical assistance for nations engaging in IUU fishing.
Domestically, the United States has taken important steps to address the critical gaps in fisheries management. In 2012, it became the first country to impose catch limits on all federally managed fish species. Some species, like the mahi mahi, will be restricted for the first time in history. Many environmental experts hailed the move as a potential model for broader regional and international sustainable fisheries policy. To capitalize on such gains, the United States should work aggressively to reduce fishing subsidies in areas such as Europe that promote overcapitalization and thus global depletion of fish stocks. The United States could also promote market-based mechanisms, like catch shares and limited access privilege programs, to help fishermen and their communities curb overfishing and raise the value of global fisheries by up to $36 billion.
No: Critics argue that fisheries management is by and large a domestic issue, and that the United States has little right to tell other nations how to manage their own resources, particularly when such measures could harm local economies. They contend that the science behind overfishing is exaggerated, as are the warnings about the consequences of an anticipated fisheries collapse. Existing conventions like the 1995 Fish Stocks Agreement already go far enough in addressing this issue. Any additional efforts, they contend, would be a diplomatic overreach, as well as an excessive burden on a struggling commercial fishing industry. Critics also question how market-based mechanisms, such as catch shares, would be distributed, traded, and enforced, warning that they could lead to speculative bubbles.
Should the United States push for a more defined multilateral strategy to cope with the melting Arctic?
Yes: The melting Arctic holds important untapped political, strategic, and economic potential for the U.S. government, military, and businesses. This emerging frontier could potentially support a variety of economic activities, including energy exploration, marine commerce, and sustainable development of new fisheries. Countries such as Russia, Canada, Norway, and China have already staked claims or interests in the region, yet the United States remains on the sidelines without a comprehensive Arctic strategy. The UN Convention on the Law of the Sea (UNCLOS) remains the premier forum for negotiating and arbitrating disputes over contested territory. As a nonparty, however, the United States loses invaluable leverage and position. In addition, the U.S. military does not have a single icebreaker, whereas Russia operates over thirty. Experts argue that the U.S. government should also adopt the recently proposed Polar Code, a voluntary agreement that "sets structural classifications and standards for ships operating in the Arctic as well as specific navigation and emergency training for those operating in or around ice-covered waters."
No: Opponents argue that Arctic Council activities and the 2009 National Security Presidential Directive, which updated U.S. Arctic policies, are sufficient. Any collaboration with Canada to resolve disputes over the Northwest Passage might undermine freedom of navigation for U.S. naval assets elsewhere, especially in the Strait of Hormuz and the Taiwan Strait, and this national security concern trumps any advantages of collaborating on security, economic, or environmental issues in the Arctic. Lastly, given the dominant Russian and Canadian Arctic coastlines, future Arctic diplomacy might best be handled bilaterally rather than through broader multilateral initiatives.
April 2013: Japan included in Trans-Pacific Partnership negotiations
Japan agreed to join negotiations over the Trans-Pacific Partnership (TPP), an ambitious free trade agreement among countries along the Pacific Rim. Since the broad outline of the agreement was introduced in November 2011, sixteen rounds of negotiations have brought eleven countries together to discuss the TPP. The addition of Japan, a major economic force in the region, as the twelfth participant is an important step toward a robust agreement. The South China Sea is already the second-busiest shipping lane in the world, and should the TPP become a reality, transpacific shipping would increase dramatically. The seventeenth round of negotiations will take place in May, and the current goal is to reach agreement by October 2013.
March 2013: IMO pledges to support implementation of new code of conduct on piracy
At a ministerial meeting in Cotonou, Benin, the International Maritime Organization (IMO) pledged to support the implementation of a new code of conduct on piracy and other illicit maritime activity. The Gulf of Guinea Code of Conduct, drafted by the Economic Community of Central African States and the Economic Community of West African States, in partnership with the IMO, contains provisions for interdicting sea- and land-based vehicles engaged in illegal activities at sea, prosecuting suspected criminals, and sharing information between state parties. The code builds on several existing frameworks to create a sub-regional coast guard. The agreement is set to open for signature in May 2013.
March 2013: New fishing restrictions on sharks and rays
Delegates attending the meeting of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) voted to place robust export restrictions on five species of sharks and two species of manta rays. Over the past fifty years, the newly listed sharks (the oceanic whitetip, three species of hammerhead, and the porbeagle) have declined by more than 70 percent. Although experts cautioned that the new rules would be difficult to enforce in practice, the decision marked an important victory over economic interests, particularly those of China and Japan.
January 2013: Philippines to challenge China's maritime claims in South China Sea
The Philippine government announced its intention to take China to an international arbitration tribunal based on claims that China violated the UN Convention on the Law of the Sea. The dispute dates back to mid-2012, when tensions flared over the Scarborough shoal, which is claimed by both countries.
China, Taiwan, Vietnam, Malaysia, Brunei, and the Philippines have competing territorial and jurisdictional claims to the South China Sea, particularly over rights to exploit its potentially vast oil and gas reserves. Control over strategic shipping lanes and freedom of navigation are also increasingly contested, especially between the United States and China.
September 2012: Arctic ice reaches record low
In September 2012, sea ice in the Arctic Ocean reached an all-time low, covering just 24 percent of the ocean and shattering the previous record low of 29 percent set in 2007. The finding has implications not only for climate change and environmental stability but also for heightened competition among states jockeying for access to critical resources in the region. For the first time, the melting Arctic has exposed troves of natural resources including oil, gas, and minerals, as well as newly accessible shipping routes. The United States, Russia, and several European states already control parts of the Arctic, and China is also an increasing presence.
September 2012: Tensions flare in the East China Sea
In September 2012, Japan purchased three islands in the East China Sea that form part of the Senkaku Islands, known as the Diaoyu Islands to the Chinese. The islands, claimed by both countries, have been controlled by Japan since 1895, but sovereignty remains hotly contested. Following Japan's announcement, protests broke out across China, and Chinese leaders accused Japan of "severely infringing" upon their sovereignty.
In a move to affirm its claim to the islands, China announced its intention to submit its objections to the Commission on the Limits of the Continental Shelf under UNCLOS, and dispatched patrol ships to monitor the islands. In December 2012, tensions flared after a small Chinese aircraft flew into airspace over the islands, and both countries sent naval vessels to patrol nearby waters. Both sides remain adamant that there is no room for negotiation over control of the islands, which lie close to strategic shipping routes, fishing grounds, and potentially lucrative oil reserves.
Options for Strengthening Global Ocean Governance
A series of measures, both formal and informal, could strengthen U.S. and global ocean governance. First, the United States must begin by finally ratifying the UN Convention on the Law of the Sea. On this foundation, the United States should then tap hitherto underused regimes, update twentieth-century agreements to reflect modern ocean challenges, and, in some cases, serve as the diplomatic lead in pioneering new institutions and regimes. These recommendations reflect the views of Stewart M. Patrick, senior fellow and director of the International Institutions and Global Governance Program, and Scott G. Borgerson, former visiting fellow for ocean governance.
In the near term, the United States and its international partners should consider the following steps:
- Ratify UNCLOS
The United States should finally join the UN Convention on the Law of the Sea (UNCLOS), an action that would give it further credibility and make the United States a full partner in global ocean governance. This carefully negotiated agreement has been signed and ratified by 162 countries and the European Union. Yet despite playing a central role shaping UNCLOS's content, the United States has conspicuously failed to join. It remains among only a handful of countries with a coastline, including Syria, North Korea, and Iran, not to have done so.
Emerging issues such as the melting Arctic lend increased urgency to U.S. ratification. By rejecting UNCLOS, the United States is freezing itself out of important international policymaking bodies, forfeiting a seat at decision-making forums critical to economic growth and national security interests. One important forum where the United States has no say is the commission vested with the authority to validate countries' claims to extend their exclusive economic zones, a process that is arguably the last great partitioning of sovereign space on earth. As a nonparty to the treaty, the United States is forgoing an opportunity to extend its national jurisdiction over a vast ocean area on its Arctic, Atlantic, and Gulf coasts—equal to almost half the size of the Louisiana Purchase—and abdicating an opportunity to have a say in deliberations over other nations' claims elsewhere.
Furthermore, the convention allows for an expansion of U.S. sovereignty by extending U.S. sea borders, guaranteeing the freedom of ship and air traffic, and enhancing the legal tools available to combat piracy and illicit trafficking. Potential participants in U.S.-organized flotillas and coalitions rightly question why they should assist the United States in enforcing the rule of law when the United States refuses to recognize the convention that guides the actions of virtually every other nation.
- Coordinate national ocean policies for coastal states
The creation of a comprehensive and integrated U.S. oceans policy should be immediately followed by similar efforts in developing maritime countries, namely Brazil, Russia, India, and China (BRIC). These so-called BRIC nations will be critical players in crafting domestic ocean policies that together form a coherent tapestry of global governance. Ideally, such emerging powers would designate a senior government official, and in some cases the head of state, to liaise with other coastal states and regional bodies to coordinate ocean governance policies and respond to new threats. Consistent with the Regional Seas Program, the ripest opportunity for these efforts is at the regional level. With UN assistance, successful regional initiatives could then be harmonized and expanded globally.
- Place a moratorium on critically endangered commercial fisheries
Commercial fishing, a multibillion-dollar industry in the United States, is in grave danger. The oceans have been overfished, and it is feared that many fish stocks may not rebound. In the last fifty years, fish that were previously considered inexhaustible have been reduced to alarmingly low levels. Up to 90 percent of large predatory fish are now gone. Nearly half of the world's fish stocks have been fully exploited and roughly one-third have been overexploited. The recent imposition of catch limits on all federally managed fish species is an important and long overdue first step, which should be expanded and strengthened into a moratorium on the most endangered commercial fisheries, such as the Atlantic bluefin tuna. But tuna is hardly alone in this predicament, and numerous other species face the same fate. Policymakers should stand up to intense political pressure and place fishing moratoriums on the most threatened fisheries to give them a chance to rebound. Doing so would be a courageous act that would help rescue collapsing fish stocks while preserving a commercially sustainable resource.
In the longer term, the United States and its international partners should consider the following steps:
- Strengthen and update UNCLOS
The UN Convention on the Law of the Sea (UNCLOS) and related agreements serve as the bedrock of international ocean policy. However, UNCLOS is thirty years old. If it is to remain relevant and effective, it must be strengthened and updated to respond to emerging threats such as transnational crime and marine pollution, as well as to apply market-based catch-share principles to commercial fisheries, especially on the high seas. Lastly, UNCLOS Article 234, which applies to ice-covered areas, should be expanded to better manage the opening Arctic, which will be an area of increasing focus and international tension over the coming years.
The international community should also counter the pressure of coastal states that unilaterally seek to push maritime borders seaward, as illustrated by China's claim to virtually all of the South China Sea. Additionally, states should focus on using UNCLOS mechanisms to resolve nagging maritime conflicts, such as overlapping exclusive economic zones arising from extended continental shelf claims, and sovereignty disputes, such as those over the Spratly Islands and Hans Island.
- Bolster enforcement capacity
Many ocean-related governance issues fall short not because rules for better management do not exist, but because weak states cannot enforce them. A failure in the oversight of sovereign waters inevitably leads to environmental degradation and, in cases like Somalia, can morph into problems with global implications, such as piracy. Accordingly, the international community should help less developed coastal states build the capacity to enforce (1) fisheries rules for fishing fleets operating in their waters; (2) International Convention for the Prevention of Pollution from Ships regulations to reduce ocean dumping and pollution; (3) other shipping regulations in states with open registries such as Liberia, Panama, Malta, and the Marshall Islands; and (4) existing mandates created to stop illicit trafficking. Developed countries should also help less developed areas monitor environmental variables such as acidification, coral reefs, and fisheries.
On the Lap of the Mighty Sagarmatha - Solu Khumbu or Everest region
The major mountains are Mt. Everest, Mt. Lhotse, Cho Oyu, Nuptse, Pumori, Ama Dablam, Thamserku, Kantega, Mera Peak, and Island Peak.
Mt. Everest, which is part of the Himalaya range, is located on the border between Nepal and Tibet. Rising to a height of 8,848m, the world's highest mountain was named in 1865 after Sir George Everest. The mountain received its official Nepali name, Sagarmatha, from the Government of Nepal during the 1960s. In Sanskrit, Sagarmatha means "mother of the universe." The Tibetan name for Mount Everest is Chomolungma or Qomolangma, which means "Goddess Mother of the Snows." Climbers wishing to scale the peak have to obtain an expensive permit from the Nepal Government, often costing more than $25,000 (USD) per person. Base Camp, which serves as a resting area and base of operations for climbers organizing their attempts on the summit, is located on the Khumbu glacier at an elevation of 5,300 m (17,400 ft); it receives an average of 450 mm (18 in) of precipitation a year. The climate of Mount Everest is extreme: in July, the warmest month, the average summit temperature is -19°C (-2°F).
When George Mallory, the British climber, was asked why he wanted to climb Everest, he replied, "Because it is there." After two unsuccessful attempts, he tried again in 1924, this time with Andrew Irvine. They set out on June 8, 1924 to scale the summit via the North Col route and never returned. Mallory's body was discovered by the Mallory and Irvine Research Expedition near the old Chinese camp in 1999; Irvine's has never been found. Edmund Hillary, a New Zealander, and Sherpa Tenzing Norgay from Nepal were the first two climbers to set foot on the summit of Mt. Everest. They reached the summit at 11:30 a.m. on May 29, 1953, climbing via the South Col route. More than 300 climbers have scaled the highest mountain since then. There have also been more than 100 deaths on the mountain, where conditions are so difficult that most corpses have been left where they fell, some of them visible from standard climbing routes.
Mt. Lhotse (8,516m) is the fourth-highest mountain in the world. It lies south of Mt. Everest. It was first climbed in 1956 by two Swiss climbers, F. Luchsinger and E. Reiss, via the west face; a Czech expedition scaled it via the south face in 1984. An impressive ring of three peaks makes up the Lhotse massif: Lhotse East (or Middle), Lhotse, and Lhotse Shar. The South Face of Lhotse is one of the largest mountain faces in the world.
Cho Oyu (8,201m), the sixth-highest mountain in the world, has gained popularity among climbers only recently. The mountain straddles the border between Nepal and Tibet, about 30 km west of Mount Everest. Cho Oyu in Tibetan means "the turquoise goddess." The south face of Cho Oyu, facing Nepal, is quite steep and difficult, and is rarely climbed. The north side, accessed from Tibet, is more moderate, and there is a relatively safe route to the summit. In the autumn of 1954, an Austrian team made the first ascent via this route.
Ama Dablam (6,856m), whose name means "mother's jewellery box" in the Sherpa language, is considered one of the most beautiful mountains in the world. Seen from below, the mountain looks like a woman with outstretched arms or a woman wearing a long necklace. Ama Dablam lies alongside Everest in the heart of the Khumbu valley. Mt. Lhotse, Mt. Makalu, Mt. Cho Oyu, and Mt. Everest can be seen at close quarters from Ama Dablam.
Nuptse (7,855m) lies southwest of Mt. Everest, in the Khumbu Himal. From the Thyangboche Monastery, Nuptse appears as a massive wall guarding the approach to Everest. The name Nup-tse in Tibetan means "west peak." The main ridge, which is separated from Lhotse by a 7,556m-high saddle, is crowned by seven peaks and runs west-northwest until its steep west face drops more than 2,300m to the Khumbu glacier. Nuptse I was first summited by a British expedition on May 16, 1961.
Pumori, at 7,145m, is just 8 km from the world's highest peak, Mt. Everest. The ascent of this peak is described as a classic climb in the 7,000m category. In Tibetan, "Pumo" means girl and "Ri" means mountain. The peak was named by George Mallory, the famous English climber who lost his life trying to ascend Everest in 1924. The German climber Gerhard Lenser was the first to reach the summit of Pumori, in 1962. Pumori is a popular climbing peak and one of the more accessible in its class. The best seasons to climb this peak are autumn and spring.
Mera Peak (6,475m) is the highest of Nepal's trekking peaks. By its standard route, it is also the highest peak in Nepal that can be climbed without prior mountaineering experience. It was first climbed on 20 May 1953, by J.O.M. Roberts and Sen Tenzing, from the standard route at Mera La. The mountain lies to the south of Everest, dominating the watershed between the wild and beautiful valleys of the Hinku and Hongu.
Island Peak, also known as Imja Tse (6,160m), was named by Eric Shipton's party in 1953. It was so named because the peak resembles an island in a sea of ice when viewed from Dingboche. The peak was first climbed in 1953 by a British group as preparation for climbing Mt. Everest; one of the climbers was Tenzing Norgay. The peak is part of the south ridge of Lhotse Shar, and the main landmass forms a semicircle of cliffs rising to the north of the summits of Nuptse, Lhotse, Middle Peak, and Lhotse Shar. Cho Oyu and Makalu lie to the east of Island Peak; Baruntse, Amphu, and Ama Dablam lie to the south.
Lobuche (6,119m) is known as Lhauche among the locals. It rises above the village of Lhauche, which is just a few kilometers from Mt. Everest. The first ascent of this peak was made by Laurice Nielson and Ang Gyalzen Sherpa on 25 April 1984.
Kalapattar is a small mountain, 5,545 m (18,500 ft) high, on the southern flank of Pumori (7,145 m). It is a trekking peak, and every year tourists climb it to enjoy the panoramic views it offers of the Khumbu glacier, Everest, and nearby peaks like Lhotse and Nuptse. To the east, Makalu, Ama Dablam, Pumori, and Cho Oyu are visible.
Climate, Flora & Fauna
The climate in the Everest region can be divided into four climatic zones owing to the gradual rise in altitude. These include a forested lower zone, a zone of alpine scrub, an upper alpine zone that marks the upper limit of vegetation growth, and an arctic zone where no plants can grow. The types of plants and animals found depend on the altitude. In the lower forested zone, birch, juniper, blue pine, fir, bamboo, and rhododendron grow. Above this zone, the vegetation consists mainly of shrubs. As the altitude increases further, plant life is restricted to lichens and mosses. The permanent snow line in the Himalayas begins at an elevation of 5,750m; beyond this point there is no greenery or vegetation. A common animal sighted in the higher reaches is the shaggy yak. The dzopkyo, a sterile male crossbreed between a yak and a cow, is used to move goods along the trails. The red panda, snow leopard, musk deer, wild yak, and Himalayan black bear are some of the more exotic animals found in this region. A variety of birds can be sighted in the lower regions.
Sagarmatha (Mt. Everest) National Park
The Sagarmatha National Park is the highest national park in the world. It was formally opened to the public on July 19, 1976. The park covers an area of 1,148 sq km, rising from its lowest point of 2,845 m (9,335 ft) at Jorsale to 8,850 m (29,035 ft) at the summit of Everest. The park's terrain is rugged and steep, cut by deep rivers and glaciers, and it includes three peaks higher than 8,000 m, among them Mt. Everest. In 1979 the park was inscribed as a Natural World Heritage Site. The park's visitor centre is located on a hill in Namche Bazaar, where a company of the Royal Nepal Army is stationed to protect the park. The park's southern entrance is a few hundred meters north of Monjo, at 2,835 m. Trekking and climbing groups must bring their own fuel to the park (usually butane and kerosene), and the cutting of wood is prohibited. The Sagarmatha Pollution Control Committee, funded by the World Wildlife Fund and the Himalayan Trust, was established in 1991 to help preserve Everest's environment. About a hundred species of birds and more than twenty species of butterflies have made the park their home. Musk deer, wild yak, red panda, snow leopard, Himalayan black bear, Himalayan tahr, deer, langur monkeys, hares, mountain foxes, martens, and Himalayan wolves are also found in the park.
Early expeditions to climb Everest from the Nepalese side started from Jiri; before the airstrip at Lukla was built, all trekking and climbing expeditions to the Everest region began there. Starting from Jiri, the route passes through the Sherpa villages of Solu Khumbu, many of which have beautiful Buddhist monasteries.
Lukla, a village in Khumbu, boasts the region's sole airport. Lying at a height of about 9,000 ft, Lukla is where most travelers to this region begin and end their adventure. The airstrip was built in 1964 by Sir Edmund Hillary to transport supplies for the Himalayan Trust's projects in the Khumbu region. Today, roughly 90 to 95 percent of the foreign nationals who reach Lukla arrive by a half-hour flight from Kathmandu.
Namche Bazar is known as the Sherpa capital. Namche is a village lying at the junction of the Dudh Koshi and a valley that leads to the frontier pass of Nangpa La. It is tucked away in a niche at a height of 7,845 ft. H. W. Tilman and C. Houston were the first westerners to enter it, in 1950, and many more have come since then. Facilities such as a bank, a post office, hotels and shops where one can purchase climbing equipment as well as tinned food have sprung up over the years. Namche Bazar is the major regional trading center; its Saturday market, or haat, is where most of the trading takes place. The headquarters of the Sagarmatha National Park is located in Namche.
Thyangboche is famous for the Thyangboche gompa. It is one of the most important centers of Buddhism in the region. The gompa is the largest in the Khumbu region. It was first built in 1923. Destroyed by a fire in 1989, it was rebuilt later on partly with foreign aid. From Thyangboche, one gets a panoramic view of Kwangde, Tawache, Everest, Nuptse, Lhotse, Amadablam, Kangtenga, and Thamserku.
Buddhism is believed to have been introduced in the Khumbu region towards the end of the 17th century by Lama Sange Dorjee. According to the legend, he flew over the Himalayas and landed on rocks at Pangboche and Thyangboche, leaving his footprints embedded in the stone. He is believed to have been responsible for the founding of the first gompas in the Khumbu region, at Pangboche and Thami. Pangboche is the highest year-round settlement in the valley. The Imja Khola, coming from the right, joins the Dudh Koshi River a little above the village. The gompa (monastery) in Pangboche is thought to be one of the oldest in the Khumbu region.
Khumjung, a village lying west of Thyangboche, is famous for the gompa where the skull of a supposed Yeti, the Abominable Snowman, is preserved under the supervision of the head lama. The skull appears to be the outer skin of a Himalayan brown bear, a conclusion supported by the report of a scientific expedition conducted by Sir Edmund Hillary, a copy of which is kept in the gompa.
Pheriche, located at an altitude of 13,845 ft, lies on a level patch of ground. Apart from the basic facilities available here, there is a medical-aid post maintained by the Himalayan Rescue Association of the Tokyo Medical College, with Japanese doctors in attendance. Among other facilities, a compression chamber is installed for assisting victims of high-altitude sickness.
The scenic village of Gokyo lies below the peak of Gokyo Ri (5,483 m). The village is a cluster of stone houses and walled pastures. One has to pass the holy Gokyo lakes on the way to the village, and the Ngozumpa Glacier, Nepal's longest glacier at 25 miles, has to be traversed en route to this remote settlement. Gokyo Ri looms above the village on the northern edge of the lake. The summits of Everest, Lhotse and Makalu are visible from the summit of Gokyo Ri.
Thami, at 3,750 m, lies in a large valley. The village has a police checkpost and a few lodges and tea shops. A little above the village is the Thami gompa, which is the site of the annual Mani Rimdu festival.
Sherpas live in the upper regions of Solu Khumbu, having migrated from Tibet about 600 years ago. In the past they were traders and porters, carrying butter, meat, rice, sugar, and dye from India, and wool, jewelry, salt, Chinese silk and porcelain from Tibet and beyond. The closure of the border between India and China undermined their economy. Fortunately, with the arrival of mountaineering expeditions and trekkers, the Sherpas found their load-carrying skills, both on normal treks and at high altitudes, in great demand. The Khumbu region has provided a strong group of able-bodied, hardy and fearless Sherpa porters and guides. The Sherpas are Buddhists.
At the lower elevations live the Kiranti Rais. The villages of Jubing, Kharikhola and Okhaldhunga are inhabited by the Rais. Of Mongoloid stock, they speak their own dialect, and reference is made to their fighting spirit in the Hindu epic Mahabharata. This group has supplied recruits to Gurkha regiments in both the British and Indian armies. The Rais follow a religion that is partly animistic with a strong Hindu influence, and they revere their ancestors by observing Kul or Pitri puja every year.
The Jirels live in the area around Jiri. They are mongoloid and follow Buddhism.
Losar is celebrated in February by the Sherpas; 'Losar' means New Year in Tibetan. Apart from the Sherpas and Tibetans, the Gurungs and Tamangs also celebrate Losar. Buddhist monks offer prayers at monasteries for good health and prosperity, people exchange gifts, and families organize feasts and perform dances.
Dumje is celebrated to mark the birthday of Guru Rimpoche (Padmasambhava). The celebration takes place in June and lasts for six days, and is observed in a big way in the villages of Namche, Thame and Khumjung.
Mani Rimdu is a festival that celebrates the victory of Buddhism over the ancient animistic religion of Bon. It is celebrated in the monasteries of Thyangboche, Chiwang and Thami. At Thyangboche the celebration takes place during the November–December full moon, at Thami during the full moon in May, and at Chiwang Gompa generally during autumn. The lamas wear elaborate brocade gowns and papier-mâché masks while performing. Through the dances, symbolic demons are conquered, dispelled, or converted to Dharma Protectors as positive forces clash with those of chaos. The dances convey Buddhist teaching on many levels, from the simplest to the most profound, for those who do not have the opportunity to study and meditate extensively, and they give the Sherpas an opportunity to gather and celebrate together with the monks.
Sakela (Chandi Dance) is a harvest festival celebrated by the Rai community. The harvest ceremony involves the worship of mother earth, called 'Bhumi Puja'. The festival is celebrated twice a year: once in spring before planting begins, and once in autumn before harvesting. Ubhauli is celebrated in the spring on Baishakh Purnima, and Udhauli in the autumn on Mangsir Purnima. The spring worship is performed to propitiate mother earth for a good harvest and to ask the rain god to bless the earth with enough rain. The festival is celebrated with more fervor in the remote hills, where the Rai villagers celebrate with priests (dhami) who perform rituals to worship their ancestors. The elders of the community begin the dance with a puja, and later everybody joins in, forming a circle by holding each other's hands. They begin dancing at a slow pace and move faster as the drumbeats quicken. The dance steps and hand gestures imitate the sowing and harvesting of crops. The festival also provides an opportunity for the Rai people to socialise.
The Classic Everest Base Camp Trek
Mt Everest Base Camp is the most popular destination for trekkers in Nepal. Its popularity has grown since the first expedition to the Nepalese side of Everest in the 1950s. One can do this trek the old way, by beginning from Jiri; from Jiri it takes around nine days to reach Namche, and on the way you will come across Rai settlements. The other (quicker) alternative is to take a flight to Lukla and begin the trek from there. The trek follows the Dudh Kosi valley with an ascent to the Sherpa capital of Namche Bazaar. From Namche, you traverse along a high path that offers the first good view of Everest. You head towards Thyangboche Monastery, located on top of a mountain ridge, then descend to the Imja Khola and continue to the villages of Pangboche and Pheriche. After that you arrive at the Khumbu Glacier. The trek along the glacier takes you first to Lobuche and then to Gorak Shep. From Gorak Shep you can climb up to Kala Pattar for even more spectacular views of the surrounding mountains, including Everest's south-west face. You then reach your destination, the Everest Base Camp at the foot of the Khumbu ice fall.
|
New York Manumission Society
The New York Manumission Society was an early American organization, founded in 1785, to promote the abolition of slavery among people of African descent within the state of New York. The organization was made up entirely of white men, most of whom were wealthy and held influential positions in society. Throughout its 64-year history, which ended in 1849, the society battled against the slave trade and for the eventual emancipation of all the slaves in the state; it founded the African Free School for the poor and orphaned children of slaves and free people of color.
John Jay
John Jay had been a prominent leader in the antislavery cause since 1777, when he drafted a state law to abolish slavery. The draft failed, as did a second attempt in 1785. That year, all state legislators except one voted for some form of gradual emancipation, but they could not agree on what civil rights would be given to the slaves once they were freed. In 1799, an emancipation bill finally passed by omitting the subject of civil rights for freed slaves altogether.
Jay brought in prominent political leaders such as Alexander Hamilton. He also worked closely with Aaron Burr, later head of the Democratic-Republicans in New York. The Society started a petition against slavery, which was signed by almost all the politically prominent men in New York, of all parties, and led to a bill for gradual emancipation. Burr, in addition to supporting the bill, proposed an amendment for immediate abolition, which was voted down.
Jay founded the New-York Society for Promoting the Manumission of Slaves, and Protecting Such of Them as Have Been, or May be Liberated, commonly known as the New York Manumission Society, and became its first president in 1785.
The organization was originally composed of Jay and a few dozen close friends, many of whom were slave-owners at the time. The first meeting was held on January 25, 1785, at the home of John Simmons, an innkeeper who had space for the nineteen men in attendance. Robert Troup and Melancton Smith were appointed to draw up rules, and Jay was elected President. There were 31 members at the second meeting on February 4, including Alexander Hamilton. Several of the members were Quakers.
The Society formed a ways-and-means committee to deal with the difficulty that more than half of the members, including Troup and Jay, owned slaves (mostly a few domestic servants per household). The committee reported a plan for gradual emancipation: members would free slaves then younger than 28 when they reached the age of 35, slaves between 28 and 38 in seven years' time, and slaves over 45 immediately. This was voted down, and the committee dissolved.
This society was instrumental in having a state law passed in 1785 prohibiting the sale of slaves imported into the state, and making it easy for slaveholders to manumit slaves either by a registered certificate or by will. In 1788 the purchase of slaves for removal to another state was forbidden; they were allowed trial by jury "in all capital cases," and the earlier laws about slaves were simplified and restated. The emancipation of slaves by the Quakers was legalized in 1798. At that date, there were still about 33,000 slaves statewide.
Lobbying and boycotts
The Society organized boycotts against New York merchants and newspaper owners involved in the slave trade. The Society had a special committee of militants who visited newspaper offices to warn publishers against accepting advertisements for the purchase or sale of slaves.
Another committee kept a list of people who were involved in the slave trade, and urged members to boycott anyone listed. As historian Roger Kennedy reports,
"Those [blacks] who remained in New York soon discovered that until the Manumission Society was organized, things had gotten worse, not better, for blacks. Despite the efforts of Burr, Hamilton, and Jay, the slave importers were busy. There was a 23 percent increase in slaves and a 33 percent increase in slaveholders in New York City in the 1790s."
Beginning in 1785, the Society lobbied for a state law to abolish slavery in New York, as all the other northern states (except New Jersey) had done. Considerable opposition came from the Dutch areas upstate, where slavery was still widespread, as well as from the many businessmen in New York who profited from the slave trade. The two houses passed different emancipation bills and could not reconcile them. Every member of the New York legislature but one voted for some form of gradual emancipation, but no agreement could be reached on the civil rights of freedmen afterwards. Success finally came in 1799, when the Society supported a bill that said nothing about such rights. Jay, as governor, signed the bill into law.
The Act for the Gradual Abolition of Slavery 1799 declared that, from July 4 of that year, all children born to slave parents would be free. It also outlawed the exportation of current slaves. However, the Act held the caveat that the children would be subject to apprenticeship. These same children would be required to serve their mother’s owner until age twenty-eight for males, and age twenty-five for females.
The law defined the children of slaves as a type of indentured servant, while scheduling them for eventual freedom. The last slaves were emancipated by July 4, 1827; the process was the largest emancipation in North America before 1861. Thousands of freedmen celebrated with a parade in New York.
Other anti-slavery societies directed their attention to slavery as a national issue. The Quakers of New York petitioned the First Congress (under the Constitution) for the abolition of the slave trade. In addition, Benjamin Franklin and the Pennsylvania Abolition Society petitioned for the abolition of slavery in the new nation; but the NYMS did not act. (Hamilton and others felt that Federal action on slavery would endanger the compromise worked out at the Constitutional Convention, and so the new United States.)
African Free School
See also
- Berlin, Ira and Leslie Harris, eds. Slavery in New York. New Press, 2005. ISBN 1-56584-997-3.
- Gellman, David N. Emancipating New York: The Politics of Slavery And Freedom, 1777-1827 Louisiana State Univ Press, 2006. ISBN 0-8071-3174-1.
- Gellman, David N. "Pirates, Sugar, Debtors, and Slaves: Political Economy and the Case for Gradual Abolition in New York." Slavery & Abolition (2001) 22(2): 51-68. ISSN 0144-039X
- Gellman, David N. "Race, the Public Sphere, and Abolition in Late Eighteenth-century New York." Journal of the Early Republic (2000) 20(4): 607-636. ISSN 0275-1275. Fulltext: online in Jstor
- Leslie M. Harris. In the Shadow of Slavery: African Americans in New York City, 1626-1863 (2003),
- Horton, James Oliver. "Alexander Hamilton: Slavery and Race in a Revolutionary Generation" New-York Journal of American History 2004 65(3): 16-24. ISSN 1551-5486
- Roger G. Kennedy, Burr, Hamilton, and Jefferson: A Study in Character (2000)
- Littlefield, Daniel C. "John Jay, the Revolutionary Generation, and Slavery." New York History 2000 81(1): 91-132. ISSN 0146-437X
- Edgar J. McManus, History of Negro Slavery in New York (1968)
- Newman, Richard S. The Transformation of American Abolitionism: Fighting Slavery in the Early Republic. Univ of North Carolina Press, 2002. ISBN 0-8078-2671-5.
- Schaetzke, E. Anne. "Slavery in the Genesee Country (Also Known as Ontario County) 1789 to 1827." Afro-Americans in New York Life and History (1998) 22(1): 7-40. ISSN 0364-2437
- Jake Sudderth, "John Jay and Slavery" (2002) at
- "African-Americans in New York City, 1626-1863". Emory University Deptartment of History. Retrieved 2006-12-11.
- "Race and Antebellum New York City - New York Manumission Society". New-York Historical Society. Retrieved 2006-12-12.
- Davis, New York’s Manumission (Free the Slaves!) Society & Its African Free School 1785-1849, as cited.
- John Jay and Sarah Livingston Jay, Selected Letters of John Jay and Sarah Livingston Jay (2005) pp 297-99; online at
- Edgar McManus, History of Negro Slavery in New York, Syracuse University Press, 1966
- Davis, op. cit.; Chernow, Alexander Hamilton p.214
- Ron Chernow, Alexander Hamilton, p. 215
- Peter Nelson. The American Revolution in New York: Its Political, Social and Economic Significance. (1926). p, 237.
- Roger G. Kennedy, Burr, Hamilton, and Jefferson: A Study in Character (2000) p. 92
- Herbert S. Parmet and Marie B. Hecht, Aaron Burr (1967) p. 76
- Edgar J. McManus, History of Negro Slavery in New York
- Jake Sudderth, "John Jay and Slavery" (2002) at
- Forrest McDonald, Alexander Hamilton, a Biography(1982) p. 177
|
Inflammation Key to the Asthma-Sinus Connection
People with asthma frequently experience problems with their sinuses. And more than half of those who have chronic sinusitis also suffer from asthma.
Is there a connection? Each condition is marked by inflammation, which is to blame for the symptoms of asthma (cough, chest tightness, shortness of breath, and wheezing) as well as those of sinusitis. Researchers speculate that inflammation in either the lungs or the sinuses affects both areas; as a result, people with lung symptoms will likely eventually develop symptoms in the nose, and vice versa. In addition, sinusitis can trigger asthma attacks.
What is sinusitis?
Sinusitis occurs when air-filled spaces behind the nose, forehead, cheeks, and eyes become inflamed and blocked with mucus. Often, infection results.
Symptoms usually occur after a cold that fails to improve, or one that gets worse after five to seven days.
People with asthma who experience sinus problems should talk with their doctor about treatment. Studies show that resolving sinusitis often improves asthma and decreases the need for asthma medication.
How you can protect yourself
Those with asthma need to be vigilant about colds and the flu. Viral respiratory infections often worsen asthma. Some prevention tips:
Avoid smoke and pollution.
Drink plenty of fluids.
Take decongestants for upper respiratory infections.
Get help for allergies.
Use a humidifier to increase moisture in nose and sinuses.
|
As previous research has demonstrated, HIV attacks the host's immune system, including an important category of T cells critical for fighting infection, called CD4+ T cells. The virus specifically destroys those CD4+ T cells that reside in tissues, such as the intestine and lung, that interface with the outside environment. As HIV replicates in the human body, these tissue CD4+ T cells are killed and, when their numbers drop below a certain level, the patient develops AIDS. At this point, the patient's immune system is weakened enough to allow infections caused by bacteria, viruses, fungi and parasites that are normally held at bay by a healthy human body. The infected body's inability to fight off these "intruders" leads to death.
"To learn more about HIV, its impacts and possible treatments, we study simian immunodeficiency virus (SIV), a close relative of HIV that infects and causes AIDS in nonhuman primates," explained the study's lead author Louis Picker, M.D. Picker serves as director of the VGTI's vaccine program and associate director of the VGTI. He is also a professor of pathology, and molecular microbiology and immunology in the OHSU School of Medicine; and director of the Division of Pathobiology and Immunology at the OHSU Oregon National Primate Research Center.
"What we were able to discover using SIV-infected monkeys was that a certain naturally occurring protein called interleukin-15 (IL-15) caused a dramatic restoration of tissue CD4+ T cel
Contact: Jim Newman
Oregon Health & Science University
|
Certain lifestyle factors greatly increase your risk of contracting HIV infection and developing AIDS. By avoiding the behaviors associated with transmission, you can greatly reduce that risk.
Risk factors include:
Having Unprotected Sex
Most people become infected with HIV through sexual activity. You can contract HIV by not using a condom when having sexual relations with a person infected with the virus. Not using condoms properly can also put you at increased risk of acquiring HIV infection. During sex, the vagina, vulva, penis, rectum, and mouth can provide entry points for the virus.
Other risky behaviors include having:
- Sex with someone without knowing his or her HIV status
- More than one sex partner
- Sex with someone who has more than one sexual partner
- Anal intercourse
Men who have sex with other men may be at a higher risk of being infected with HIV. Having unprotected sex and using drugs (eg, methamphetamines) during sex can increase this risk. Women who engage in risky behaviors and have both male and female partners may also be at a greater risk.
Injecting illegal drugs increases your risk of becoming infected with HIV. Using a needle or syringe that contains even a small amount of infected blood can transmit the infection.
Having Certain Medical Conditions
Sexually transmitted diseases (STDs) and vaginal infections caused by bacteria tend to increase the risk of HIV transmission during sex with an HIV-infected partner.
For men, not being circumcised can also increase the risk of getting HIV infection.
Having Certain Medical Procedures
Having a blood transfusion or receiving blood products before 1985 increases your risk of HIV infection and AIDS. Before blood banks began testing donated blood for HIV in 1985, there was no way of knowing if the blood was contaminated with HIV, and recipients could become infected through transfusions.
Receiving blood products, tissue or organ transplantation, or artificial insemination increases your risk of HIV infection and AIDS. Even though blood products are now screened for HIV, there is still some degree of risk because tests cannot detect HIV immediately after transmission.
Being a Healthcare Worker
Exposure to contaminated blood and needles puts healthcare workers at risk for HIV.
- Reviewer: Rosalyn Carson-DeWitt, MD
- Review Date: 12/2011
- Update Date: 12/30/2011
|
Bet Alfa may be the site of the town of Hilfa, which is mentioned in the Talmud. The foundations of an ancient synagogue were discovered nearby in 1929, during excavations carried out on behalf of the Hebrew University. The synagogue, measuring 46 x 92 feet, included a courtyard, a hall, two side aisles and a women's gallery, and it faced south toward Jerusalem. A small cavity in the floor probably served as a genizah; above it was an Ark for scrolls of the Law. The whole floor of the building is paved with mosaics.
Two inscriptions were found at the entrance to the hall. One, in Aramaic, states that the mosaics were made during the reign of Emperor Justin (518–527). The other, in Greek, gives the names of those who made the mosaics, Marianos and his son Hanina. There are three mosaic panels in the center of the hall. The first shows the Akedah, the binding of Isaac on the altar. The second mosaic represents the signs of the Zodiac. The third depicts a synagogue ark with a gabled roof and an "eternal light" suspended from its top. On either side is a lion with a seven-branched menorah. Above the menorah and between the lions are pictured ritual items such as lulavim (palm branches), etrogim (citrons), and incense holders. Curtains adorned it on either side. The designs are simple and strong. In these mosaics, the artists took great care to make each scene expressive. The mosaics of Bet Alfa are striking in their coloring and style, and are among the finest examples of Jewish art in the Byzantine period.
Kibbutz Bet Alfa was founded in 1922.
A scene at one of the children's houses
Making up the daily work schedule (August 1946)
A man with his cow (August 1930)
|
Charles Robert Darwin (1809–1882). Origin of Species. The Harvard Classics. 1909–14.
XI. On the Geological Succession of Organic Beings
On the Forms of Life Changing Almost Simultaneously throughout the World
SCARCELY any palæontological discovery is more striking than the fact that the forms of life change almost simultaneously throughout the world. Thus our European Chalk formation can be recognised in many distant regions, under the most different climates, where not a fragment of the mineral chalk itself can be found; namely in North America, in equatorial South America, in Tierra del Fuego, at the Cape of Good Hope, and in the peninsula of India. For at these distant points, the organic remains in certain beds present an unmistakable resemblance to those of the Chalk. It is not that the same species are met with; for in some cases not one species is identically the same, but they belong to the same families, genera, and sections of genera, and sometimes are similarly characterised in such trifling points as mere superficial sculpture. Moreover, other forms, which are not found in the Chalk of Europe, but which occur in the formations either above or below, occur in the same order at these distant points of the world. In the several successive palæozoic formations of Russia, Western Europe, and North America, a similar parallelism in the forms of life has been observed by several authors; so it is, according to Lyell, with the European and North American tertiary deposits. Even if the few fossil species which are common to the Old and New Worlds were kept wholly out of view, the general parallelism in the successive forms of life, in the palæozoic and tertiary stages, would still be manifest, and the several formations could be easily correlated.
These observations, however, relate to the marine inhabitants of the world: we have not sufficient data to judge whether the productions of the land and of fresh water at distant points change in the same parallel manner. We may doubt whether they have thus changed: if the Megatherium, Mylodon, Macrauchenia, and Toxodon had been brought to Europe from La Plata, without any information in regard to their geological position, no one would have suspected that they had co-existed with seashells all still living; but as these anomalous monsters co-existed with the mastodon and horse, it might at least have been inferred that they had lived during one of the later tertiary stages.
When the marine forms of life are spoken of as having changed simultaneously throughout the world, it must not be supposed that this expression relates to the same year, or to the same century, or even that it has a very strict geological sense; for if all the marine animals now living in Europe, and all those that lived in Europe during the pleistocene period (a very remote period as measured by years, including the whole glacial epoch) were compared with those now existing in South America or in Australia, the most skilful naturalist would hardly be able to say whether the present or the pleistocene inhabitants of Europe resembled most closely those of the southern hemisphere. So, again, several highly competent observers maintain that the existing productions of the United States are more closely related to those which lived in Europe during certain late tertiary stages, than to the present inhabitants of Europe; and if this be so, it is evident that fossiliferous beds now deposited on the shores of North America would hereafter be liable to be classed with somewhat older European beds. Nevertheless, looking to a remotely future epoch, there can be little doubt that all the more modern marine formations, namely, the upper pliocene, the pleistocene and strictly modern beds of Europe, North and South America, and Australia, from containing fossil remains in some degree allied, and from not including those forms which are found only in the older underlying deposits, would be correctly ranked as simultaneous in a geological sense.
The fact of the forms of life changing simultaneously, in the above large sense, at distant parts of the world, has greatly struck these admirable observers, MM. de Verneuil and dArchiac. After referring to the parallelism of the palæozoic forms of life in various parts of Europe, they add, If, struck by this strange sequence, we turn our attention to North America, and there discover a series of analogous phenomena, it will appear certain that all these modifications of species, their extinction, and the introduction of new ones, cannot be owing to mere changes in marine currents or other causes more or less local and temporary, but depend on general laws which govern the whole animal kingdom. M. Barrande has made forcible remarks to precisely the same effect. It is, indeed, quite futile to look to changes of currents, climate, or other physical conditions, as the cause of these great mutations in the forms of life throughout the world, under the most different climates. We must, as Barrande has remarked, look to some special law. We shall see this more clearly when we treat of the present distribution of organic beings, and find how slight is the relation between the physical conditions of various countries and the nature of their inhabitants.
This great fact of the parallel succession of the forms of life throughout the world, is explicable on the theory of natural selection. New species are formed by having some advantage over older forms; and the forms, which are already dominant, or have some advantage over the other forms in their own country, give birth to the greatest number of new varieties or incipient species. We have distinct evidence on this head, in the plants which are dominant, that is, which are commonest and most widely diffused, producing the greatest number of new varieties. It is also natural that the dominant, varying, and far-spreading species, which have already invaded to a certain extent the territories of other species, should be those which would have the best chance of spreading still further, and of giving rise in new countries to other new varieties and species. The process of diffusion would often be very slow, depending on climatal and geographical changes, on strange accidents, and on the gradual acclimatisation of new species to the various climates through which they might have to pass, but in the course of time the dominant forms would generally succeed in spreading and would ultimately prevail. The diffusion would, it is probable, be slower with the terrestrial inhabitants of distinct continents than with the marine inhabitants of the continuous sea. We might therefore expect to find, as we do find, a less strict degree of parallelism in the succession of the productions of the land than with those of the sea.
Thus, as it seems to me, the parallel, and, taken in a large sense, simultaneous, succession of the same forms of life throughout the world, accords well with the principle of new species having been formed by dominant species spreading widely and varying; the new species thus produced being themselves dominant, owing to their having had some advantage over their already dominant parents, as well as over other species, and again spreading, varying, and producing new forms. The old forms which are beaten and which yield their places to the new and victorious forms, will generally be allied in groups, from inheriting some inferiority in common; and therefore, as new and improved groups spread throughout the world, old groups disappear from the world; and the succession of forms everywhere tends to correspond both in their first appearance and final disappearance.
There is one other remark connected with this subject worth making. I have given my reasons for believing that most of our great formations, rich in fossils, were deposited during periods of subsidence; and that blank intervals of vast duration, as far as fossils are concerned, occurred during the periods when the bed of the sea was either stationary or rising, and likewise when sediment was not thrown down quickly enough to embed and preserve organic remains. During these long and blank intervals I suppose that the inhabitants of each region underwent a considerable amount of modification and extinction, and that there was much migration from other parts of the world. As we have reason to believe that large areas are affected by the same movement, it is probable that strictly contemporaneous formations have often been accumulated over very wide spaces in the same quarter of the world; but we are very far from having any right to conclude that this has invariably been the case, and that large areas have invariably been affected by the same movements. When two formations have been deposited in two regions during nearly, but not exactly, the same period, we should find in both, from the causes explained in the foregoing paragraphs, the same general succession in the forms of life; but the species would not exactly correspond; for there will have been a little more time in the one region than in the other for modification, extinction, and immigration.
I suspect that cases of this nature occur in Europe. Mr. Prestwich, in his admirable Memoirs on the eocene deposits of England and France, is able to draw a close general parallelism between the successive stages in the two countries; but when he compares certain stages in England with those in France, although he finds in both a curious accordance in the numbers of the species belonging to the same genera, yet the species themselves differ in a manner very difficult to account for, considering the proximity of the two areas, unless, indeed, it be assumed that an isthmus separated two seas inhabited by distinct, but contemporaneous, faunas. Lyell has made similar observations on some of the later tertiary formations. Barrande, also, shows that there is a striking general parallelism in the successive Silurian deposits of Bohemia and Scandinavia; nevertheless he finds a surprising amount of difference in the species. If the several formations in these regions have not been deposited during the same exact periods, a formation in one region often corresponding with a blank interval in the other, and if in both regions the species have gone on slowly changing during the accumulation of the several formations and during the long intervals of time between them; in this case the several formations in the two regions could be arranged in the same order, in accordance with the general succession of the forms of life, and the order would falsely appear to be strictly parallel; nevertheless the species would not be all the same in the apparently corresponding stages in the two regions.
|
Haga Palace. Drawing by Princess Eugenia (daughter of Oscar I and Queen Josefina).
Gustav III's palace building - the ruin
The history of Haga differs somewhat from that of the other Royal retreats. At Drottningholm, for example, the palace was built first and then surrounded by extensive grounds.
At Haga, the original feature is the park, which over time has gradually been joined by buildings of varying character. Gustav III's dream palace was never completed; today it is known as the ruin, with only the cellar and foundations finished.
Gustav III's Pavilion. Photo: Charless Hammarsten, IBL.
Gustav III's Pavilion
Gustav III's Pavilion (originally the King's Pavilion) could be regarded as the King's private residence at Haga and was intended to complement the official palace that was never finished. Today this pavilion is an excellent example of Gustavian architecture and interiors.
Haga Palace
Gustav III's Pavilion in Haga Park was often used by Gustav III's son and successor, Gustav IV Adolf.
Gustav IV Adolf ordered the construction of a second pavilion close by for his wife, Queen Fredrika, and their children. This building, which is now called Haga Palace, was erected between the years 1802 and 1805 according to plans drawn by the architect Carl Christoffer Gjörwell. Haga is slightly different in this respect as well. The palace was constructed as a home at Haga for the Queen and her children, and not as an official residence. It is also around the time of Gustav IV Adolf and his family that we get a first glimpse of family life in the modern sense.
Italian villa style
In terms of its style, the palace building most resembles an Italian villa, where family life is focused around the central living room. The white colouring and the temple-like design of the central section, with its temple gables and classical columns, emphasise the Italian character of the building.
Haga Palace and the Bernadotte dynasty
Haga Palace has been a much-used and much-loved home for the Bernadotte dynasty. Oskar I and his family often stayed at Haga, and for several years the palace was occupied by King Oskar's son, the 'song Prince' Gustaf. Prince Gustaf's youngest brother August and his wife Theresia lived at Haga for many years, and the first interior photographs from their home were taken during their time there in the late 19th century. These images convey a feeling of homeliness in keeping with the ideals of the end of the 19th century.
The Green salon at Haga Palace. Photo: the Royal Court
The Haga princesses
In 1932, heir presumptive Prince Gustaf Adolf married Princess Sibylla of Sachsen-Coburg and Gotha, and Haga Palace was renovated to serve as their family home. The building underwent a transformation, so that the interiors reflected the more functional style of the time rather than the historical heritage of the early 19th century. Family photographs from Haga published in books and magazines, and perhaps even more so shown on cinema newsreels, spread the image of a royal family idyll at Haga.
Guest house for the Swedish government
Haga Palace functioned as the Swedish government's guest house for distinguished visitors between 1964 and 2009. In 2009, the government transferred the royal right of disposal to Haga Palace back to H.M. The King, and the palace was placed at the disposal of the Crown Princess Couple for use as a home following the wedding on 19 June 2010.
For further information about Haga Palace and Haga in general, see 'Haga: a Royal Cultural Heritage', from the book series, 'The Royal Palaces', published 2010.
|
There are so many ways that you can do extra research and investigate many more aspects of Clare. Below are some ideas.
Your local library is an excellent source for all the information you may need: it may have some maps you can look at. Perhaps you could do a project about aspects of Clare.
There might be some more photographs or images related to Clare in the Media Bank.
Ask Your Librarian or Teacher
Ask your librarian or teacher if they can help you find out about Clare.
Information you can look out for includes photos and media publications, such as old newspapers, which may provide you with interesting insights.
http://www.museum.ie has loads of artefacts and evidence from Ireland throughout the centuries, even before your grandparents were born.
|
Examining the Sustainability Journey of Walmart
Business researchers from the University of Arkansas and the University of South Carolina will be examining Walmart’s 7-year sustainability effort as a way to create a teaching tool for other businesses that hope to become more environmentally friendly.
As researchers from the University of Arkansas and the University of South Carolina begin their project of examining Walmart's efforts over nearly a decade to become more sustainable, they hope to find ways to help other businesses overcome certain obstacles in order to be more environmentally conscious. The project will consist of a series of cases that focus on a single topic and a single organization over a specific period of time.
“The strategy is to embed sustainability within the core business supporting the company’s vision for a more sustainable Walmart. Numerous initiatives, some of which are documented in the case series, are part of this strategy that led to international recognition of Walmart’s sustainability leadership,” said David Hyatt, clinical assistant professor in the Sam M. Walton College of Business.
The cases explore a set of essential questions across three levels: societal, organizational and individual. For example, at the societal level, who should set standards for sustainability – government, society, consumers, scientists or companies? At the organizational level, who evaluates and measures sustainability, who should make decisions about strategy and how should the strategy be implemented? At the individual level, what does sustainability mean to the consumer or employee? The studies ask several other questions pertaining to each level.
The first case study will look at former CEO Lee Scott’s “Leadership in the 21st Century” speech from 2005, in which he publicly announced Walmart’s sustainability goals. Other studies will include the examination of how the company planned certain goals and processes, implementing LED lights and environmentally friendly shopping bags, and other strategies that have already been used by Walmart.
|
A tip or gratuity is an amount of money that is given to a worker, such as a waiter or waitress, who performs a service for you. A common tip amount is 15% of the cost of the meal or other service. Generally a tip is determined based on the total bill, which includes the cost of the meal and sales tax.

If a meal costs $10.00 and sales tax is 5%, the bill is $10.50. A 15% tip based on the $10.50 cost would be $1.58. The total would be $10.50 + $1.58 = $12.08. In the next lesson we will find how to quickly estimate the amount of a tip.
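The same calculation can be expressed as a short program. The following Python sketch is purely illustrative (the function name and the half-up rounding rule are assumptions, not part of the lesson); it uses the decimal module so that currency amounts round the way the worked example does:

    from decimal import Decimal, ROUND_HALF_UP

    def total_with_tip(meal_cost, tax_rate="0.05", tip_rate="0.15"):
        # Decimal avoids binary floating-point surprises when rounding money.
        cent = Decimal("0.01")
        bill = (Decimal(meal_cost) * (1 + Decimal(tax_rate))).quantize(cent, ROUND_HALF_UP)
        tip = (bill * Decimal(tip_rate)).quantize(cent, ROUND_HALF_UP)
        return bill, tip, bill + tip

    print(total_with_tip("10.00"))   # (Decimal('10.50'), Decimal('1.58'), Decimal('12.08'))

Running it on the $10.00 meal reproduces the figures above: a $10.50 bill, a $1.58 tip, and a $12.08 total.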
|
Interaction of TCP/IP and Other Protocols
It is possible to classify applications as being network-aware or network-unaware. The distinction can be made because some applications, such as Web browsers and client/server applications, need to make explicit use of an underlying network protocol. Other applications, such as standard Windows application suites, simply function within the confines of a workstation's own operating system. For these applications to make use of network file and print services, it is necessary for the NOS to provide extensions to the functions of the local operating system. The next section examines how these different types of applications can make use of the underlying network.
Application Programming Interface (API):
Application developers can write network-aware applications by accessing a set of standard procedures and functions through an Application Programming Interface (API). This interface specifies software-defined entry points that developers can use to access the functionality of the networking protocols. The use of an API enables developers to write networkable applications while being shielded from having to understand how the underlying protocols operate. Other APIs define interfaces to other system functionality.
Figure 114 provides a visual representation of how a networking API might fit within the OSI seven-layer model.
The majority of network applications have been written specifically to access a single networking protocol. This is because each NOS implementation has developed its own API as a standard.
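To make the idea of a networking API concrete, the short Python sketch below uses the Berkeley-style socket interface, the same family of calls that WinSock exposes. The host name and port are placeholders chosen for illustration; the point is that the application only names an endpoint and exchanges data, while the protocol stack beneath the API handles everything else:

    import socket

    def fetch_banner(host="example.com", port=80):
        # The API hides the TCP/IP details: the application names an
        # endpoint, sends bytes, and reads the reply.
        request = b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(request)
            return sock.recv(1024).decode(errors="replace")

    print(fetch_banner())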
Redirectors and File Sharing:
One of the main application requirements within a network is saving files on a central file store. To achieve this, NOS implementations commonly include a program known as a redirector. A redirector program extends the functionality of the workstation operating system to enable it to address remote file stores.
In a DOS/Windows environment, file storage areas are denoted with the use of letters, typically with the letters A through E being reserved for local disk drives. When a user wants to access a network file volume, it is common for the NOS to facilitate some form of mapping between a volume name and an available drive letter. After the mapping has been made, it is possible for any application to access the shared file volumes in the same way as they would access a local drive. This is because of the operation of the installed redirector program. The program sits between the workstation operating system and the NOS protocol stack and listens for application calls made to any of the mapped network drives.
The functionality of a redirector can be further clarified by considering the example of an application user attempting to save a file on a network drive. The user prompts the application to save the file on a network file volume that the NOS has mapped to the DOS drive I:. The application makes a call to the workstation operating system to complete the required file save operation. The redirector program recognises that the application is attempting to access a network drive and steps in to handle the required data transfer. If the redirector hadn't been active, the workstation operating system would have been presented with a request to save a file on a drive letter that it knew nothing about, and it would have responded with a standard error message, such as 'Invalid drive specification'.
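From the application's point of view the redirector is invisible: saving to a mapped network drive uses exactly the same call as saving to a local disk. The Python fragment below is a minimal sketch of that idea; the drive letter I: is the mapped network volume from the example above, so the code only succeeds on a workstation where that mapping actually exists:

    # The application simply writes to a path. If I: is a mapped network
    # volume, the redirector (or an NFS client) forwards the request to the
    # file server; if it is a local drive, the local disk driver handles it.
    def save_report(text, path=r"I:\reports\weekly.txt"):
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)

    save_report("Example report contents")

Either way, the application code is identical; the mapping, not the application, decides where the bytes end up.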
In a UNIX environment, similar file sharing capabilities are provided through the use of a Network File System (NFS). The use of NFS enables the workstation to access file volumes located on remote host machines as if they were extensions to the workstation's native filesystem. As such, the use of NFS on the workstation side is very similar to the use of the NOS redirector as outlined earlier. Implementations of client NFS software are available from several third-party companies. These implementations require a TCP/IP protocol stack to operate alongside the installed NOS protocol stack.
A workstation configured with both an NOS and a TCP/IP protocol stack is able to operate two independent applications that can provide file sharing access between environments. This is accomplished through the use of the redirector program, to provide access to the NOS file server, and NFS, operating on the TCP/IP protocol stack to provide access to NFS volumes on UNIX-servers.
Figure 115 illustrates how a single workstation can be utilised to access both network environments.
The indicated workstation loads the NetWare protocol software and the associated redirector software. File areas on the NetWare server are mapped as local drives F: and G:. The TCP/IP stack and NFS implementation are also loaded, and the remote UNIX file system is mounted as local drive H: on the workstation PC. Files can then be saved by any application operating on the workstation to any of the mapped drives.
NOS Gateways and Servers:
It is often more efficient to utilise an NOS server as a gateway into an existing TCP/IP network than to run dual protocol stacks upon each network client.
In Figure 116, the NetWare server has the Novell NFS Gateway software installed. The UNIX host has exported a file system via NFS, which the NetWare server has mounted to one of its drives. This file area is now available to any of the NetWare client workstations. These users are able to access the UNIX file area through the standard NetWare redirector program, removing the requirement to load a TCP/IP protocol stack and run a TCP/IP-based application on each client.
The NetWare server provides application gateway services between the IPX/SPX-based networks and the TCP/IP network. To achieve this, it is necessary for the server to load both protocol stacks. On the network clients, however, it is necessary to operate only the standard IPX/SPX protocol. The client directs application requests for resources within the UNIX network to the gateway using IPX/SPX protocols. The gateway relays these requests to the UNIX host via its TCP/IP protocol stack. In this way, the use of a gateway greatly reduces the administrative overhead required to provide network clients with access to TCP/IP hosts. Network users are able to utilise UNIX-based resources without the requirement to run multiprotocol stacks.
Figure 116 outlines a sample configuration of a NOS server as a gateway.
NOS gateways tend to be implemented in one of two ways. The first is through the operation of proxy application services. The use of a proxy service provides the user with a special set of network applications, such as Telnet, FTP, and Web browsers, that have been specifically written to operate over NOS protocols. The client applications communicate with the gateway process, which forwards the application request to the specified UNIX hosts. An alternative solution utilises a tailored version of a standard WinSock driver. This special WinSock driver provides support for standard WinSock applications, but instead of operating on an underlying TCP/IP protocol stack it communicates using IPX/SPX protocols. Yet again, communication occurs between the client workstation and the gateway application, with the gateway acting to forward application data between the client and the UNIX host. The use of the tailored WinSock driver means that network clients are able to utilise any standard WinSock application and don't have to rely on the gateway manufacturer to provide specialised application software.
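The forwarding half of such a gateway can be sketched in a few lines. The Python below is a generic, single-shot TCP relay written only to make the store-and-forward idea concrete; the listening port and the UNIX host name are invented for the example, and a local TCP client stands in for the NOS-side (IPX/SPX) transport:

    import socket

    def gateway_forward(listen_port=9000, host="unix-host.example", port=23):
        # Accept one local client (standing in for the NOS side), open a
        # TCP/IP connection to the remote host, and relay one request and
        # one reply on the client's behalf.
        with socket.create_server(("127.0.0.1", listen_port)) as server:
            client, _ = server.accept()
            with client, socket.create_connection((host, port), timeout=5) as upstream:
                upstream.sendall(client.recv(4096))   # forward the request
                client.sendall(upstream.recv(4096))   # return the reply

A production gateway would, of course, multiplex many clients and translate between transports, but the forwarding pattern is the same.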
Figure 117 shows a tailored version of a standard WinSock driver enables the network clients to use any standard WinSock application.
NOS Support for Native IP:
The major NOS vendors have recognised an increasing demand to replace their proprietary communication methods with native TCP/IP protocols. However, network applications have generally interfaced with a specific protocol. If NOS vendors were to suddenly adopt a different protocol, many of the existing network applications would no longer function. For this reason, vendors are looking for ways to replace their proprietary network protocols, but at the same time to provide a degree of backward-compatibility to protect existing applications.
For example, within NetWare it is possible to replace the standard IPX/SPX protocols with a TCP/IP protocol stack to provide standard communication between network client and server. However, within this implementation each data packet actually consists of an IPX packet enclosed within a UDP packet. The inclusion of the IPX header provides NetWare with the backward-compatibility it requires to support its existing application base. However, the inclusion of the IPX header places an additional overhead on each data packet. This overhead is likely to account for around 8 to 10 percent of the total packet size.
Other NOS vendors also provide native support for TCP/IP protocols. For example, Windows NT allows for the use of the NetBEUI protocol, TCP/IP protocols, or a combination of both. Within NT, network protocols are provided via an interface that it refers to as the Transport Driver Interface (TDI). This is a layer that is loaded toward the top of the protocol stack and is used to provide a standard interface between application environments and any underlying network protocols.
Figure 118 illustrates the location and operation of the Transport Driver Interface within Windows NT.
At the TDI interface, standard APIs such as NetBIOS and WinSock are able to interact with communication modules, principally TCP/IP and NetBEUI. The TDI model has been designed around a flexible architecture so that it can be adapted to support additional network protocols as required.
Under this networking model, applications that have been written to the NetBIOS interface can operate over an installed TCP/IP protocol stack. NetBIOS operates by assigning a unique name to every network node. The assignment and management of the NetBIOS name space results in the generation of a large amount of network traffic, because hosts send out broadcasts to all network nodes when they want to register the use of a name or need to perform name resolution. The NetBIOS over TCP/IP standard specifies a method whereby this functionality can occur over a TCP/IP protocol stack. The excessive broadcast requirements effectively limit the use of NetBIOS to small LAN environments where the necessary bandwidth is available; IP networks, on the other hand, often include wide-area links where bandwidth might not be sufficient to handle the broadcasts needed to maintain the NetBIOS name space.
|
Working within healthcare appeals to our society in a way virtually no other professional sector does: the mystique of making medical diagnoses, the desire to help those in need, the adrenalin surge that comes with working at a fast pace and in stressful situations, and the allure of earning a handsome salary. If you need further proof that we like the idea of healthcare jobs, just think about how many movies and television shows (reality, dramas, and comedies) take place or have taken place inside hospitals.
Our attraction to this industry will probably always be well-matched with the necessity to educate and employ more qualified healthcare professionals—especially right now, as a substantial portion of our population is aging and requiring more medical care. In fact, the Bureau of Labor Statistics (BLS) reports that there will be more employment growth within the healthcare and social assistance sector than in any others this decade.
Here's a snapshot of the 24 healthcare professions that we at U.S. News have labelled the best to break into, either this year or in the years to come.
The Doctors Are In
What's a list of healthcare jobs without doctors? For 2013, we highlight a handful of professions that utilize this title, although the long road to earning the honor is different for each job. Whether you choose to be a Medical Doctor (M.D.), a Doctor of Dental Medicine (D.M.D.), a Doctor of Pharmacy (PharmD), or even a Doctor of Veterinary Medicine (D.V.M.), you can expect to spend at least two years following undergrad completing a professional degree and residency program. Some medical specialties require up to eight years working as a resident.
The initials behind your name are only part of the payoff for all those years of training: Doctors are imperative to providing quality healthcare, as they're the ones who make the medical diagnoses and final decisions on how to treat patients. The four categories of doctors we highlight this year could together see nearly 300,000 new hires between 2010 and 2020.
Expected Openings: 27,600
Expected Openings: 69,700
Expected Openings: 168,300
Expected Openings: 22,000
Some of the most significant work in a healthcare facility is performed by medical secretaries, technologists, and technicians. And like the doctors, therapists, and nurses who they support, these workers undergo specialized training to become qualified to properly operate complex medical equipment, decipher prescription orders, prepare patients for procedures, keep detailed medical records, and possibly even perform initial analyses and medical examinations.
However, you won't find yourself in a four-years-or-more learning purgatory (er, training period) to enter one of these six positions from our Best Jobs list. And job prospects are excellent, as healthcare facilities strive to meet the demand to treat more patients by hiring these types of workers to provide general care and free up registered nurses, therapists, and doctors. Keep in mind that technologists are senior to technicians, typically earn higher salaries, and often need a bachelor's degree and credentials.
Expected Openings: 23,800
Expected Openings: 23,400
Expected Openings: 210,200
Expected Openings: 108,300
|
Tucked inside Carl Zimmer's wonderful and thorough feature on de-extinction, a topic that got a TEDx coming out party last week, we find a tantalizing, heartbreaking anecdote about the time scientists briefly, briefly brought an extinct species back to life.
The story begins in 1999, when scientists determined that there was a single remaining bucardo, a wild goat native to the Pyrenees, left in the world. They named her Celia and wildlife veterinarian Alberto Fernández-Arias put a radio collar around her neck. She died nine months later in January 2000, crushed by a tree. Her cells, however, were preserved.
Working with the time's crude life sciences tools, José Folch led a Franco-Spanish team that attempted to bring the bucardo, as a species, back from the dead.
It was not pretty. They injected the nuclei from Celia's cells into goat eggs that had been emptied of their DNA, then implanted 57 of them into different goat surrogate mothers. Only seven goats got pregnant, and of those, six had miscarriages. Which meant that after all that work, only a single goat carried a Celia clone to term. On July 30, 2003, the scientists performed a cesarean section.
Here, let's turn the narrative over to Zimmer's story:
As Fernández-Arias held the newborn bucardo in his arms, he could see that she was struggling to take in air, her tongue jutting grotesquely out of her mouth. Despite the efforts to help her breathe, after a mere ten minutes Celia's clone died. A necropsy later revealed that one of her lungs had grown a gigantic extra lobe as solid as a piece of liver. There was nothing anyone could have done.
A species had been brought back. And ten minutes later it was gone again. Zimmer continues:
The notion of bringing vanished species back to life--some call it de-extinction--has hovered at the boundary between reality and science fiction for more than two decades, ever since novelist Michael Crichton unleashed the dinosaurs of Jurassic Park on the world. For most of that time the science of de-extinction has lagged far behind the fantasy. Celia's clone is the closest that anyone has gotten to true de-extinction. Since witnessing those fleeting minutes of the clone's life, Fernández-Arias, now the head of the government of Aragon's Hunting, Fishing and Wetlands department, has been waiting for the moment when science would finally catch up, and humans might gain the ability to bring back an animal they had driven extinct.
"We are at that moment," he told me.
That may be. And the tools available to biologists are certainly superior. But there's no developed ethics of de-extinction, as Zimmer elucidates throughout his story. It may be possible to bring animals that humans have killed off back from extinction, but is it wise, Zimmer asks?
"The history of putting species back after they've gone extinct in the wild is fraught with difficulty," says conservation biologist Stuart Pimm of Duke University. A huge effort went into restoring the Arabian oryx to the wild, for example. But after the animals were returned to a refuge in central Oman in 1982, almost all were wiped out by poachers. "We had the animals, and we put them back, and the world wasn't ready," says Pimm. "Having the species solves only a tiny, tiny part of the problem."
Maybe another way to think about it, as Jacquelyn Gill argues in Scientific American, is that animals like mammoths have to perform (as the postmodern language would have it) their own mammothness within the complex social context of a herd.
When we think of cloning woolly mammoths, it's easy to picture a rolling tundra landscape, the charismatic hulking beasts grazing lazily amongst arctic wildflowers. But what does cloning a woolly mammoth actually mean? What is a woolly mammoth, really? Is one lonely calf, raised in captivity and without the context of its herd and environment, really a mammoth?
Does it matter that there are no mammoth matriarchs to nurse that calf, to inoculate it with necessary gut bacteria, to teach it how to care for itself, how to speak to other mammoths, where the ancestral migration paths are, and how to avoid sinkholes and find water? Does it matter that the permafrost is melting, and that the mammoth steppe is gone?...
Ultimately, cloning woolly mammoths doesn't end in the lab. If the goal really is de-extinction and not merely the scientific equivalent of achievement unlocked!, then bringing back the mammoth means sustained effort, intensive management, and a massive commitment of conservation resources. Our track record on this is not reassuring.
In other words, science may be able to produce the organisms, but society would have to produce the conditions in which they could flourish.
|
Institute for the Study of Earth, Oceans, and Space at UNH
Scientists Say Developing Countries Will Be Hit Hard By Water Scarcity in the 21st Century
By Sharon Keeler
UNH News Bureau
July 11, 2001
DURHAM, N.H. --The entire water cycle of the globe has been changed by human activities and even more dramatic changes lie ahead, said a group of experts at an international conference in Amsterdam on global change this week.
"Today, approximately 2 billion people are suffering from water stress, and models predict that this will increase to more than 3 billion (or about 40 percent of the population) in 2025," said Charles Vorosmarty, a research professor in the University of New Hampshire's Institute for the Study of Earth, Oceans, and Space.
There will be winners and losers in terms of access to safe water. The world's poor nations will be the biggest losers. Countries already suffering severe water shortages, such as Mexico, Pakistan, northern China, Poland and countries in the Middle East and sub-Saharan Africa will be hardest hit.
"Water scarcity means a growing number of public health, pollution and economic development problems," said Vorosmarty.
"To avoid major conflict through competition for water resources, we urgently need international water use plans," added Professor Hartmut Grassl from the Max-Planck-Institute for Meteorology in Germany. "I believe this should be mediated by an established intergovernmental body."
The water cycle is affected by climate change, population growth, increasing water demand, changes in vegetation cover and finally the El Nino Southern Oscillation, bringing drought to some areas and flooding to others. Surprisingly, at the global scale, population growth and increasing demand for water -- not climate change -- are the primary contributing factors in future water scarcity to the year 2025.
"But at the regional scale, which is where all the critical decisions are made, it is the combination of population growth, increasing demand for water, and climate change that is the main culprit," said Vorosmarty.
According to El Nino expert Professor Antonio Busalacchi of the University of Maryland, the two major El Nino events of the century occurred in the last 15 years and there are signs that the frequency may increase due to human activities.
"In 1982-83, what was referred to as the 'El Nino event of the century' occurred with global economic consequences totaling more than $13 billion," said Busalacchi. "The recently concluded 1997-1998 El Nino was the second El Nino event of the century with economic losses estimated to be upward of $89 billion."
|
HAMBURG, Germany, June 19, 2012 — The European X-ray Free-Electron Laser (European XFEL) international research facility has overcome one of its most difficult building phases: completion of a 3.6-mile-long network of tunnels. By 2015, laserlike x-ray flashes that enable new insights into the nanoworld will be generated in the tunnels by scientists worldwide.
“Electrons will fly with almost the speed of light from DESY [Deutsches Elektronen Synchrotron] in Hamburg to Osdorf,” said professor Robert Feidenhans’l, chairman of the European XFEL Council.
Tunnel XTL on Dec. 6, 2011. (Image: ©European XFEL)
Tunnel construction began in July 2010 with the tunnel boring machine TULA (TUnnel for LAser), which concluded excavation in August 2011. A second boring machine was used from January 2011 right up until the last section of the five photon tunnels leading into the experiment hall was completed. (See: Europe signs on to big x-ray facility.)
The accelerator tunnel, the longest in the facility, runs in a straight line for 1.3 mi through Hamburg’s underground. It branches out into five photon tunnels, which lead into the future experiment hall. Undulator tunnels, which contain special magnet structures that send the accelerated, bundled electrons on a slalom course — inducing them to emit intense flashes of x-ray radiation — are set between the accelerator and photon tunnels.
To generate the extremely short and intense x-ray flashes, bunches of high-energy electrons are directed through special arrangements of magnets (undulators). (Image: ©European XFEL, Design: Marc Hermann, tricklabor)
The tunnels will be equipped with safety devices and infrastructure before the main components of the facility are installed. These include the superconducting electron linear accelerator, whose development, installation and operation will be conducted by DESY, and the photon tunnels, undulator lines and experiment hall, whose equipment and instrument installation will be led by the European XFEL.
It is expected that scientists will be able to produce x-ray radiation for the first time here in 2015, generating up to 27,000 flashes per second — nearly 10 sextillion times brighter than the sun.
More than 400 participants, including guests from politics and science as well as staff from collaborating companies, attended the June 14 ceremony upon completion of the tunnel. (Image: ©European XFEL)
“We expect great success for the life sciences, material sciences and nanotechnology when research at the European XFEL begins,” said Dr. Beatrix Vierkorn-Rudolph, head of the Subsection for Large Facilities, Energy and Basic Research, as well as the ESFRI Special Task of the German Federal Ministry of Education and Research. “The just-completed tunnel connects not only Hamburg and Schleswig-Holstein, but also scientists throughout Europe and beyond.”
For more information, visit: www.xfel.eu
|
Give the students some white wax crayons and tell them to lay down a thick layer on manila paper. Then tell them to cut out high-contrast photos (without letters) from newspapers -- color comics work, too. Turn the image over onto the wax layer and burnish well -- you can use the round handle end of scissors, wooden spoons, clay tools, whatever. The image from the newsprint will transfer into the white crayon. You can have them create captions, or extend other ideas for images. The kids really get into the physical aspects of preparing these transfers and you are free to concentrate on your mural painters.
Ann-on-y-mouse in Columbus
how to manage the mural painters out in the hall, while supervising the rest of the class. Any ideas for less teacher-intensive projects with a high enough engagement level that students would be able to work on out in the hall?
|
Significance and Use
Sediment provides habitat for many aquatic organisms and is a major repository for many of the more persistent chemicals that are introduced into surface waters. In the aquatic environment, most anthropogenic chemicals and waste materials, including toxic organic and inorganic chemicals, eventually accumulate in sediment. Mounting evidence exists of environmental degradation in areas where USEPA Water Quality Criteria (WQC; Stephan et al. (67)) are not exceeded, yet organisms in or near sediments are adversely affected (Chapman, 1989 (68)). The WQC were developed to protect organisms in the water column and were not directed toward protecting organisms in sediment. Concentrations of contaminants in sediment may be several orders of magnitude higher than in the overlying water; however, whole sediment concentrations have not been strongly correlated to bioavailability (Burton, 1991 (69)). Partitioning or sorption of a compound between water and sediment may depend on many factors, including: aqueous solubility, pH, redox, affinity for sediment organic carbon and dissolved organic carbon, grain size of the sediment, sediment mineral constituents (oxides of iron, manganese, and aluminum), and the quantity of acid volatile sulfides in sediment (Di Toro et al., 1991 (70); Giesy et al., 1988 (71)). Although certain chemicals are highly sorbed to sediment, these compounds may still be available to the biota. Chemicals in sediments may be directly toxic to aquatic life or can be a source of chemicals for bioaccumulation in the food chain.
The objective of a sediment test is to determine whether chemicals in sediment are harmful to or are bioaccumulated by benthic organisms. The tests can be used to measure interactive toxic effects of complex chemical mixtures in sediment. Furthermore, knowledge of specific pathways of interactions among sediments and test organisms is not necessary to conduct the tests (Kemp et al., 1988 (72)). Sediment tests can be used to: (1) determine the relationship between toxic effects and bioavailability, (2) investigate interactions among chemicals, (3) compare the sensitivities of different organisms, (4) determine spatial and temporal distribution of contamination, (5) evaluate hazards of dredged material, (6) measure toxicity as part of product licensing or safety testing, (7) rank areas for clean up, and (8) estimate the effectiveness of remediation or management practices.
A variety of methods have been developed for assessing the toxicity of chemicals in sediments using amphipods, midges, polychaetes, oligochaetes, mayflies, or cladocerans (Test Method E 1706, Guide E 1525, Guide E 1850; Annex A1, Annex A2; USEPA, 2000 (73); EPA, 1994b (74); Environment Canada, 1997a (75); Environment Canada, 1997b (76)). Several endpoints are suggested in these methods to measure potential effects of contaminants in sediment, including survival, growth, behavior, or reproduction; however, survival of test organisms in 10-day exposures is the endpoint most commonly reported. These short-term exposures that only measure effects on survival can be used to identify high levels of contamination in sediments, but may not be able to identify moderate levels of contamination in sediments (USEPA, 2000 (73); Sibley et al., 1996 (77); Sibley et al., 1997a (78); Sibley et al., 1997b (79); Benoit et al., 1997 (80); Ingersoll et al., 1998 (81)). Sublethal endpoints in sediment tests might also prove to be better estimates of responses of benthic communities to contaminants in the field (Kembel et al., 1994 (82)). Insufficient information is available to determine if the long-term test conducted with Leptocheirus plumulosus (Annex A2) is more sensitive than 10-d toxicity tests conducted with this or other species.
The decision to conduct short-term or long-term toxicity tests depends on the goal of the assessment. In some instances, sufficient information may be gained by measuring sublethal endpoints in 10-day tests. In other instances, the 10-day tests could be used to screen samples for toxicity before long-term tests are conducted. While the long-term tests are needed to determine direct effects on reproduction, measurement of growth in these toxicity tests may serve as an indirect estimate of reproductive effects of contaminants associated with sediments (Annex A1).
Use of sublethal endpoints for assessment of contaminant risk is not unique to toxicity testing with sediments. Numerous regulatory programs require the use of sublethal endpoints in the decision-making process (Pittinger and Adams, 1997, (83)) including: (1) Water Quality Criteria (and State Standards); (2) National Pollution Discharge Elimination System (NPDES) effluent monitoring (including chemical-specific limits and sublethal endpoints in toxicity tests); (3) Federal Insecticide, Rodenticide and Fungicide Act (FIFRA) and the Toxic Substances Control Act (TSCA, tiered assessment includes several sublethal endpoints with fish and aquatic invertebrates); (4) Superfund (Comprehensive Environmental Responses, Compensation and Liability Act; CERCLA); (5) Organization of Economic Cooperation and Development (OECD, sublethal toxicity testing with fish and invertebrates); (6) European Economic Community (EC, sublethal toxicity testing with fish and invertebrates); and (7) the Paris Commission (behavioral endpoints).
Results of toxicity tests on sediments spiked at different concentrations of chemicals can be used to establish cause and effect relationships between chemicals and biological responses. Results of toxicity tests with test materials spiked into sediments at different concentrations may be reported in terms of an LC50 (median lethal concentration), an EC50 (median effect concentration), an IC50 (inhibition concentration), or as a NOEC (no observed effect concentration) or LOEC (lowest observed effect concentration). However, spiked sediment may not be representative of chemicals associated with sediment in the field. Mixing time (Stemmer et al., 1990b (84)), aging (Landrum et al., 1989 (85); Word et al., 1987 (86); Landrum et al., 1992 (87)), and the chemical form of the material can affect responses of test organisms in spiked sediment tests.
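Point estimates such as the LC50 are typically obtained by fitting a concentration-response model to the survival data from the spiked-sediment treatments. The following is a minimal sketch, for illustration only and not a procedure of this standard, of one such fit in Python using a log-logistic model; the concentrations, survival proportions, and starting values are hypothetical.

```python
# Illustrative sketch only (not part of this standard): estimating an LC50 by
# fitting a two-parameter log-logistic survival model to spiked-sediment data.
# All concentrations and survival proportions below are hypothetical values.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([1.0, 3.2, 10.0, 32.0, 100.0])       # mg/kg dry weight (hypothetical)
survival = np.array([0.95, 0.90, 0.70, 0.30, 0.05])  # proportion surviving (hypothetical)

def log_logistic(c, lc50, slope):
    """Predicted survival fraction at concentration c."""
    return 1.0 / (1.0 + (c / lc50) ** slope)

(lc50_est, slope_est), _ = curve_fit(log_logistic, conc, survival, p0=[10.0, 1.0])
print(f"Estimated LC50 ~ {lc50_est:.1f} mg/kg (slope {slope_est:.2f})")
```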
Evaluating effect concentrations for chemicals in sediment requires knowledge of factors controlling their bioavailability. Similar concentrations of a chemical in units of mass of chemical per mass of sediment dry weight often exhibit a range in toxicity in different sediments (Di Toro et al., 1990 (88); Di Toro et al., 1991 (70)). Effect concentrations of chemicals in sediment have been correlated to interstitial water concentrations, and effect concentrations in interstitial water are often similar to effect concentrations in water-only exposures. The bioavailability of nonionic organic compounds in sediment is often inversely correlated with the organic carbon concentration. Whatever the route of exposure, these correlations of effect concentrations to interstitial water concentrations indicate that predicted or measured concentrations in interstitial water can be used to quantify the exposure concentration to an organism. Therefore, information on partitioning of chemicals between solid and liquid phases of sediment is useful for establishing effect concentrations (Di Toro et al., 1991 (70)).
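As a rough illustration of the organic-carbon-based partitioning just described (not a calculation prescribed by this standard), the simple equilibrium-partitioning prediction for a nonionic organic chemical is that the interstitial water concentration equals the bulk sediment concentration divided by the product of the organic carbon fraction and the organic carbon partition coefficient (Koc). A minimal sketch, with purely hypothetical sediment and Koc values:

```python
# Illustrative sketch only: equilibrium-partitioning estimate of interstitial
# (pore) water concentration for a nonionic organic chemical, assuming
# C_iw = C_sed / (f_oc * Koc). All numbers below are hypothetical.

def predicted_porewater_conc(c_sed_mg_per_kg, f_oc, koc_l_per_kg_oc):
    """Return the predicted interstitial water concentration in mg/L."""
    return c_sed_mg_per_kg / (f_oc * koc_l_per_kg_oc)

# Example: 50 mg/kg dry weight in a sediment with 2 % organic carbon and an
# assumed Koc of 10,000 L/kg organic carbon.
c_iw = predicted_porewater_conc(50.0, 0.02, 1.0e4)
print(f"Predicted interstitial water concentration ~ {c_iw:.2f} mg/L")  # ~0.25 mg/L
```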
Field surveys can be designed to provide either a qualitative reconnaissance of the distribution of sediment contamination or a quantitative statistical comparison of contamination among sites.
Surveys of sediment toxicity are usually part of more comprehensive analyses of biological, chemical, geological, and hydrographic data. Statistical correlations may be improved and sampling costs may be reduced if subsamples are taken simultaneously for sediment tests, chemical analyses, and benthic community structure.
Table 2 lists several approaches the USEPA has considered for the assessment of sediment quality (USEPA, 1992 (89)). These approaches include: (1) equilibrium partitioning, (2) tissue residues, (3) interstitial water toxicity, (4) whole-sediment toxicity and sediment-spiking tests, (5) benthic community structure, (6) effect ranges (for example, effect range median, ERM), and (7) sediment quality triad (see USEPA, 1989a, 1990a, 1990b, and 1992b (90, 91, 92, 93) and Wenning and Ingersoll, 2002 (94), for a critique of these methods). The sediment assessment approaches listed in Table 2 can be classified as numeric (for example, equilibrium partitioning), descriptive (for example, whole-sediment toxicity tests), or a combination of numeric and descriptive approaches (for example, ERM; USEPA, 1992c (95)). Numeric methods can be used to derive chemical-specific sediment quality guidelines (SQGs). Descriptive methods such as toxicity tests with field-collected sediment cannot be used alone to develop numerical SQGs for individual chemicals. Although each approach can be used to make site-specific decisions, no single approach can adequately address sediment quality. Overall, an integration of several methods using the weight of evidence is the most desirable approach for assessing the effects of contaminants associated with sediment (Long et al., 1991 (96); MacDonald et al., 1996 (97); Ingersoll et al., 1996 (98); Ingersoll et al., 1997 (99); Wenning and Ingersoll, 2002 (94)). Hazard evaluations integrating data from laboratory exposures, chemical analyses, and benthic community assessments (the sediment quality triad) provide strong complementary evidence of the degree of pollution-induced degradation in aquatic communities (Burton, 1991 (69); Chapman, 1992, 1997 (100, 101)).
Regulatory Applications—Test Method E 1706 provides information on the regulatory applications of sediment toxicity tests.
The USEPA Environmental Monitoring Management Council (EMMC) recommended the use of performance-based methods in developing standards (Williams, 1993 (102)). Performance-based methods were defined by EMMC as a monitoring approach which permits the use of appropriate methods that meet preestablished demonstrated performance standards (11.2).
The USEPA Office of Water, Office of Science and Technology, and Office of Research and Development held a workshop to provide an opportunity for experts in the field of sediment toxicology and staff from the USEPA Regional and Headquarters Program offices to discuss the development of standard freshwater, estuarine, and marine sediment testing procedures (USEPA, 1992a, 1994a (89, 103)). Workgroup participants arrived at a consensus on several culturing and testing methods. In developing guidance for culturing test organisms to be included in the USEPA methods manual for sediment tests, it was agreed that no one method should be required to culture organisms. However, the consensus at the workshop was that success of a test depends on the health of the cultures. Therefore, having healthy test organisms of known quality and age for testing was determined to be the key consideration relative to culturing methods. A performance-based criteria approach was selected in USEPA, 2000 (73) as the preferred method through which individual laboratories could use unique culturing methods rather than requiring use of one culturing method.
This standard recommends the use of performance-based criteria to allow each laboratory to optimize culture methods and minimize effects of test organism health on the reliability and comparability of test results. See Annex A1 and Annex A2 for a listing of performance criteria for culturing or testing.
1.1 This test method covers procedures for testing estuarine or marine organisms in the laboratory to evaluate the toxicity of contaminants associated with whole sediments. Sediments may be collected from the field or spiked with compounds in the laboratory. General guidance is presented in Sections 1-15 for conducting sediment toxicity tests with estuarine or marine amphipods. Specific guidance for conducting 10-d sediment toxicity tests with estuarine or marine amphipods is outlined in Annex A1 and specific guidance for conducting 28-d sediment toxicity tests with Leptocheirus plumulosus is outlined in Annex A2.
1.2 Procedures are described for testing estuarine or marine amphipod crustaceans in 10-d laboratory exposures to evaluate the toxicity of contaminants associated with whole sediments (Annex A1; USEPA 1994a (1)). Sediments may be collected from the field or spiked with compounds in the laboratory. A toxicity method is outlined for four species of estuarine or marine sediment-burrowing amphipods found within United States coastal waters. The species are Ampelisca abdita, a marine species that inhabits marine and mesohaline portions of the Atlantic coast, the Gulf of Mexico, and San Francisco Bay; Eohaustorius estuarius, a Pacific coast estuarine species; Leptocheirus plumulosus, an Atlantic coast estuarine species; and Rhepoxynius abronius, a Pacific coast marine species. Generally, the method described may be applied to all four species, although acclimation procedures and some test conditions (that is, temperature and salinity) will be species-specific (Section 12 and Annex A1). The toxicity test is conducted in 1-L glass chambers containing 175 mL of sediment and 775 mL of overlying seawater. Exposure is static (that is, water is not renewed), and the animals are not fed over the 10-d exposure period. The endpoint in the toxicity test is survival, with reburial of surviving amphipods as an additional measurement that can be used as an endpoint for some of the test species (for R. abronius and E. estuarius). Performance criteria established for this test include the requirement that average survival of amphipods in the negative control treatment be greater than or equal to 90 %. Procedures are described for use with sediments with pore-water salinity ranging from >0 o/oo to fully marine.
1.3 A procedure is also described for determining the chronic toxicity of contaminants associated with whole sediments with the amphipod Leptocheirus plumulosus in laboratory exposures (Annex A2; USEPA-USACE 2001 (2)). The toxicity test is conducted for 28 d in 1-L glass chambers containing 175 mL of sediment and about 775 mL of overlying water. Test temperature is 25 ± 2°C, and the recommended overlying water salinity is 5 o/oo ± 2 o/oo (for test sediment with pore water at 1 o/oo to 10 o/oo) or 20 o/oo ± 2 o/oo (for test sediment with pore water >10 o/oo). Four hundred millilitres of overlying water is renewed three times per week, at which times test organisms are fed. The endpoints in the toxicity test are survival, growth, and reproduction of amphipods. Performance criteria established for this test include the requirement that average survival of amphipods in the negative control treatment be greater than or equal to 80 % and that there be measurable growth and reproduction in all replicates of the negative control treatment. This test is applicable for use with sediments from oligohaline to fully marine environments, with a silt content greater than 5 % and a clay content less than 85 %.
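The control-survival performance criteria quoted in 1.2 and 1.3 amount to a simple check on the negative control replicates. A minimal sketch, for illustration only and not part of the standard, with hypothetical replicate counts (the 0.80 threshold below corresponds to the 28-d L. plumulosus test; 0.90 would apply to the 10-d amphipod test):

```python
# Illustrative sketch only: checking whether mean negative-control survival
# meets a performance criterion. Replicate counts below are hypothetical.

def control_survival_ok(surviving, stocked, minimum_fraction):
    """Return (mean control survival, True if the criterion is met)."""
    fractions = [s / n for s, n in zip(surviving, stocked)]
    mean_survival = sum(fractions) / len(fractions)
    return mean_survival, mean_survival >= minimum_fraction

mean_surv, ok = control_survival_ok(
    surviving=[19, 18, 20, 17, 19],
    stocked=[20, 20, 20, 20, 20],
    minimum_fraction=0.80)  # 28-d L. plumulosus criterion
print(f"Mean control survival {mean_surv:.0%}; criterion met: {ok}")
```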
1.4 A salinity of 5 or 20 o/oo is recommended for routine application of the 28-d test with L. plumulosus (Annex A2; USEPA-USACE 2001 (2)) and a salinity of 20 o/oo is recommended for routine application of the 10-d test with E. estuarius or L. plumulosus (Annex A1). However, the salinity of the overlying water for tests with these two species can be adjusted to a specific salinity of interest (for example, a salinity representative of the site of interest, or the objective of the study may be to evaluate the influence of salinity on the bioavailability of chemicals in sediment). More importantly, the salinity tested must be within the tolerance range of the test organisms (as outlined in Annex A1 and Annex A2). If tests are conducted with procedures different from those described in 1.3 or in Table A1.1 (for example, different salinity, lighting, temperature, feeding conditions), additional tests are required to determine comparability of results (1.10). If there is not a need to make comparisons among studies, then the test could be conducted just at a selected salinity for the sediment of interest.
1.5 Future revisions of this standard may include additional annexes describing whole-sediment toxicity tests with other groups of estuarine or marine invertebrates (for example, information presented in Guide E 1611 on sediment testing with polychaetes could be added as an annex to future revisions to this standard). Future editions to this standard may also include methods for conducting the toxicity tests in smaller chambers with less sediment (Ho et al. 2000 (3), Ferretti et al. 2002 (4)).
1.6 Procedures outlined in this standard are based primarily on procedures described in the USEPA (1994a (1)), USEPA-USACE (2001(2)), Test Method E 1706, and Guides E 1391, E 1525, E 1688, Environment Canada (1992 (5)), DeWitt et al. (1992a (6); 1997a (7)), Emery et al. (1997 (8)), and Emery and Moore (1996 (9)), Swartz et al. (1985 (10)), DeWitt et al. (1989 (11)), Scott and Redmond (1989 (12)), and Schlekat et al. (1992 (13)).
1.7 Additional sediment toxicity research and methods development are now in progress to (1) refine sediment spiking procedures, (2) refine sediment dilution procedures, (3) refine sediment Toxicity Identification Evaluation (TIE) procedures, (4) produce additional data on confirmation of responses in laboratory tests with natural populations of benthic organisms (that is, field validation studies), and (5) evaluate relative sensitivity of endpoints measured in 10- and 28-d toxicity tests using estuarine or marine amphipods. This information will be described in future editions of this standard.
1.8 Although standard procedures are described in Annex A2 of this standard for conducting chronic sediment tests with L. plumulosus, further investigation of certain issues could aid in the interpretation of test results. Some of these issues include further investigation to evaluate the relative toxicological sensitivity of the lethal and sublethal endpoints to a wide variety of chemicals spiked in sediment and to mixtures of chemicals in sediments from contamination gradients in the field (USEPA-USACE 2001 (2)). Additional research is needed to evaluate the ability of the lethal and sublethal endpoints to estimate the responses of populations and communities of benthic invertebrates to contaminated sediments. Research is also needed to link the toxicity test endpoints to a field-validated population model of L. plumulosus that would then generate estimates of population-level responses of the amphipod to test sediments and thereby provide additional ecologically relevant interpretive guidance for the laboratory toxicity test.
1.9 This standard outlines specific test methods for evaluating the toxicity of sediments with A. abdita, E. estuarius, L. plumulosus, and R. abronius. While standard procedures are described in this standard, further investigation of certain issues could aid in the interpretation of test results. Some of these issues include the effect of shipping on organism sensitivity, additional performance criteria for organism health, sensitivity of various populations of the same test species, and confirmation of responses in laboratory tests with natural benthos populations.
1.10 General procedures described in this standard might be useful for conducting tests with other estuarine or marine organisms (for example, Corophium spp., Grandidierella japonica, Lepidactylus dytiscus, Streblospio benedicti), although modifications may be necessary. Results of tests, even those with the same species, using procedures different from those described in the test method may not be comparable and using these different procedures may alter bioavailability. Comparison of results obtained using modified versions of these procedures might provide useful information concerning new concepts and procedures for conducting sediment tests with aquatic organisms. If tests are conducted with procedures different from those described in this test method, additional tests are required to determine comparability of results. General procedures described in this test method might be useful for conducting tests with other aquatic organisms; however, modifications may be necessary.
1.11 Selection of Toxicity Testing Organisms:
1.11.1 The choice of a test organism has a major influence on the relevance, success, and interpretation of a test. Furthermore, no one organism is best suited for all sediments. The following criteria were considered when selecting test organisms to be described in this standard (Table 1 and Guide E 1525). Ideally, a test organism should: (1) have a toxicological database demonstrating relative sensitivity to a range of contaminants of interest in sediment, (2) have a database for interlaboratory comparisons of procedures (for example, round-robin studies), (3) be in direct contact with sediment, (4) be readily available from culture or through field collection, (5) be easily maintained in the laboratory, (6) be easily identified, (7) be ecologically or economically important, (8) have a broad geographical distribution, be indigenous (either present or historical) to the site being evaluated, or have a niche similar to organisms of concern (for example, similar feeding guild or behavior to the indigenous organisms), (9) be tolerant of a broad range of sediment physico-chemical characteristics (for example, grain size), and (10) be compatible with selected exposure methods and endpoints (Guide E 1525). Methods utilizing selected organisms should also be (11) peer reviewed (for example, journal articles) and (12) confirmed with responses with natural populations of benthic organisms.
1.11.2 Of these criteria (Table 1), a database demonstrating relative sensitivity to contaminants, contact with sediment, ease of culture in the laboratory or availability for field-collection, ease of handling in the laboratory, tolerance to varying sediment physico-chemical characteristics, and confirmation with responses with natural benthic populations were the primary criteria used for selecting A. abdita, E. estuarius, L. plumulosus, and R. abronius for the current edition of this standard for 10-d sediment tests (Annex A1). The species chosen for this method are intimately associated with sediment, due to their tube-dwelling or free-burrowing, and sediment-ingesting nature. Amphipods have been used extensively to test the toxicity of marine, estuarine, and freshwater sediments (Swartz et al., 1985 (10); DeWitt et al., 1989 (11); Scott and Redmond, 1989 (12); DeWitt et al., 1992a (6); Schlekat et al., 1992 (13)). The selection of test species for this standard followed the consensus of experts in the field of sediment toxicology who participated in a workshop entitled “Testing Issues for Freshwater and Marine Sediments”. The workshop was sponsored by USEPA Office of Water, Office of Science and Technology, and Office of Research and Development, and was held in Washington, D.C. from 16-18 September 1992 (USEPA, 1992 (14)). Of the candidate species discussed at the workshop, A. abdita, E. estuarius, L. plumulosus, and R. abronius best fulfilled the selection criteria, and offered a combination of one estuarine and one marine species each for both the Atlantic (the estuarine L. plumulosus and the marine A. abdita) and Pacific (the estuarine E. estuarius and the marine R. abronius) coasts. Ampelisca abdita is also native to portions of the Gulf of Mexico and San Francisco Bay. Many other organisms that might be appropriate for sediment testing do not now meet these selection criteria because little emphasis has been placed on developing standardized testing procedures for benthic organisms. For example, a fifth species, Grandidierella japonica, was not selected because workshop participants felt that the use of this species was not sufficiently broad to warrant standardization of the method. Environment Canada (1992 (5)) has recommended the use of the following amphipod species for sediment toxicity testing: Amphiporeia virginiana, Corophium volutator, Eohaustorius washingtonianus, Foxiphalus xiximeus, and Leptocheirus pinguis. A database similar to those available for A. abdita, E. estuarius, L. plumulosus, and R. abronius must be developed in order for these and other organisms to be included in future editions of this standard.
1.11.3 The primary criterion used for selecting L. plumulosus for chronic testing of sediments was that this species is found in both oligohaline and mesohaline regions of estuaries on the East Coast of the United States and is tolerant to a wide range of sediment grain size distribution (USEPA-USACE 2001 (2), Annex A2). This species is easily cultured in the laboratory and has a relatively short generation time (that is, about 24 d at 23°C, DeWitt et al. 1992a (6)) that makes this species adaptable to chronic testing (Section 12).
1.11.4 An important consideration in the selection of specific species for test method development is the existence of information concerning relative sensitivity of the organisms both to single chemicals and complex mixtures. Several studies have evaluated the sensitivities of A. abdita, E. estuarius, L. plumulosus, or R. abronius, either relative to one another, or to other commonly tested estuarine or marine species. For example, the sensitivity of marine amphipods was compared to other species that were used in generating saltwater Water Quality Criteria. Seven amphipod genera, including Ampelisca abdita and Rhepoxynius abronius, were among the test species used to generate saltwater Water Quality Criteria for 12 chemicals. Acute amphipod toxicity data from 4-d water-only tests for each of the 12 chemicals was compared to data for (1) all other species, (2) other benthic species, and (3) other infaunal species. Amphipods were generally of median sensitivity for each comparison. The average percentile rank of amphipods among all species tested was 57 %; among all benthic species, 56 %; and, among all infaunal species, 54 %. Thus, amphipods are not uniquely sensitive relative to all species, benthic species, or even infaunal species (USEPA 1994a (1)). Additional research may be warranted to develop tests using species that are consistently more sensitive than amphipods, thereby offering protection to less sensitive groups.
1.11.5 Williams et al. (1986 (15)) compared the sensitivity of the R. abronius 10-d whole sediment test, the oyster embryo (Crassostrea gigas) 48-h abnormality test, and the bacterium (Vibrio fischeri) 1-h luminescence inhibition test (that is, the Microtox test) to sediments collected from 46 contaminated sites in Commencement Bay, WA. Rhepoxynius abronius were exposed to whole sediment, while the oyster and bacterium tests were conducted with sediment elutriates and extracts, respectively. Microtox was the most sensitive test, with 63 % of the sites eliciting significant inhibition of luminescence. Significant mortality of R. abronius was observed in 40 % of test sediments, and oyster abnormality occurred in 35 % of sediment elutriates. Complete concordance (that is, sediments that were either toxic or not-toxic in all three tests) was observed in 41 % of the sediments. Possible sources for the lack of concordance at other sites include interspecific differences in sensitivity among test organisms, heterogeneity in contaminant types associated with test sediments, and differences in routes of exposure inherent in each toxicity test. These results highlight the importance of using multiple assays when performing sediment assessments.
1.11.6 Several studies have compared the sensitivity of combinations of the four amphipods to sediment contaminants. For example, there are several comparisons between A. abdita and R. abronius, between E. estuarius and R. abronius, and between A. abdita and L. plumulosus. There are fewer examples of direct comparisons between E. estuarius and L. plumulosus, and no examples comparing L. plumulosus and R. abronius. There is some overlap in relative sensitivity from comparison to comparison within each species combination, which appears to indicate that all four species are within the same range of relative sensitivity to contaminated sediments.
1.11.6.1 Word et al. (1989 (16)) compared the sensitivity of A. abdita and R. abronius to contaminated sediments in a series of experiments. Both species were tested at 15°C. Experiments were designed to compare the response of the organism rather than to provide a comparison of the sensitivity of the methods (that is, Ampelisca abdita would normally be tested at 20°C). Sediments collected from Oakland Harbor, CA, were used for the comparisons. Twenty-six sediments were tested in one comparison, while 5 were tested in the other. Analysis of results using the Kruskal-Wallis rank sum test for both experiments demonstrated that R. abronius exhibited greater sensitivity to the sediments than A. abdita at 15°C. Long and Buchman (1989 (17)) also compared the sensitivity of A. abdita and R. abronius to sediments from Oakland Harbor, CA. They also determined that A. abdita showed less sensitivity than R. abronius, but they also showed that A. abdita was less sensitive to sediment grain size factors than R. abronius.
1.11.6.2 DeWitt et al. (1989 (11)) compared the sensitivity of E. estuarius and R. abronius to sediment spiked with fluoranthene and field-collected sediment from industrial waterways in Puget Sound, WA, in 10-d tests, and to aqueous cadmium (CdCl2) in a 4-d water-only test. Eohaustorius estuarius was from two (to spiked sediment) to seven (to one Puget Sound, WA, sediment) times less sensitive than R. abronius in sediment tests, and ten times less sensitive to CdCl2 in the water-only test. These results are supported by the findings of Pastorok and Becker (1990 (18)) who found the acute sensitivity of E. estuarius and R. abronius to be generally comparable to each other, and both were more sensitive than Neanthes arenaceodentata (survival and biomass endpoints), Panope generosa (survival), and Dendraster excentricus (survival).
1.11.6.3 Leptocheirus plumulosus was as sensitive as the freshwater amphipod Hyalella azteca to an artificially created gradient of sediment contamination when the latter was acclimated to oligohaline salinity (that is, 6 o/oo; McGee et al., 1993 (19)). DeWitt et al. (1992b (20)) compared the sensitivity of L. plumulosus with three other amphipod species, two mollusks, and one polychaete to highly contaminated sediment collected from Baltimore Harbor, MD, that was serially diluted with clean sediment. Leptocheirus plumulosus was more sensitive than the amphipods Hyalella azteca and Lepidactylus dytiscus and exhibited equal sensitivity with E. estuarius. Schlekat et al. (1995 (21)) describe the results of an interlaboratory comparison of 10-d tests with A. abdita, L. plumulosus and E. estuarius using dilutions of sediments collected from Black Rock Harbor, CT. There was strong agreement among species and laboratories in the ranking of sediment toxicity and the ability to discriminate between toxic and non-toxic sediments.
1.11.6.4 Hartwell et al. (2000 (22)) compared the response of Leptocheirus plumulosus (10-d survival or growth) to the response of the amphipod Lepidactylus dytiscus (10-d survival or growth), the polychaete Streblospio benedicti (10-d survival or growth), and lettuce germination (Lactuca sativa in 3-d exposure) and observed that L. plumulosus was relatively insensitive compared to the response of either L. dytiscus or S. benedicti in exposures to 4 sediments with elevated metal concentrations.
1.11.6.5 Ammonia is a naturally occurring compound in marine sediment that results from the degradation of organic debris. Interstitial ammonia concentrations in test sediment can range from <1 mg/L to in excess of 400 mg/L (Word et al., 1997 (23)). Some benthic infauna show toxicity to ammonia at concentrations of about 20 mg/L (Kohn et al., 1994 (24)). Based on water-only and spiked-sediment experiments with ammonia, threshold limits for test initiation and termination have been established for the L. plumulosus chronic test. Smaller (younger) individuals are more sensitive to ammonia than larger (older) individuals (DeWitt et al., 1997a (7), 1997b (25)). Results of a 28-d test indicated that neonates can tolerate very high levels of pore-water ammonia (>300 mg/L total ammonia) for short periods of time with no apparent long-term effects (Moore et al., 1997 (26)). It is not surprising L. plumulosus has a high tolerance for ammonia given that these amphipods are often found in organic-rich sediments in which diagenesis can result in elevated pore-water ammonia concentrations. Insensitivity to ammonia by L. plumulosus should not be construed as an indicator of the sensitivity of the L. plumulosus sediment toxicity test to other chemicals of concern.
1.11.7 Limited comparative data is available for concurrent water-only exposures of all four species in single-chemical tests. Studies that do exist generally show that no one species is consistently the most sensitive.
1.11.7.1 The relative sensitivity of the four amphipod species to ammonia was determined in 10-d water-only toxicity tests in order to aid interpretation of results of tests on sediments where this toxicant is present (USEPA 1994a (1)). These tests were static exposures that were generally conducted under conditions (for example, salinity, photoperiod) similar to those used for standard 10-d sediment tests. Departures from standard conditions included the absence of sediment and a test temperature of 20°C for L. plumulosus, rather than 25°C as dictated in this standard. Sensitivity to total ammonia increased with increasing pH for all four species. The rank sensitivity was R. abronius = A. abdita > E. estuarius > L. plumulosus. A similar study by Kohn et al. (1994 (24)) showed a similar but slightly different relative sensitivity to ammonia with A. abdita > R. abronius = L. plumulosus > E. estuarius.
1.11.7.2 Cadmium chloride has been a common reference toxicant for all four species in 4-d exposures. DeWitt et al. (1992a (6)) reports the rank sensitivity as R. abronius > A. abdita > L. plumulosus > E. estuarius at a common temperature and salinity of 15°C and 28 o/oo. A series of 4-d exposures to cadmium that were conducted at species-specific temperatures and salinities showed the following rank sensitivity: A. abdita = L. plumulosus = R. abronius > E. estuarius (USEPA 1994a (1)).
1.11.7.3 Relative species sensitivity frequently varies among contaminants; consequently, a battery of tests including organisms representing different trophic levels may be needed to assess sediment quality (Craig, 1984 (27); Williams et al. 1986 (15); Long et al., 1990 (28); Ingersoll et al., 1990 (29); Burton and Ingersoll, 1994 (31)). For example, Reish (1988 (32)) reported the relative toxicity of six metals (arsenic, cadmium, chromium, copper, mercury, and zinc) to crustaceans, polychaetes, pelecypods, and fishes and concluded that no one species or group of test organisms was the most sensitive to all of the metals.
1.11.8 The sensitivity of an organism is related to route of exposure and biochemical response to contaminants. Sediment-dwelling organisms can receive exposure from three primary sources: interstitial water, sediment particles, and overlying water. Food type, feeding rate, assimilation efficiency, and clearance rate will control the dose of contaminants from sediment. Benthic invertebrates often selectively consume different particle sizes (Harkey et al. 1994 (33)) or particles with higher organic carbon concentrations which may have higher contaminant concentrations. Grazers and other collector-gatherers that feed on aufwuchs and detritus may receive most of their body burden directly from materials attached to sediment or from actual sediment ingestion. In some amphipods (Landrum, 1989 (34)) and clams (Boese et al., 1990 (35)) uptake through the gut can exceed uptake across the gills for certain hydrophobic compounds. Organisms in direct contact with sediment may also accumulate contaminants by direct adsorption to the body wall or by absorption through the integument (Knezovich et al. 1987 (36)).
1.11.9 Despite the potential complexities in estimating the dose that an animal receives from sediment, the toxicity and bioaccumulation of many contaminants in sediment such as Kepone®, fluoranthene, organochlorines, and metals have been correlated with either the concentration of these chemicals in interstitial water or in the case of non-ionic organic chemicals, concentrations in sediment on an organic carbon normalized basis (Di Toro et al. 1990 (37); Di Toro et al. 1991(38)). The relative importance of whole sediment and interstitial water routes of exposure depends on the test organism and the specific contaminant (Knezovich et al. 1987 (36)). Because benthic communities contain a diversity of organisms, many combinations of exposure routes may be important. Therefore, behavior and feeding habits of a test organism can influence its ability to accumulate contaminants from sediment and should be considered when selecting test organisms for sediment testing.
1.11.10 The use of A. abdita, E. estuarius, R. abronius, and L. plumulosus in laboratory toxicity studies has been field validated with natural populations of benthic organisms (Swartz et al. 1994 (39) and Anderson et al. 2001 (40) for E. estuarius, Swartz et al. 1982 (43) and Anderson et al. 2001 (40) for R. abronius, McGee et al. 1999 (41) and McGee and Fisher 1999 (42) for L. plumulosus).
1.11.10.1 Data from USEPA Office of Research and Development's Environmental Monitoring and Assessment program were examined to evaluate the relationship between survival of Ampelisca abdita in sediment toxicity tests and the presence of amphipods, particularly ampeliscids, in field samples. Over 200 sediment samples from two years of sampling in the Virginian Province (Cape Cod, MA, to Cape Henry, VA) were available for comparing synchronous measurements of A. abdita survival in toxicity tests to benthic community enumeration. Although species of this genus were among the more frequently occurring taxa in these samples, ampeliscids were totally absent from stations that exhibited A. abdita test survival <60 % of that in control samples. Additionally, ampeliscids were found in very low densities at stations with amphipod test survival between 60 and 80 % (USEPA 1994a (1)). These data indicate that tests with
2. Referenced Documents (purchase separately): The documents listed below are referenced within the subject standard but are not provided as part of the standard.
D1129 Terminology Relating to Water
D4447 Guide for Disposal of Laboratory Chemicals and Samples
E29 Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications
E105 Practice for Probability Sampling of Materials
E122 Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process
E141 Practice for Acceptance of Evidence Based on the Results of Probability Sampling
E177 Practice for Use of the Terms Precision and Bias in ASTM Test Methods
E178 Practice for Dealing With Outlying Observations
E456 Terminology Relating to Quality and Statistics
E691 Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
E729 Guide for Conducting Acute Toxicity Tests on Test Materials with Fishes, Macroinvertebrates, and Amphibians
E943 Terminology Relating to Biological Effects and Environmental Fate
E1241 Guide for Conducting Early Life-Stage Toxicity Tests with Fishes
E1325 Terminology Relating to Design of Experiments
E1391 Guide for Collection, Storage, Characterization, and Manipulation of Sediments for Toxicological Testing and for Selection of Samplers Used to Collect Benthic Invertebrates
E1402 Guide for Sampling Design
E1525 Guide for Designing Biological Tests with Sediments
E1611 Guide for Conducting Sediment Toxicity Tests with Polychaetous Annelids
E1688 Guide for Determination of the Bioaccumulation of Sediment-Associated Contaminants by Benthic Invertebrates
E1706 Test Method for Measuring the Toxicity of Sediment-Associated Contaminants with Freshwater Invertebrates
E1847 Practice for Statistical Analysis of Toxicity Tests Conducted Under ASTM Guidelines
E1850 Guide for Selection of Resident Species as Test Organisms for Aquatic and Sediment Toxicity Tests
Ampelisca abdita; amphipod; bioavailability; chronic; Eohaustorius estuarius; estuarine; invertebrates; Leptocheirus plumulosus; marine; Rhepoxynius abronius; sediment; toxicity; Acidity, alkalinity, pH--chemicals; Acute toxicity tests; Ampelisca abdita; Amphipods/Amphibia; Aqueous environments; Benthic macroinvertebrates (collecting); Biological data analysis--sediments; Bivalve molluscs; Chemical analysis--water applications; Contamination--environmental; Corophium; Crustacea; EC50 test; Eohaustorius estuarius; Estuarine environments; Field testing--environmental materials/applications; Geochemical characteristics; Grandidierella japonica; Leptocheirus plumulosus; Marine environments; Median lethal dose; Polychaetes; Reference toxicants; Rhepoxynius abronius; Saltwater; Seawater (natural/synthetic); Sediment toxicity testing; Static tests--environmental materials/applications; Ten-day testing; Toxicity/toxicology--water environments
|
A near-sighted eye is usually longer than a normal-sighted eye. Incoming rays of light are bundled so that their focal point is not on, but in front of, the retina. Distant objects are perceived in a blurred manner, whilst objects up close are in focus. The longer the eye, the more pronounced the degree of near-sightedness.
The power of refraction of the eyes can be reduced surgically by means of laser correction or intraocular lenses, which shifts the focal point backwards onto the retina. In the case of glasses or contact lenses, this occurs by means of a concave lens, the strength of which is expressed in minus dioptres.
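To illustrate what "minus dioptres" mean in practice, a thin-lens approximation (an assumption added here, not taken from the text above, and ignoring vertex distance) gives the corrective power as roughly the negative reciprocal of the eye's far point expressed in metres:

```python
# Illustrative thin-lens sketch (vertex distance ignored): the approximate
# power of the concave corrective lens is -1 / far point (in metres).

def corrective_lens_power(far_point_m):
    """Approximate spectacle power in dioptres for a given far point."""
    return -1.0 / far_point_m

for far_point in (2.0, 1.0, 0.5):
    print(f"Far point {far_point} m -> about {corrective_lens_power(far_point):+.1f} D")
# The closer the far point (the longer the eye), the stronger the minus lens.
```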
|
Teenage vegetarians may be at greater risk of eating disorders and suicide than their meat-eating peers, according to US researchers.
A study from the University of Minnesota found that adolescent vegetarians were more weight- and body-conscious, more likely to have been diagnosed with an eating disorder, and more likely to have tried a variety of healthy and unhealthy weight-control practices such as diet pills, laxatives and vomiting. They were also more likely than their peers to have contemplated or attempted suicide.
The findings also indicated that adolescents were more likely than adults to be vegetarians for weight-control rather than for health or moral reasons.
Although the authors acknowledge that a vegetarian diet can be more healthy than one that contains red meat, they also note that, in some teens, being a vegetarian may be taken as a red flag for eating and other disorders related to self-image (J Adolesc Health, 2001; 29: 406-16).
|
Maryland Public Lands Managed by the Wildlife and Heritage Service
The Wildlife & Heritage Service (WHS) oversees the management of 47 Wildlife Management Areas (WMAs), ranging in size from under 20 acres to over 29,000 acres. The WMA system encompasses a total of 111,000 acres, with WMAs located in 18 of Maryland's 23 counties.
Mission of the WMA System
To conserve and enhance diverse wildlife populations and associated habitats while providing for public enjoyment of the State’s wildlife resources through hunting and other wildlife-dependent recreation.
Goals of the WMA System
The Wildlife and Heritage Service manages the WMAs for diverse wildlife populations and their habitats in a number of ways, such as applying prescribed burns, planting food plots, establishing native grasses, managing wetlands and performing timber stand work. Some habitats, such as forested areas, provide for wildlife without any direct management. Providing for wildlife-dependent recreation involves the installation and maintenance of parking lots, roads, trails, boat access facilities, and user areas for the disabled. Property boundaries, signs, and maps are also up-dated, as needed.
WMAs are primarily managed for hunting, trapping and other wildlife-dependent recreational uses. On the more popular areas, a system of lotteries and reservations is in place to avoid over use and conflicts among users. Information about hunting and trapping on public lands in Maryland is updated annually and published in the Guide to Hunting & Trapping in Maryland. Our staff also manages wildlife populations on other DNR properties, including certain State Parks, State Forests and Natural Resource Management Areas, as well as some private lands and local government properties (called Cooperative WMAs).
Expanding Public Use of WMAs
The Wildlife & Heritage Service, working in concert with DNR's Nature Tourism Program, is expanding the public use of WMA properties. For example, the Fishing Bay WMA Water Trail is specially designed for kayak and canoe users with an interest in birding and wildlife photography. This water trail offers an outstanding opportunity for paddlers to observe a variety of wetland wildlife species in their native habitats.
For More Information on the Following:
- Wildlife Management Areas
- Public Hunting Lands
- WMA Maps & Information
- WMA Acres by Region
- Public Lands Managed by Region
- Public Dove Fields
- Guide to Maryland's Natural Areas
- Maryland State Parks
- Guide to Hunting and Trapping
- Hunting Seasons Calendar
- Disabled Hunter Access
- About Wildlife & Heritage Service
|
VMware Workstation 4
With shared folders, you can easily share files among virtual machines and the host computer. To use shared folders, you must have the current version of VMware Tools installed in the guest operating system and you must use the Virtual Machine Control Panel to specify which directories are to be shared.
You can use shared folders with virtual machines running the following guest operating systems:
To set up one or more shared folders for a virtual machine, be sure the virtual machine is open in Workstation and click its tab to make it the active virtual machine. Go to Edit > Virtual Machine Settings > Options and click Shared folders.
You can add one or more directories to the list. Those directories may be on the host computer or they may be network directories accessible from the host computer.
In a Windows virtual machine, shared folders appear in My Network Places (Network Neighborhood in a Windows NT virtual machine) under VMware Shared Folders. For example, if you specify the name Test files for one of your shared folders, you can navigate to it by opening My Network Places > VMware Shared Folders > .host > Shared Folders > Test files.
You can also go directly to the folder using the UNC path
You can map a shared folder to a drive letter just as you would with a network share.
Note: To see shared folders displayed in this way, you must update VMware Tools in the virtual machine to the current version. If your guest operating system has the version of VMware Tools that shipped with VMware Workstation 4.0, shared folders appear as folders on a designated drive letter.
In a Linux virtual machine, shared folders appear under /mnt/hgfs. So the shared folder in this example would appear as /mnt/hgfs/Test files.
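As a quick, hedged illustration of working with a share from inside a guest, the short Python script below lists the contents of the Linux mount point described above. The share name "Test files" simply follows the example in this document; the script is a sketch, not part of VMware's tooling.

```python
import os

# Path of the example share inside a Linux guest; VMware Tools exposes
# shared folders under /mnt/hgfs (see the example above).
SHARE_PATH = "/mnt/hgfs/Test files"

def list_share(path):
    """Print the entries of a shared folder if the hgfs mount is present."""
    if not os.path.isdir(path):
        print(f"Share not found at {path!r}; check VMware Tools and the share settings.")
        return
    for name in sorted(os.listdir(path)):
        print(name)

if __name__ == "__main__":
    list_share(SHARE_PATH)
```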
To add a new shared folder to the list, click Add. On a Windows host, a wizard guides you through the process. On a Linux host, a dialog box appears. Enter the required information, then click OK.
Provide the following information:
To change the settings for a shared folder on the list, click the folder's name to highlight it, then click Properties. The Properties dialog box appears.
Change any settings you wish, then click OK.
Note: You can use shared folders to share any type of file. However, Windows shortcuts and Linux symbolic links do not work correctly if you try to use them via shared folders.
Caution: Do not open a file in a shared folder from more than one application at a time. For example, you should not open the same file using an application on the host operating system and another application in the guest operating system. In some circumstances, doing so could cause data corruption in the file.
|
One of the most striking aspects of German-Jewish modernism, and certainly what distinguishes it from the surrounding German milieu, is its meditation on Jewish tradition, which to the modernists represented a pre-modern authenticity that they felt they had lost. Curiously, and perhaps not unlike my own rabbis, German-speaking Jewish modernists most commonly fixed their gaze not just on any Jewish tradition but that of Eastern European or, more vaguely, 'Oriental' Jews, a term that enabled them to stress their own distance from tradition while still claiming a mystical as well as genetic connection. Like other Orientalists, German modernists regarded the Eastern Jews as exotic and even intimidating, like Kafka's gatekeeper. Yet they also saw in the Eastern Jew a vision of their own personal origins, a ghost of authenticity past. For what are Western Jews, the Austrian writer Joseph Roth asked, if not Eastern Jews who had forgotten?
Roth mourned what seemed to him to be the irreversible process by which Jews westernized, "gave up," and lost the "sad beauty" he eulogized in his essay "The Wandering Jews" as well as his novel, Job. He, like Kafka, seemed to believe that there was no going back. Others, however, proposed various ways of reconnecting. The great Expressionist poet Else Lasker-Schüler liked pretending that she was, in fact, the Oriental "Prinz Jussuff." The novelist Jacob Wassermann similarly flattered himself by boasting of an "Oriental force" in his blood that powered his creativity.
Few, however, made as much out of this Jewish Orientalism as Martin Buber, who defined Western Jews as Orientals whom emancipation had denatured. Buber reassured his readers that the soul of the Jew is intact: "the Jew has remained an Oriental." One can even detect the Oriental spirit, he wrote, "in the most assimilated Jew, if one knows how to gain access to his soul; and even those who have eradicated the last vestiges of Judaism from the content of their thinking still, and ineradicably, carry Judaism within them in the pattern of their thought." For Buber, the Orient still thrived in Eastern Europe among the "decadent yet still wondrous Hasid of our days." The renewal of Judaism in the West lay precisely in turning to the East for inspiration and instruction.
Much of the German-Jewish reflection on 'Oriental' Jews is irrelevant today. Buber's Poland is long gone, Yiddish is dead, and the "Hasid of our days" is wondrous to very few. (Indeed, Buber's Orient never existed in the first place, as it was to a large extent the expression of his own yearnings.) What we can learn from, however, is the German-speaking Jewish modernists' ability to identify with Jewish tradition while remaining outside of it. They turned to Eastern European Judaism without renouncing their own modernity, without denying the distance between the traditions and themselves. In fact, that distance was the only thing of which they were really certain.
Obviously their relationship with tradition was attenuated and artificial. The modernists always regarded Jewish tradition through their own subjectivist lens, reifying it, orientalizing it, imposing numerous discursive constraints that deformed their perception of it. They were to a great extent its authors, and we should not only recognize this fact, but draw on it for inspiration as we renew and reinvent Jewish tradition to suit our own era. However, their Orientalism also brought them into an intimate relationship with the real liturgy, language, law, and lore of the Jewish people. Not content to just talk about the tradition, they tried to get inside it and learn it. Thus Kafka studied Hebrew and fetishized the Yiddish theater. Gershom Scholem became the great historian of Jewish mysticism. Whatever their path to tradition, they avoided the nihilism according to which all knowledge, because it is so contingent, is suspect and therefore to be refused. Instead, they groped for ways to live with the Law and with themselves.
|
4 New Species Of Zombie Ant Fungi Discovered In Brazilian Rainforest
Four new Brazilian species in the genus Ophiocordyceps have been described in the online journal PLoS ONE. The fungi, named by Dr. Harry Evans and Dr. David Hughes, belong to a group of "zombifying" fungi that infect ants and then manipulate their behavior, eventually killing the ants after securing a prime location for spore dispersal.
These results appear in a paper by Evans et al. entitled Hidden Diversity Behind the Zombie-Ant Fungus Ophiocordyceps unilateralis: Four New Species Described from Carpenter Ants in Minas Gerais, Brazil. This paper is the first to validly publish new fungal names in an online-only journal while still complying with the rules and recommendations of the International Code of Botanical Nomenclature (ICBN).
Beyond this important milestone, the paper is noteworthy for the attention it draws to undiscovered, complex, biological interactions in threatened habitats. The four new species all come from the Atlantic Rainforest of Brazil which is the most heavily degraded biodiversity hotspot on the planet. Ninety-two percent of its original coverage is gone.
The effect of biodiversity loss on community structure is well known. What researchers don't know is how parasites, such as these zombie-inducing fungi, cope with fragmentation. Here the authors show that each of the four species is highly specialized on one ant species and has a suite of adaptations and spore types to ensure infection. The life cycle of these fungi, which infect, manipulate and kill ants before growing spore-producing stalks from their heads, is remarkably complicated. The present work establishes the identification tools to move forward and ask how forest fragmentation affects such disease dynamics.
|
Development of a new way to make a powerful tool for altering gene sequences should greatly increase the ability of researchers to knock out or otherwise alter the expression of any gene they are studying. The new method allows investigators to quickly create a large number of TALENs (transcription activator-like effector nucleases), enzymes that target specific DNA sequences and have several advantages over zinc-finger nucleases (ZFNs), which have become a critical tool for investigating gene function and potential gene therapy applications.
"I believe that TALENs and the ability to make them in high throughput, which this new technology allows, could literally change the way much of biology is practiced by enabling rapid and simple targeted knockout of any gene of interest by any researcher," says J. Keith Joung, MD, PhD, associate chief for Research in the Massachusetts General Hospital (MGH) Department of Pathology and co-senior author of the report that will appear in Nature Biotechnology and has received advance online release.
TALENs take advantage of TAL effectors, proteins naturally secreted by plant-pathogenic bacteria that are able to recognize specific base pairs of DNA. A string of the appropriate TAL effectors can be designed to recognize and bind to any desired DNA sequence. A TALEN is created by attaching a nuclease to such a string; the nuclease is an enzyme that snips through both DNA strands at the desired location, allowing the introduction of new genetic material. TALENs are able to target longer gene sequences than is possible with ZFNs and are significantly easier to construct. But until now there has been no inexpensive, publicly available method of rapidly generating large numbers of TALENs.
The method developed by Joung and his colleagues, called the FLASH (fast ligation-based automatable solid-phase high-throughput) system, assembles DNA fragments encoding a TALEN on a magnetic bead held in place by an external magnet, allowing a liquid-handling robot to construct DNA encoding as many as 96 TALENs in a single day at a cost of around $75 per TALEN. Joung's team also developed a manual version of FLASH that allows labs without access to robotic equipment to construct up to 24 TALEN sequences a day. In their test of the system in human cells, the investigators found that FLASH-assembled TALENs successfully induced breaks in 84 of 96 targeted genes known to be involved in cancer or in epigenetic regulation.
"Finding that 85 to 90 percent of FLASH-assembled TALENs have very high genome-editing activity in human cells means that we can essentially target any DNA sequence of interest, a capability that greatly exceeds what has been possible with other nucleases," says Jeffry D. Sander, PhD, co-senior author of the FLASH report and a fellow in Joung's laboratory. "The ability to make a TALEN for any DNA sequence with a high probability of success changes the way we think about gene-altering technology because now the question isn't whether you can target your gene of interest but rather which genes do you want to target and alter."
The research team also found that the longer a TALEN was, the less likely it was to have toxic effects on a cell, which they suspect may indicate that shorter TALENs have a greater probability of binding to and altering unintended gene sites. Joung notes that this supports the importance of designing longer TALENs for future research and potential therapeutic applications.
In 2008, Joung and colleagues at other institutions established the Zinc Finger Consortium (http://zincfingers.org), which has made a method of engineering ZFNs broadly available to academic laboratories. His team is now making the information and materials required to create TALENs with FLASH available within the academic community, and information about accessing those tools is available at http://TALengineering.org. Gene editing nucleases, including both ZFNs and TALENs, were recently named "Method of the Year" for 2011 by the journal Nature Methods.
Joung says, "While I believe that TALENs' ease of design and better targeting range will probably make them a preferred option over ZFNs made by publicly available methods, ZFNs' smaller size and the less repetitive nature of their amino acid sequences may give them advantages for certain applications. For the time being, it will be important to continue developing both technologies." Joung is an associate professor of Pathology and Sander an instructor in Pathology at Harvard Medical School.
|
Jan. 1, 2011 A large collaborative study has added to the growing list of genetic variants that determine how tall a person will be. The research, published on December 30 in the American Journal of Human Genetics, identifies uncommon and previously unknown variants associated with height and might provide insight into the genetic architecture of other complex traits.
Although environmental variables can impact attained adult height, it is clear that height is primarily determined by specific alleles that an individual inherits. Height is thought to be influenced by variants in a large number of genes, and each variant is thought to have only a small impact on height. However, the genetics of height are still not completely understood. "All of the variants needed to explain height have not yet been identified, and it is likely that the additional genetic variants are uncommon in the population or of very small effect, requiring extremely large samples to be confidently identified," explains Dr. Hakon Hakonarson from The Children's Hospital of Philadelphia.
To search for genetic variants associated with adult height, researchers performed a complex genetic analysis of more than 100,000 individuals. "We set out to replicate previous genetic associations with height and to find relevant genomic locations not previously thought to underpin this complex trait," explains Dr. Brendan Keating, also from The Children's Hospital of Philadelphia. The authors report that they identified 64 height-associated variants, two of which would not have been observed without such a large sample size and the inclusion of direct genotyping of uncommon single-nucleotide polymorphisms (SNPs). A SNP is a variation in just one nucleotide of a genetic sequence; think of it as a spelling change affecting just one letter in an uncommonly long word.
These results suggest that genotyping arrays with SNPs that are relatively rare and occur in less than 5% of the population have the ability to capture new signals and disease variants that the common SNP arrays missed (i.e., 30 new signals in this study), as long as sample sizes are large enough. These low-frequency variants also confer greater effect sizes and, when associated with a disease, could be a lot closer to causative than more common variants. "The increased power to identify variants of small effect afforded by large sample size and dense genetic coverage including low-frequency SNPs within loci of interest has resulted in the identification of association between previously unreported genetic variants and height," concludes Dr. Keating.
- Matthew B. Lanktree et al. Meta-analysis of Dense Genecentric Association Studies Reveals Common and Uncommon Variants Associated with Height. The American Journal of Human Genetics, 30 December 2010 DOI: 10.1016/j.ajhg.2010.11.007
|
The Mississippi Delta is famous for more than floods; it's the birthplace of uniquely American music. As the flood waters rose, many blues artists were inspired to write songs about the disaster and describe the experience of being in a flood. Mai Cramer, who has hosted her "Blues After Hours" radio show for over two decades, and Prof. David Evans, an ethnomusicologist at the University of Memphis, explain some of the history behind blues music, especially the stripped-down, raw style of music called Delta blues.
Blues is very visceral. For the most part it's narrative. It tells a story out of people's lives. Compared to other types of music, it's very authentic -- there aren't a lot of frills. What we usually think of as Delta blues is one person with a guitar, typically a slide guitar, and that real raw kind of singing. Delta blues grew up into modern Chicago blues. If you listen to Muddy Waters, for example, he's basically singing Delta blues that are citified and electrified. Delta blues is the foundation for that.
When we listen to blues music of the 1920s, it's like looking through a window at the experience people are having at the time. Typically, blues artists write out of their own experience. A lot of blues is about men and women, and relationships. Blues was sung at rent parties, where you'd play music and pass the hat to pay your rent. Or they'd be in shacks behind the fields, "juke joints" where people would drink, dance, and hear music. It's the music you'd play when you were relaxing or partying. Initially it was black music played by black people for black people. Only a few early performers, like Bessie Smith, who sold a lot of records, sold to white people. The blues were mostly only in the black community until large numbers of whites discovered the music in the 1950s and especially in the 1960s.
We always think of blues as being intense, as having an emotional intensity. Floods are natural disasters that overwhelm you the way emotions can overwhelm you, and so the flood is an important image for the blues, a metaphor for an experience that's too much, that's just impossible to handle.
Two artists who are still alive today have connections to the very earliest Delta blues. They're in their 80s. These two artists are not the only ones who play Delta blues, but they're among the last who were there when it started. Robert Junior Lockwood was up for a Grammy this year with traditional Delta blues, and he's a link to the Delta because his mother had a relationship with Robert Johnson, who was known as the King of the Delta Blues. Robert Junior learned from Robert Johnson. David "Honey Boy" Edwards is also still alive, and he's another important musician. He lived the life of an itinerant blues singer and wrote about it in his autobiography, The World Don't Owe Me Nothing. A lot of Delta blues appears on the Yazoo label.
Most blues songs are about personal experiences and personal feelings and they tend to concentrate on themes of man-woman relationships and the related emotions -- the whole range of emotions, happy and sad. Travel is another theme. Movement. Work and other social conditions. Social problems. Luck. Magic. More or less, the ups and downs of daily life. There are some that deal with themes of broader interest: current events, political events, commentary, and so on. These are still often tied to personal reactions and personal feelings. The flood was both a public event, a news story, and something that hundreds of thousands of African Americans experienced directly.
The Delta itself has throughout blues history been a stronghold of blues music. It was very intensely developed there, stylistically and creatively. It has been central to African American cultural life in the Delta, particularly in the first half of the 20th century. It's characterized stylistically by a very intense type of performance, a minimalist style that squeezes the maximum feeling and emotion out of each note. The perfomers typically sing and play very hard, and often explore very deep themes philosophically. In other words, it tends not to be superficial music, but a very deep expression of a personal, and a collective, feeling.
The music started around the beginning of the 20th century and it seems to have reached a creative peak in the 1920s that's captured in phonograph records of that era, starting in the year 1920. A number of Mississippi Delta artists recorded in that decade. This was really a golden age, particularly of the country blues. The first flowering of country blues on records happened in 1926.
It seems there were 25 or 30 records by blues artists on or related to the 1927 flood. The songs present a variety of commentary on the flood. The ones by the few artists that were from the area, who might have actually experienced the flood (like Charlie Patton or Alice Pearson) tend to be the most realistic in their descriptions, the most accurate in their details. Some of the others are inaccurate, based on hearsay, some sentimentalize the flood, some even trivialize it, or find some way to connect it to the man-woman theme, or sexual double entendre, getting back to more standard blues themes. A song by Atlanta artist Barbecue Bob, "Mississippi Heavy Water Blues," describes losing his woman, who's washed away in the flood. They range over quite a bit of emotional territory.
Blind Lemon Jefferson, from Texas, recorded a flood song, and frequently performed in the Delta, in small theaters in towns like Greenwood and Greenville. Sometimes he would just come into town and set up in a park, and draw a crowd. Lonnie Johnson, who was based in St. Louis, recorded a flood song also. There were artists in other styles, too, the vaudeville blues singers, who also recorded flood songs. Bessie Smith had probably the most famous song on the flood, but there's a peculiarity to it. It was recorded before the flood. "Back Water Blues" was recorded in February 1927, before the great disaster of April. Perhaps the buildup of rain made her anticipate the flood; it was released just as the flood came, and as a result, it became a big hit. It's a description of one woman's experience of a flood. There had been a lot of rain for weeks prior to the flood so she might have in some way anticipated the flood, or it might have been a coincidence.
There had been generic flood songs in the 1920s. There was a piece called "Muddy Water," a pop song of 1926 that Bessie Smith also recorded. There were certainly enough floods in all parts of the lowland South that flood themes came up often. On the religious side, in gospel music, there were some recordings that saw greater significance in this flood. One in particular, "The 1927 Flood," by Elders McIntorsh and Edwards (recorded in December 1928), saw the hand of God in the flood, as a punishment for wickedness. A black preacher in Memphis, the Reverend Sutton E. Griggs, saw the flood as a metaphor of black-white cooperation, with people working together to shore up the levees, something that led to better race relations, although in fact the relief effort was marked by some major race-related problems.
The whites in charge of the relief effort thought that the blacks would just pitch in after the flood to restore the old order, and give volunteer labor so they could go back to being sharecroppers. The blacks tended to view the flood as wiping the slate clean, wiping out the old order. The flood wiped out the crops in the areas it devastated, so any black sharecroppers in that area knew they weren't going to get a crop, with the water and mud staying up until June. It was impossible to get a good crop. So they had to go somewhere and find some work. A lot of people headed North.
Charlie Patton's great two-part song, "High Water Everywhere," was recorded in December 1929, two and a half years after the flood. Patton was from the Delta. He had probably composed it earlier; his recording career didn't start until 1929. But he and his record company thought the song still had relevance.
In the early 1920s virtually all American record companies were recording blues material, and they had developed the concept of race records along with other specialized genres of music: hillbilly music, various non-English language ethnic series for immigrant communities, etc. It was a marketing strategy the record companies had, to direct their promotional efforts in these communities. Unfortunately it had the effect of isolating these American musical traditions and keeping them out of the mainstream of American music, so that they didn't come to the attention of most Americans, but remained on the periphery. There were a few African American artists who had mainstream appeal: Duke Ellington, Louis Armstrong, and people like that. But the vast majority of African American recording artists, especially blues and gospel singers, sold almost entirely within the black community. If there hadn't been race records, much of this music might not have been recorded at all. It was recorded, but it didn't reach a bigger audience.
Some artists, like Louis Armstrong, were heard on the radio, but blues was hardly broadcast at all. Some stations in the South started barn dance programs, like the Grand Ole Opry, which had a black harmonica player, but it was pretty unusual. Record players would have been one of the few luxuries, one of the few pieces of furniture, poor people might have had, more so even than radios. They were very important cultural items, even among people who were relatively poor. And in the cities you'd have people who had come from the country, listening to the blues.
Blues is a music that's highly personalized, that deals with fairly intimate personal relationships, so you have to read through the songs to see broader social issues. But the personal relationships described in the blues are affected by social conditions of poverty, racism, the nature of work, rural life, and so on, and these shape how people relate to each other. You have to do a little bit of projection from those lyrics; blues are not usually songs of ideology or protest. But you can detect an overriding aura of dissatisfaction in the blues. They deal with the changes and fluctuations of life, and the possibilities of change, too, on a very personal level.
|
What is HIPAA?
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 is a set of statutes designed to improve the efficiency and effectiveness of the US health care system:
- Title I: Title I of HIPAA provides rules to "improve the portability and continuity of health insurance coverage" for workers when they change employers.
- Title II: Title II of HIPAA provides rules for controlling health care fraud and abuse, and includes an "Administrative Simplification" section that sets standards for enabling the electronic exchange of health information.
Provisions in the "Administrative Simplification" section of Title II include rules protecting the privacy and security of health data. These rules are enforced by the US Department of Health and Human Services Office for Civil Rights (OCR):
- The Privacy Rule protects the privacy of individually identifiable health information. For more, see Privacy Rule on the OCR web site.
- The Security Rule sets national standards for the security of electronic protected health information (ePHI). For more, see Security Rule on the OCR web site.
In 2009, HIPAA enforcement rules were strengthened by the Health Information Technology for Economic and Clinical Health (HITECH) Act. Subtitle D of the HITECH Act improved privacy and security provisions found in the original HIPAA privacy and security rules.
At Indiana University, compliance with the HIPAA privacy and security rules is coordinated through the Office for Clinical Affairs, with the interim HIPAA Privacy Officer and interim HIPAA Security Officer. For more about HIPAA compliance at IU, see the HIPAA Compliance page.
For more about HIPAA and the HITECH Act, see these US Health and Human Services pages:
- HIPAA Administrative Simplification Statute and Rules
- Summary of the HIPAA Privacy Rule
- Summary of the HIPAA Security Rule
- HITECH Act Enforcement Interim Final Rule
|
12-13 year old boys plan their own workshop where they meet with 12 senior citizens from the local area.
Charlotte Berry, Assistant Headteacher, Billericay School, explains what they did at their school.
“The Historypin workshop came about as part of our work with a group of twelve boys (aged 12-13) and their mentors (aged 16-17). We’ve been working with them to help them explore their individual skills and abilities and to develop positive behavior, particularly in group work.
I needed an activity that they could plan themselves where they could demonstrate motivation, responsibility and a degree of altruism. I wanted something that would involve them with the local community as part of the school’s Community Cohesion work.
I showed Historypin.com to the boys and we explored the site, discussed photos we’d brought in, scanned them and pinned them to the site. The boys then had to plan an entire workshop themselves to which we’d invite 12 senior citizens from the local area.
On the day of the workshop, we provided tea and coffee for the senior citizens and the boys ran some warm-up and ‘getting to know you’ activities. All of our guests had brought in photos and willingly discussed their memories and the stories behind them. The boys were totally engaged and absolutely fascinated. I was incredibly proud of how easily they chatted with the senior citizens because of their genuine interest in the photos they had brought in.
I’ve never seen them write as much as they wrote on the Historypin prompt sheet
They then scanned and uploaded photos, pinning them onto Historypin.com and adding information. The site is designed so that this is very straightforward to do. They also explored other photos on the site and travelled all over Historypin using the Google Street View feature.
Initial evaluations of the Historypin sessions show that everyone thoroughly enjoyed the experience. The project completely fulfilled some of the key aims of We Are What We Do's inter-generational work - the sharing of knowledge and skills, breaking down barriers and overturning misconceptions between generations.
We will definitely run these sessions again. Historypin has so much potential to be used across the curriculum, especially in History and English, and is perfect for designing community projects. We also intend to work with our feeder primary schools to help them run Historypin workshops with their children, parents and grandparents.”
|
International Labour Organisation
The International Labour Organisation (ILO) is the United Nations' specialised agency for labour issues.
ILO seeks to promote human and labour rights by establishing international standards of basic labour rights: freedom of association, the right to organize, collective bargaining, abolition of forced labour, equality of opportunity and treatment, and other standards regulating conditions across the entire spectrum of work-related issues.
The International Programme on the Elimination of Child Labour (IPEC) works towards the progressive elimination of child labour by strengthening national capacities to address child labour problems, and by creating a worldwide movement to combat it. Founded by ILO in 1992, IPEC's work is considered especially important in contributing to ILO's Decent Work Agenda.
The impetus of World Scouting's involvement with ILO-IPEC is its interest in youth rights and protection. This interest was formally recognized with the Memorandum of Understanding on Child Labour in 2004.
In terms of advocacy, much of World Scouting's work with ILO-IPEC has been through the Youth Employment Network (YEN). Regarding child rights and protection, since 1992 World Scouting and ILO-IPEC have carried out the Extension Programme in Kenya. The Extension Programme targets marginalized young people and helps them reintegrate into society by teaching them life and vocational skills.
12th June - World Day Against Child Labour 2009
Theme: Give girls a chance, end child labour
The World Day Against Child Labour will be celebrated on 12 June 2009. The World Day this year marks the tenth anniversary of the adoption of the landmark ILO Convention No. 182, which addresses the need for action to tackle the worst forms of child labour.
The International Labour Organization (ILO) and World Organization of the Scout Movement (WOSM) share a common commitment to pursuing social justice and peace, empowering young people through human rights-based educational programmes and promoting the social dimension of globalisation. This long-standing partnership was recently strengthened through a renewed Memorandum of Understanding (MoU) extending our cooperation in the fight against child labour, earlier this year.
Ms. Michele Jankanish, Director of the ILO’s International Programme on the Elimination of Child Labour (IPEC ) and Mr. Luc Panissod, Acting Secretary-General of the World Organization of the Scout Movement (WOSM), signed a new Memorandum of Understanding (MoU) today, to extend their cooperation in the fight against child labour for a further 3 years.
The cooperation between the two organizations will be based on a shared vision in pursuing social justice and peace, the empowerment of young people, and promoting the social dimensions of globalization.
The ILO has estimated that some 165 million children between the ages of 5 and 14 are involved in child labour. Many of these children work long hours, often in dangerous conditions. This year the World Day against Child Labour will be marked around the world with activities to raise awareness that Education is the right response to child labour. World Scouting, as the largest global youth movement which prides itself on its strong history of non-formal education, is encouraged to get involved and draw attention to the key issues related to Child Labour.
In view of the common commitment to defending and promoting the rights of children and young people, the partnership between the International Labour Organization (ILO) and World Scouting has continued to grow. An increasing number of joint initiatives between Scouts Organizations and ILO Field and Branch Offices are being developed at the local level, in particular in the framework of the SCREAM – Supporting Children’s Rights through Education, the Arts and the Media – programme ( www.ilo.org/scream).
The International Labour Organisation (ILO) is working with World Scouting to address the issue of child labour. This is mainly done through the International Programme on the Elimination of Child Labour (IPEC).
|
Anyone who lives in metro Phoenix has seen the brown cloud that drapes the horizon on a still day.
The pollution is more than what can be seen. Valley air carries bits of dirt, toxic metals, ammonia from farm fertilizer, ozone that irritates nasal tissues. Freeways spew a mix of exhaust and dust, putting anyone who lives nearby at an increased risk.
A day with bad air takes its toll first on those who are already ill. A heavily polluted day sends children to emergency rooms. People with respiratory problems are forced indoors. But a lifetime with bad air takes a toll on each one of us. People who live in bad air live shorter lives.
Government plans typically focus only on narrow parts of the problem — so overall solutions continue to fall short. The Valley sprawls with little regard for how growth has affected air quality — so each day, traffic rolls on as 300,000 people or more live their lives in the danger zone.
The biggest problem may be the most obvious: No one who lives in the Valley can escape the air we breathe.
|
When we were growing up, we learned that there were four basic tastes: salt, sweet, sour and bitter. We were told that our taste buds had special taste receptors in particular zones on our tongues that detected these four tastes and that all other components of flavor came from our olfactory sense–the sense of smell.
Well, it turns out that what we learned in middle school about tastes, our tongues and noses, was not quite complete or correct.
There is a fifth basic taste.
It is called “umami,” and a Japanese scientist named Kikunae Ikeda isolated one compound that contributes this taste back in 1908. Working with a seaweed broth, he isolated the amino acid glutamate as one of the sources of the taste, which is described as “meaty, rich, savory and satisfying.” Glutamate itself was already a known substance, having been identified in 1866 by a German chemist, Dr. Karl Ritthausen, who discovered it while studying gluten in wheat.
“Umami” itself is a compound Japanese word, from the root words “umai,” meaning “delicious,” and “mi,” meaning “essence.” While it is often used to describe the flavor-enhancing ability of the salt form of glutamate, monosodium glutamate, that is not the only proper context for its usage.
In fact, researchers have found that umami accurately describes the flavor of many amino acids and proteins. In 2000, researchers at the University of Miami discovered the taste receptor for umami, which essentially proves that umami is a basic taste for which humans have evolved a hunger. This receptor, named “taste-mGluR4,” responds not only to glutamate but, to greater and lesser degrees, to every other amino acid and nucleotide.
Considering the myriad of uses to which amino acids are put in the human body, it is no wonder we are programmed to enjoy their flavors. Amino acids are necessary in building muscles, enzymes and other chemicals necessary to bodily function.
So, what does all of this mean to cooks?
Does this mean we need to study chemistry and put MSG in everything?
It means we just need to look at what foods have large supplies of naturally occurring glutamates and amino acids and combine them with the principles we already know of good cooking, to help us make our dishes even more delicious.
It isn’t like any of these ingredients are new or anything.
People all over the world have been cooking with glutamate and amino acid rich foods for thousands of years.
Take a look at the foods surrounding the new cookbook, The Fifth Taste, above, and think about how many of them you have in your kitchen right now. If you are like me, you probably have plenty of umami sitting in your cupboards, refrigerators, shelves and countertops, just waiting to add goodness to your next meal. A quick glance at my illustration should identify soy sauce, nori, dried and fresh shiitake mushrooms, red wine, truffle oil, parmesan cheese, sun dried tomatoes and tomato paste.
Every serious cook in the world is bound to have one or two of those ingredients in their kitchen at any given time. The concatenation of jars, bottles, tubes, packages and loose items above are just what I pulled off my shelves this morning when I went on a mission to find good examples of umami-rich foods.
Over the next few days, look for posts specifically listing and identifying umami rich foods from both the East and the West, recipes featuring my favorite flavor and a review of the new cookbook, The Fifth Taste: Cooking With Umami by David and Anna Kasabian.
The upshot of all of this is, if you don’t know umami now, you will by the time I am finished with you.
|
Heart Paper Chain
These heart paper chains are easy to make but they do require scissors so adult supervision is recommended.
- Take a piece of paper and cut it lengthwise into 4 strips.
- Valley-fold the paper in half and unfold.
- Valley-fold again into quarters, and unfold.
- Mountain-fold each of the four sections in half. You should have 8 sections.
- Compress the strip of paper along the already made creases. The pleats will look like an accordion.
- Draw a heart shape onto the top layer of paper. Draw it such that the rounded top-part of the heart (shown with an arrow) doesn't connect with the bottom part of the heart. Cut along the line making sure that you don't cut the rounded corner of the heart (they need to remain connected to one another).
- Unfold and voila! A heart paper chain. Repeat with the other strips of paper and connect them together with tape.
Note: there are two possible results. The first is as shown above: four connected hearts. The other possibility is 3 connected hearts and two half-hearts at the beginning and end of the chain. Experiment a bit to determine how to position the paper to get the chain that you prefer.
|
As Internet Protocol (IP)-based networks are increasingly deployed, packet-based applications such as voice over IP (VoIP), IP video, video telephony (VT), integrated voice and email, and instant messaging (IM), have emerged. The ability to integrate these services over the same network has become more important as customers appreciate and demand the bundling of multiple services. Many of these services benefit from session based connections between communicating network devices. For example, rather than having each data transmission between two devices be considered independent from each other, a series of data transmissions may be logically grouped together as a session. As session based traffic increases in the network, the problems of how to provide redundancy and load balancing among a cluster of session handling servers have to be addressed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1a is a diagram of an exemplary system for implementing a congestion control method;
FIG. 1b is a system diagram of an exemplary upstream network device;
FIG. 2a is a representation of server and cluster capacities;
FIG. 2b is a representation of server and cluster capacities in the event of a server failure;
FIG. 3 is a flowchart depicting exemplary steps and decisions related to a process for managing congestion control and server redundancy;
FIG. 4 is a flowchart depicting exemplary steps and decisions related to a process for updating server utilization factors; and
FIG. 5 is a flowchart depicting exemplary steps and decisions related to a process for scheduling packet data traffic to servers in the cluster.
Exemplary illustrations of a congestion control method for session based network traffic are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual illustration, numerous implementation-specific decisions must be made to achieve the specific goals of the developer, such as compliance with system-related and business-related constraints that will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those having the benefit of this disclosure.
Referring now to the drawings wherein like numerals indicate like or corresponding parts throughout the several views, exemplary illustrations are provided.
FIG. 1a illustrates a system 100 for use with a congestion control method for session based network traffic. Details of the elements depicted in the figures are included following a brief functional overview of the system 100 and method. Packetized data on a packet data network 105 may be transmitted to and from any of a number of interconnected network devices. For example, end user devices such as a Personal Computer (PC) 110, a mobile phone 115, a television 120 via a set-top-box 125, etc., may be the recipient or initiator of data traffic over the packet network 105. Servers 130a-c may receive and process data traffic from the end user devices. Moreover, the servers 130a-c may provide responsive data traffic to the end user devices. When multiple servers 130a-c collectively handle the same type of requests and data traffic, they may be arranged into a cluster 135.
When arranged as the cluster 135, additional measures may need to be implemented to ensure that the servers 130 are utilized in a suitable manner. For example, it may be desirable to have similarly situated servers 130 handle corresponding amounts of data. Accordingly, a load-balancing scheme may be implemented to assign certain amounts of data processing to particular servers 130 of the cluster. Additionally, certain amounts of the data processing capacities of the servers 130 may need to be reserved for redundancy. A redundancy scheme may allow the cluster 135 to maintain normal operations even with a failure of one of the servers 130.
By focusing on the capacity of each server to process data as well as on the amount of data actually processed by each server 130, a single cluster manager may be able to implement both redundancy planning and load-balancing schemes. A router 140, or any other network device acting as the cluster manager, may have the responsibility of tracking the amount of data actually processed by each server 130 and scheduling new data traffic to a particular server. Accordingly, the router 140 may provide a congestion control module 145 to implement the redundancy planning and load-balancing schemes.
The packet network 105 may be a packet switched communication network such as an Internet Protocol (IP) network. The packet network 105 generally interconnects various computing devices and the like through a common communication protocol, e.g. the Internet Protocol. Interconnections in and with the packet network 105 may be made by various media including wires, radio frequency transmissions, and optical cables. Other devices connecting to and included with the packet network 105, e.g., switches, routers, etc., are omitted for simplicity of illustration in FIG. 1. The packet network 105 may interface with an IP Multimedia Subsystem (IMS), which integrates voice, video, and data communications on the same network infrastructure.
The PC 110, mobile phone 115, and television 120 by way of a set-top-box 125 merely represent three of many possible network devices capable of connecting to the cluster 135 for session based data processing. Each of these devices has computer software including an implementation of the network protocol needed to communicate over the packet network 105. Additionally, the devices may also implement higher level protocols to interface with the session handling facilities of the servers 130. For example, the mobile phone 115 may include instructions for conducting a voice communication based session over the packet network 105. Likewise, the set-top-box 125 may include instructions for conducting a video based session over the packet network 105.
The servers 130 of the cluster 135 generally provide session handling capabilities. For example, they may be able to initiate a session based on a request from one of the network devices 110, 115, 125, etc., process data during the session, and terminate the session as necessary. Session Initiation Protocol (SIP) is a signaling protocol for initiating, managing and terminating application sessions in IP networks, and provides a mechanism to allow voice, video, and data to be integrated over the same network. Accordingly, the servers 130 may implement SIP for session handling.
The router 140 may interconnect one or more computing devices, e.g., servers 130, to the packet network 105. Moreover, the router 140 may establish and operate a local area network (LAN) for the servers 130, and may route certain communications thereof. For example, the computing devices 110 may be connected to the router 140 using a wireless connection, a network cable such as a “Cat5” cable, or the like.
The router 140 may further act as the cluster manager of the cluster 135. The cluster manager may be an upstream network device to the cluster 135 such as a device positioned between the cluster and the packet network 105. However, in another exemplary approach, one of the servers, e.g., 130a, may be designated as the cluster manager. In such an approach, the designated server 130a may be considered upstream in that it acts as a gateway of the cluster 135 for incoming network traffic.
The congestion control module 145 may include computer instructions for implementing redundancy and load balancing schemes. The congestion control module will be provided by the device acting as the cluster manager, e.g., the router 140 as depicted in the exemplary approach of FIG. 1a. As will be discussed in detail below, the congestion control module 145 may continuously schedule incoming traffic to particular servers 130, calculate utilization rates of the servers, monitor the cluster 135 for server failures, and update redundancy factors based on the state of the cluster. Positioning the congestion control module 145 upstream from the servers 130 may allow for the estimation of current utilization rates of the servers. Estimating utilization rates in the congestion control module 145 may eliminate time-consuming bidirectional communication between the servers 130 and the router 140 to determine the actual current utilization rates.
FIG. 1b illustrates the elements of an exemplary upstream network device, such as the router 140. As illustrated, the router 140 may include elements for guaranteeing the quality of service for different classifications of data. For example, the incoming packet data 155 may first encounter a classifier 160. The classifier 160 may inspect the header of the packet data 155 to identify the proper classification. A marker 165 may write, or rewrite, a portion of the packet header to more clearly identify the determined classification. For example, the marker 165 may write to the Differentiated Services Code Point (DSCP) field of the header.
A meter 170 may track the amount of incoming data 155 processed by the router 140. Moreover, the meter 170 may track the amount of data 155 for each classification. A policer/shaper 175 may use the tracked amounts from the meter 170 to enforce particular traffic routing polices, e.g., quality of service guarantees, service level agreements, etc. To enforce a policy, the policer/shaper 175 may drop packets if the tracked amount of data 155 exceeds the service level agreement. Additionally, the policer/shaper 175 may buffer or delay traffic that fails to conform to policy being enforced. A scheduler 180 has the responsibility of deciding which packets to forward to the cluster 135 for data processing. The scheduler typically bases its decision on the priority levels of the data as well as on the service level agreements. However, the scheduler 180 may be further influenced by the congestion control module 145. In one exemplary approach, the congestion control module 145 may be integrated into the scheduler 180. However, in another exemplary approach, the congestion control module 145 may be distinct from the scheduler 180 while still providing input thereto.
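The classifier/marker/meter/policer/scheduler chain of FIG. 1b can be pictured with a small sketch. The traffic classes, DSCP values, and per-class rate limits below are illustrative assumptions, not values taken from this description.

```python
from collections import defaultdict

# Illustrative DSCP markings; real deployments define these per policy.
DSCP_BY_CLASS = {"voice": 46, "video": 34, "best_effort": 0}

class UpstreamDevice:
    """Toy model of the classifier, marker, meter, and policer stages."""

    def __init__(self, rate_limits):
        self.rate_limits = rate_limits      # bytes allowed per class per interval
        self.meter = defaultdict(int)       # bytes seen per class this interval

    def classify(self, packet):
        # Classifier 160: inspect the "header" to pick a traffic class.
        return packet.get("class", "best_effort")

    def handle(self, packet):
        traffic_class = self.classify(packet)
        packet["dscp"] = DSCP_BY_CLASS.get(traffic_class, 0)   # marker 165 rewrites DSCP
        self.meter[traffic_class] += packet["size"]            # meter 170 tracks volume
        if self.meter[traffic_class] > self.rate_limits[traffic_class]:
            return None                     # policer 175 drops non-conforming traffic
        return packet                       # scheduler 180 would then forward the packet

device = UpstreamDevice(rate_limits={"voice": 10_000, "video": 50_000, "best_effort": 20_000})
print(device.handle({"class": "voice", "size": 1200}))
```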
FIG. 2a illustrates a representation 200 of server and cluster capacities during normal operation. Each server 130 may have an actual capacity 205, which represents the normal or default ability of a server to process data. As depicted, each server 130 has the same actual capacity 205a-d. However, in another exemplary approach, servers 130 with different actual capacities may be part of the same cluster 135.
The actual capacity 205 may be artificially limited or reduced to an available capacity 210. The reduction of the actual capacity 205 provides redundant capacity 215 that is reserved for periods in which there is a server failure. The reduction of the actual capacity 205 of each server 130 may differ. Accordingly, a respective redundancy factor, ri, i=1, 2, . . . , n, where 0≤ri≤1, may be established for each server 130. The redundancy factor ri states the amount of redundant capacity as a fraction or percentage of the actual capacity 205. Accordingly, the available capacity 210 for each server 130 will be expressed as (Actual Capacity*(1−ri)), while the redundant capacity 215 will be expressed as (Actual Capacity*ri).
If ri=0, it is assumed that the server 130 is expected to have an available capacity 210 equal to the actual capacity 205. Accordingly, any server with a redundancy factor of zero (ri=0) will not provide any redundancy to the cluster 135. On the other hand, if ri=1, then the server will reserve its entire actual capacity 205 as redundant capacity 215. Moreover, any server with a redundancy factor of one (ri=1) will not handle any data processing requests unless there has been a failure of another server in the cluster 135. In general, higher redundancy factor values reserve larger amounts of actual capacity 205 as redundant capacity 215.
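As a numeric illustration of these formulas (the capacity figures and redundancy factors below are made-up values):

```python
def capacities(actual_capacity, r):
    """Split a server's actual capacity using its redundancy factor r (0 <= r <= 1)."""
    assert 0.0 <= r <= 1.0
    available = actual_capacity * (1.0 - r)   # capacity offered for normal traffic
    redundant = actual_capacity * r           # capacity reserved for failover
    return available, redundant

# Example: four identical servers, each reserving 25% of capacity for redundancy.
servers = [{"actual": 1000.0, "r": 0.25} for _ in range(4)]
for s in servers:
    s["available"], s["redundant"] = capacities(s["actual"], s["r"])

cluster_available = sum(s["available"] for s in servers)   # available cluster capacity
cluster_redundant = sum(s["redundant"] for s in servers)   # cluster redundant capacity
print(cluster_available, cluster_redundant)                # 3000.0 1000.0
```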
The available capacity 210 may be used to handle data processing sessions from remote devices. A current usage 220a-d of each server reflects the amount of the available capacity 210 that is currently being used to handle data processing sessions. The current usage 220 will typically fluctuate as the server 130 receives new sessions and completes others. Moreover, the data processing demands on the server 130 may vary throughout a session. The router 140, or equivalent cluster manager, may determine which server 130 should handle newly received packet data. As will be discussed in detail with respect to FIG. 4, a utilization factor may be defined as the ratio of the estimated current usage 220 to the available capacity 210. In one exemplary approach, the router 140 may implement packet data scheduling decisions by directing incoming traffic to the server meeting at least one threshold criterion. In one illustrative approach, the criterion includes a consideration of the utilization factor; in another approach, the criterion is the lowest utilization factor.
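A minimal sketch of such a scheduling decision under the lowest-utilization criterion is shown below. The per-session cost and the usage bookkeeping are simplifying assumptions made for illustration; the upstream device only estimates usage rather than querying each server.

```python
def utilization(server):
    """Utilization factor: estimated current usage over available capacity."""
    return server["usage"] / server["available"] if server["available"] else float("inf")

def schedule(servers, session_cost):
    """Assign a new session to the server with the lowest utilization factor."""
    target = min(servers, key=utilization)
    target["usage"] += session_cost          # upstream estimate of current usage
    return target

servers = [
    {"name": "s1", "available": 750.0, "usage": 300.0},
    {"name": "s2", "available": 750.0, "usage": 150.0},
    {"name": "s3", "available": 750.0, "usage": 600.0},
]
chosen = schedule(servers, session_cost=50.0)
print(chosen["name"], round(utilization(chosen), 3))   # s2 takes the new session
```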
Because the servers 130 act together as a cluster 135, the sum of the actual capacity 205 of each server defines an actual cluster capacity 240. Similarly, the sum of the available capacities 210 of each server defines an available cluster capacity 250. The sum of the redundant capacities 215 of each server defines a cluster redundant capacity 255. The actual cluster capacity 240 will remain constant so long as there are no server failures in the cluster 135. Likewise, the available cluster capacity 250 and the cluster redundant capacity 255 will remain constant so long as there are no changes to any of the redundancy factors (ri). However, a cluster usage 260 will fluctuate as the sum of the current usage 220 of each server varies.
FIG. 2b illustrates a representation 202 of server and cluster capacities during a failure of one of the servers. As indicated by the X, the actual capacity 205a is currently unavailable due to a server failure. Accordingly, the actual cluster capacity 240 is reduced by the actual capacity 205a of the failed server 130. Because of the failure, the redundant capacity 215 of the remaining servers may be reallocated as available capacity 210. Moreover, the redundancy factor of the remaining servers may be set to zero (ri=0) in order to cause the available capacity 210 to fully encompass the actual capacity 205. The current usage 220a of the failed server 130 represents sessions with incomplete or unfinished data processing. Accordingly, the sessions encompassing the current usage 220a may be redistributed to the remaining servers of the cluster 135, thereby increasing the current usage 220b-d levels thereof. As expected, the cluster usage 260 will typically remain unchanged.
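The failover handling just described might look like the sketch below. The per-session cost and the way sessions are re-homed are simplifying assumptions; the description above does not specify how individual sessions are redistributed.

```python
def handle_failure(servers, failed_name, session_cost):
    """On a server failure, release redundancy on survivors and move the failed sessions."""
    failed = next(s for s in servers if s["name"] == failed_name)
    survivors = [s for s in servers if s["name"] != failed_name]

    for s in survivors:
        s["r"] = 0.0                     # redundancy factor set to zero
        s["available"] = s["actual"]     # available capacity now equals actual capacity

    # Redistribute the failed server's sessions to the least-utilized survivors.
    pending_sessions = int(failed["usage"] / session_cost)
    failed["usage"] = 0.0
    for _ in range(pending_sessions):
        target = min(survivors, key=lambda s: s["usage"] / s["available"])
        target["usage"] += session_cost
    return survivors

servers = [
    {"name": f"s{i}", "actual": 1000.0, "available": 750.0, "usage": 300.0, "r": 0.25}
    for i in range(1, 5)
]
handle_failure(servers, "s1", session_cost=10.0)
print([(s["name"], s["usage"]) for s in servers])   # s1 emptied, survivors picked up the load
```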
The router 140 and servers 130 may be any general purpose computing device, such as a PC, or a specialized network device. The router 140 and servers 130 may have software, such as an operating system with low-level driver software, and the like, for receiving signals over network links. The operating system may also include a network protocol stack, for establishing and accepting network connections from remote devices.
The router 140 and servers 130 may employ any of a number of user-level and embedded operating systems known to those skilled in the art, including, but by no means limited to, known versions and/or varieties of the Microsoft Windows® operating system, the Unix operating system (e.g., the Solaris® operating system distributed by Sun Microsystems of Menlo Park, Calif.), the AIX UNIX operating system distributed by International Business Machines of Armonk, N.Y., and the Linux operating system. Computing devices may include any one of a number of computing devices known to those skilled in the art, including, without limitation, a computer workstation, a desktop, notebook, laptop, or handheld computer, or some other computing device known to those skilled in the art.
The router 140 and servers 130 may include instructions executable by one or more processing elements such as those listed above. Computer-executable instructions may be compiled or interpreted from computer programs created using a variety of programming languages and/or technologies known to those skilled in the art, including, without limitation, and either alone or in combination, Java™, C, C++, Visual Basic, Java Script, Perl, etc. In general, a processor (e.g., a microprocessor) receives instructions, e.g., from a memory, a computer-readable medium, etc., and executes these instructions, thereby performing one or more processes, including one or more of the processes described herein. Such instructions and other data may be stored and transmitted using a variety of known computer-readable media.
A computer-readable medium includes any medium that participates in providing data (e.g., instructions), which may be read by a computer. Such a medium may take many forms, including, but not limited to, non-volatile media, and volatile media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory. Volatile media include dynamic random access memory (DRAM), which typically constitutes a main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
FIGS. 3-5 and the description thereof below present exemplary approaches to the functional details of the congestion control module 145. As illustrated, processes 300, 400, and 500 all operate concurrently. Concurrent operation may allow for the constant monitoring and detection of any failures in the servers 130 of the cluster 135. However, in another exemplary approach, the steps of the processes may be rearranged to operate sequentially. For example, after the initial set-up steps of process 300, process 500 may operate repeatedly for a period of time. Subsequently, process 500 may be paused while the cluster 135 is checked for server failures and process 400 updates the utilization factors.
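A minimal sketch of the concurrent arrangement, assuming the three loops are exposed as plain callables; thread-based concurrency and the 200 ms pacing are illustrative choices, not requirements of the disclosure.

```python
import threading
import time

def run_concurrently(schedule_step, update_step, failure_check_step,
                     interval_s: float = 0.2) -> None:
    """Run the scheduling (500), utilization-update (400), and failure-monitoring
    loops side by side; a sequential variant could simply call the three
    callables in turn inside a single loop."""
    def loop(step):
        while True:
            step()
            time.sleep(interval_s)

    for step in (schedule_step, update_step, failure_check_step):
        threading.Thread(target=loop, args=(step,), daemon=True).start()
```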
FIG. 3 illustrates a flowchart of an exemplary process 300 for managing congestion control and server redundancy. The router 140 may include a computer-readable medium having stored instructions for carrying out certain operations described herein, including some or all of the operations described with respect to process 300. For example, some or all of such instructions may be included in the congestion control module 145. As described below, some steps of process 300 may include user input and interactions. However, it is to be understood that fully automated or other types of programmatic techniques may implement steps that include user input.
Process 300 begins in step 305 by receiving initial parameters. The initial parameters include at least the actual capacities 205 and redundancy factors for each of the servers 130. The parameters may be received from user input, e.g., via a command line interface, Graphical User Interface, etc. In another exemplary approach, the parameters may be provided in a configuration file, or the like. Accordingly, the parameters may be received in step 305 by opening the file for reading and extracting the relevant data.
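As one hypothetical realization of the configuration-file option, the sketch below reads per-server actual capacities and redundancy factors from a JSON file; the file layout and key names are assumptions, since the disclosure only requires that the parameters arrive via user input or a configuration file.

```python
import json

def load_initial_parameters(path: str) -> list[dict]:
    """Read per-server actual capacities and redundancy factors from a file.

    Assumed layout:
    {"servers": [{"name": "s1", "actual_capacity": 100.0, "redundancy_factor": 0.25}, ...]}
    """
    with open(path) as f:
        config = json.load(f)
    return config["servers"]
```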
Next, in step 310, an expected traffic load may be established. The expected traffic load may be used to alter or set the redundancy factors of the servers. Historical traffic loads for similar dates and times may be used to establish the expected traffic load. Moreover, the expected traffic load provides a baseline value to be used when initially setting the redundancy factors.
Next, in step 315, the actual capacity 205 of each server may be limited to respective available capacities 210 based on the redundancy factors. In general, the sum of the available capacities 210, also referred to as the available cluster capacity 250, will correspond to the expected traffic load in order to ensure that all expected traffic can be processed. Moreover, the limiting provides redundant capacity 215, which is reserved for times in which there is a failure of one of the servers 130 of the cluster 135. This initialization step sets baseline values for the available capacities 210. However, in the event of a server failure, the available capacities 210 may be altered by changing the respective redundancy factors.
Following step 315, steps 320, 325, and 330 may operate concurrently as discussed above. Step 320 includes the steps and decisions of process 500 discussed below. Similarly, step 325 includes the steps and decisions of process 400, also discussed below. Because the utilization factor (uij) is based on the estimated current usage for the given time interval (j), the utilization factor is recalculated at each interval. The process continues to schedule data throughout the given time interval, and at the conclusion of each time interval the process updates the current usage based on the recent traffic load of each server.
In step 330, it is determined whether any of the servers 130 have failed. In one exemplary approach, the router 140 may attempt to contact the servers 130, e.g., by initiating a connection, transmitting a so-called ping (Internet Control Message Protocol echo), etc. In another exemplary approach, the servers 130 may be configured to send out a communication, sometimes referred to as a life beat, to the router 140. In either approach, the router 140 continuously monitors the servers 130 for failures; for example, the lack of a response or the lack of a life beat may be indicative of a server failure.
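A simplified sketch of the life-beat variant: the router records the arrival time of each server's life beat and treats a stale or missing beat as indicative of failure. The timeout value and function names are assumptions, not values from the disclosure.

```python
import time

HEARTBEAT_TIMEOUT_S = 1.0  # assumed threshold; the disclosure does not fix a value

last_heartbeat: dict[str, float] = {}

def record_life_beat(server_name: str) -> None:
    """Called whenever a life-beat message arrives from a server."""
    last_heartbeat[server_name] = time.monotonic()

def failed_servers(server_names: list[str]) -> list[str]:
    """A missing or stale life beat is treated as indicative of a failure."""
    now = time.monotonic()
    return [name for name in server_names
            if now - last_heartbeat.get(name, float("-inf")) > HEARTBEAT_TIMEOUT_S]
```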
In step 335, the redundancy factors are set to a failure state if a server failure is detected in step 330. As discussed above, the redundancy factors may be dynamically reduced, e.g., to zero (ri=0), in order to allocate all of the redundant capacity 215 as available capacity 210.
In step 340, the redundancy factors are set to the initial parameters if a server failure is not detected. In most cases the redundancy factors will already be set to the initial parameters. However, if the functionality of a server 130 has just been restored following a failure, the redundancy factors may need to be changed from the failure state. As discussed above, the redundancy factors may be established such that the available cluster capacity 250 corresponds to the baseline or expected traffic load.
Next, in step 345, the parameters may be adjusted. For example, the redundancy factors may vary based on time and date to correspond with expected traffic loads and the service level that needs to be provided by the cluster 135. For instance, if service must be guaranteed to a high degree for a certain time, the redundancy factors may be set so as to reserve sufficient redundant capacity 215 to accommodate any server failures. Accordingly, the parameters may be scheduled for particular times. However, in another exemplary approach, process 300 may be adaptive to current traffic conditions, with the parameters automatically adjusting in the face of changing traffic conditions.
If the parameters need to be adjusted, process 300 may return to step 305 to receive the new parameters. However, if no adjustment is required, process 300 may return to concurrent steps 320, 325, and 330.
FIG. 4 illustrates a flowchart of an exemplary process 400 for updating server utilization factors. The router 140 may include a computer-readable medium having stored instructions for carrying out certain operations described herein, including some or all of the operations described with respect to process 400. For example, some or all of such instructions may be included in the congestion control module 145. As discussed above, process 400 may be a sub-process of process 300, e.g., in step 325.
Process 400 begins in step 405 by determining the available capacity 210 of each server. In one exemplary approach, the actual capacity 205 and the respective redundancy factor (ri) may be retrieved. For example, these values may be provided as initial parameters via user input, a configuration file, etc. The actual capacity 205 may then be multiplied by (1−ri) to determine the limited available capacity 210.
Next, in step 410, the current usage 220 of each server 130 may be estimated. Unlike the expected traffic load discussed above for setting baseline values for the available capacity, the estimated traffic load attempts to determine the current or actual traffic load. Because it may be too costly to constantly monitor the actual traffic load, e.g., the amount of packet data traffic sent to a server 130, the router may break the monitoring into discrete time intervals. In one exemplary approach, the router 140 may monitor the amount of packet data sent to a server every 200 milliseconds. Accordingly, the router 140 only knows historical amounts of packet data traffic sent to a server 130. Moreover, the router may not know the actual amount of traffic sent to a server during the instant time interval. However, because traffic volumes could potentially change dramatically even during a brief interval, an understanding of the current usage 220 of a server 130 is important for properly balancing the data processing load over the cluster 135. Moreover, the scheduling decision (discussed with respect to FIG. 5) is ideally based on an estimated current usage (Eλij) and not on historical usage.
In one exemplary approach, the current usage 220 may be based on a weighted moving average of the actual amounts of usage in prior intervals (λi,j−1), (λi,j−2), etc. Because the data processing may be based on sessions which exist and draw on server capacity for a period of time, it may be useful to base the current usage on more than just the actual usage of the most recent interval (λi,j−1). A weighting factor (0≤w≤1) may be selected to allocate the weight given to the most recent interval (λi,j−1) and the second most recent interval (λi,j−2). For example, if it is unusual for a typical session to extend beyond a single interval, the weighting factor (w) may be set to a high value to give more weight to the most recent period. Similarly, a lower value of (w) may be selected if it is likely that sessions draw on server capacity 210 for more than one time interval. Accordingly, the estimated current usage (Eλij) of each (i) server at each (j) time interval may be represented as Eλij = w·λi,j−1 + (1−w)·λi,j−2. In other exemplary approaches, formulas that take even more historical values of the actual usage may be appropriate. As will be discussed with respect to FIG. 5, the actual usages for prior time intervals (λi,j−1), (λi,j−2), etc., may be stored during the packet data scheduling.
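A minimal sketch of this two-interval weighted estimate; the function name and the default weighting factor are arbitrary examples, not values taken from the disclosure.

```python
def estimate_current_usage(usage_prev: float, usage_prev2: float, w: float = 0.7) -> float:
    """Estimated current usage Eλ_ij from the two most recent intervals:
    Eλ_ij = w * λ_{i,j-1} + (1 - w) * λ_{i,j-2}.

    A w close to 1 weights the most recent interval heavily (short sessions);
    a lower w suits sessions that persist across several intervals.  The
    default of 0.7 is an assumption, not a value from the disclosure.
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("weighting factor w must satisfy 0 <= w <= 1")
    return w * usage_prev + (1.0 - w) * usage_prev2
```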
Next, in step 415, utilization factors for each server 130 may be calculated. The utilization factor may represent the estimated current usage (Eλij) as a ratio to the available capacity 210, where the available capacity is the actual capacity limited by the redundancy factor (ri). In one exemplary approach, the utilization factor (uij) may be expressed as uij = Eλij/Ai, where Ai denotes the available capacity 210 of server (i).
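A direct transcription of this ratio into code, assuming the available capacity has already been computed as described in step 405; treating a zero available capacity as full utilization is an added guard, not part of the disclosure.

```python
def utilization_factor(estimated_usage: float, available_capacity: float) -> float:
    """u_ij: the estimated current usage Eλ_ij as a ratio to the available
    capacity 210 (the actual capacity as limited by the redundancy factor)."""
    if available_capacity <= 0.0:
        return float("inf")  # no capacity exposed; treat the server as fully utilized
    return estimated_usage / available_capacity
```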
FIG. 5 illustrates a flowchart of an exemplary process 500 for scheduling packet data traffic to servers in the cluster. The router 140 may include a computer-readable medium having stored instructions for carrying out certain operations described herein, including some or all of the operations described with respect to process 500. For example, some or all of such instructions may be included in the congestion control module 145. As discussed above, process 500 may be a sub-process of process 300, e.g., in step 320.
Process 500 begins in step 505 when incoming packet data traffic 155 is received by the router 140. As discussed above with respect to FIG. 2b, the router may classify, meter, mark, and shape the traffic as necessary. Subsequent to these preliminary steps, the scheduler 180, in coordination with the congestion control module 145, may proceed with the following steps to further process and direct the traffic to a particular server 130 of the cluster 135.
Next, in step 510, it is determined whether the received packet data belongs to an existing session. For example, the data may include a session identifier thereby associating the data with a particular session.
In step 515, the packet data will be scheduled to the server 130 that is already handling the session to which the data belongs. The session data from step 510 may be used to determine which server is handling a particular session. While not depicted, process 500 may also store associations between the servers 130 and sessions being processed thereby.
In step 520, following the determination in step 510 that the packet data does not belong to an existing session, the data will be scheduled to one of the servers 130 of the cluster 135. As discussed above with respect to process 400, utilization factors may be maintained and updated for each server. The utilization factors express the estimated usage of the server 130 with respect to the available capacity 210. The server with the highest utilization factor has the least amount of unused available capacity 210. To effectively balance the traffic load between the servers 130 of the cluster 135, the new traffic may be scheduled to the server 130 having the lowest utilization factor.
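The sketch below condenses steps 510-520: packets carrying a known session identifier stay with the server already handling that session, while packets starting a new session go to the server with the lowest utilization factor. The dictionary-based session table is an illustrative assumption.

```python
def schedule_packet(session_id: str,
                    session_table: dict[str, str],
                    utilization: dict[str, float]) -> str:
    """Steps 510-520 in miniature.

    Packets belonging to an existing session stay with the server already
    handling that session; packets starting a new session go to the server
    with the lowest utilization factor u_ij.
    """
    if session_id in session_table:                   # steps 510/515: existing session
        return session_table[session_id]
    target = min(utilization, key=utilization.get)    # step 520: lowest u_ij wins
    session_table[session_id] = target
    return target

# Example: with utilization {"s1": 0.62, "s2": 0.48, "s3": 0.80}, a new session
# is scheduled to "s2"; later packets with the same session id stay on "s2".
```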
Following both steps 515 and 520, the record of the amount of packet data traffic sent to the server 130 may be updated in step 525. The router 140, or cluster manager, may keep historical records of the amounts of traffic sent to each server, e.g., a record for each of the last five time intervals. As discussed above, a time interval, e.g., 200 ms, may be defined for breaking down the calculation of the utilization factors. The historical records may be in the form of a circular list, or equivalent data structure, with an index value, e.g., 0-4, used to identify the record associated with the current time interval. Accordingly, the amount of the traffic scheduled to the server in either step 515 or step 520 will be added to the record associated with the server and the current time interval. While, in another exemplary approach, the servers 130 could report back their actual current usage rates, monitoring estimated usage rates at the router 140 may eliminate the time-consuming bi-directional communication required for such reporting.
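A small sketch of the circular list of per-interval traffic records; the five-record depth and 200 ms interval follow the examples in the text, while the class and method names are assumptions.

```python
class TrafficHistory:
    """Per-server record of traffic amounts for the last few time intervals,
    kept as a circular list with an index identifying the current interval."""

    def __init__(self, intervals: int = 5):
        self.records = [0.0] * intervals
        self.index = 0  # record associated with the current time interval

    def add_traffic(self, amount: float) -> None:
        # Step 525: accumulate the traffic scheduled during the current interval.
        self.records[self.index] += amount

    def roll_interval(self) -> None:
        # At the end of each interval (e.g., every 200 ms), reuse the oldest slot.
        self.index = (self.index + 1) % len(self.records)
        self.records[self.index] = 0.0

    def previous(self, k: int = 1) -> float:
        # Actual usage k intervals ago (λ_{i,j-k}), as needed by the usage estimate.
        return self.records[(self.index - k) % len(self.records)]
```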
Accordingly, an exemplary congestion control method for session-based network traffic has been described. Session-handling servers 130 arranged in a cluster 135 may receive packet data traffic 155 from an upstream network device such as a router 140. The actual traffic handling capacity 205 of each server 130 may be limited by a redundancy factor (ri) to respective available capacities 210 in order to provide redundant capacity 215. The redundancy factors (ri) may be altered dynamically in order to provide more or less redundancy in the cluster 135. For example, the redundancy factors (ri) may be adjusted to reserve additional redundant capacity during times in which service availability is critically important. At other times, the redundancy factors may be adjusted to expose more available processing capacity 210, which may be useful in the event of a failure of one of the servers 130. To balance the traffic load across the cluster 135, the amount of data traffic sent to each server may be tracked and recorded for a number of historical time intervals. Some or all of the historical records may be used to estimate a current usage (Eλij). Newly received traffic that is not associated with an existing session may be scheduled to the server having the lowest utilization factor (uij), e.g., the ratio of the estimated usage to the available capacity.
With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain systems, and should in no way be construed so as to limit the claimed invention.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many systems and applications other than the examples provided would be apparent upon reading the above description. The scope of the invention should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the arts discussed herein, and that the disclosed systems and methods will be incorporated into such future systems. In sum, it should be understood that the disclosure is capable of modification and variation and is limited only by the following claims.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those skilled in the art unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites explicitly to the contrary.
|
white blood cell Component of the blood that functions in the immune system. Also known as a leukocyte.
wood The inner layer of the stems of woody plants; composed of xylem.
X-chromosome One of the sex chromosomes.
xerophytic leaves The leaves of plants that grow under arid conditions with low levels of soil and water. Usually characterized by water-conserving features such as thick cuticle and sunken stomatal pits.
x-ray diffraction Technique utilized to study atomic structure of crystalline substances by noting the patterns produced by x-rays shot through the crystal.
xylem Tissue in the vascular system of plants that moves water and dissolved nutrients from the roots to the leaves; composed of various cell types including tracheids and vessel elements. Plant tissue type that conducts water and nutrients from the roots to the leaves.
zebroid A hybrid animal that results from breeding zebras and horses.
Z lines Dense areas in myofibrils that mark the beginning of the sarcomeres. The actin filaments of the sarcomeres are anchored in the Z lines.
zone of differentiation Area in plant roots where recently produced cells develop into different cell types.
zone of elongation Area in plant roots where recently produced cells grow and elongate prior to differentiation.
zone of intolerance The area outside the geographic range where a population is absent; grades into the zone of physiological stress.
zone of physiological stress The area in a population's geographic range where members of population are rare due to physical and biological limiting factors.
zygomycetes One of the divisions of the fungi, characterized by the production of zygospores; includes the bread molds.
zygospore In fungi, a structure that forms from the diploid zygote created by the fusion of haploid hyphae of different mating types. After a period of dormancy, the zygospore forms sporangia, where meiosis occurs and spores form.
zygote A fertilized egg. A diploid cell resulting from fertilization of an egg by a sperm cell.
|
ALABAMA BEACH MOUSE
Photo Credit: Nick R. Holler
SCIENTIFIC NAME: Peromyscus polionotus ammobates (Bowen)
DESCRIPTION: Smallest (adults, total length = 122-153 mm [4.8-6.0 in.]; weights = 10.0-17.0 g [0.35-0.60 oz.]; pregnant females reaching 22-25 g [0.78-0.88 oz.]) species of Peromyscus in North America (Hall 1981b). Tail short, usually 55-65 percent of body length. Males generally smaller than females. Brown to pale gray above, with pure white undersides and feet. A dark brown mid-dorsal stripe is common. Tail bicolored, with variable (10-80 percent of tail length) dark brown stripe on dorsal surface and pure white underneath (Howell 1939; Hall 1981b).
DISTRIBUTION: Historic distribution was along the coastal dunes of Baldwin County, Alabama, from the western tip of Fort Morgan Peninsula eastward to the Perdido Bay inlet, including Ono Island. The type locality was a sand bar immediately west of Perdido Key inlet (Alabama Point, Bowen 1968). Type locality has been heavily developed and no longer exhibits natural characteristics. Because of extensive development throughout the Alabama Gulf Coast, the present-day distribution of the Alabama beach mouse is greatly reduced (Holliman 1983). Active populations are known to exist in areas of public ownership at Fort Morgan and within the Perdue Unit of the Bon Secour National Wildlife Refuge (Swilling and Wooten 2002). Discontinuous occupation of dune and scrub habitat between these two sites also occurs. Have been re-established at Gulf State Park. Trapping and visual surveys suggest extirpation from all areas east of Gulf State Park.
HABITAT: Typically includes primary, secondary, and scrub dunes of the coastal strand community (Bowen 1968, Rave and Holler 1992). Densities often greatest in sparsely vegetated areas within the primary dune zone. Recent research indicated that scrub habitat is more important than previously thought. Recognition of the value of this habitat as refugia from hurricanes and other storm events has prompted formal redesignation of the Critical Habitat limit for this subspecies. Only rarely found associated with human dwellings.
LIFE HISTORY AND ECOLOGY: Monogamous; pair bonding strong and parental cooperation in rearing has been noted (Blair 1951, Margulis 1997, Swilling and Wooten 1992). Litter sizes range from two to eight (mode = four) (Caldwell and Gentry 1965, Smith 1966). Gestation period averages 28 days with a postpartum estrus common. Reproduction occurs throughout the year, but typically slows during summer and peaks during late fall/early winter in correlation with availability of forage seeds. A semifossorial, nocturnal rodent that digs distinctive burrows in sandy soils. Burrows typically consist of an entrance tube up to one meter (three feet) deep leading to one or more chambers (Hayne 1936, Smith 1966). An escape tunnel is normally present from the nest chamber to just below the surface. Nests of dried grasses and other fibers are found in the central chamber. Burrow openings are frequently located within vegetation. A fan-shaped plume of expelled sand is characteristic of active burrows. Entrance tunnels are blocked several centimeters (three to five inches) below surface by sand plugs, presumably for predator defense. Granivorous-omnivorous, with a majority of diet being seasonal seeds (Smith 1966, Gentry and Smith 1968, Moyers 1996). Wind-deposited seeds such as sea oats and bluestem important components of diet; acorns eaten when available. Also consumes a variety of animal foods, including both insects and vertebrates. Insects reported in diet include beetles, leaf hoppers, true bugs, and ants. Nocturnal, with daytime activity rare; nightly movements directly affected by weather conditions. Radio tracking indicates activity throughout the night, with peaks occurring shortly after dusk and again after midnight (Lynn 2000). Capable of dispersing over five kilometers (3.1 miles) (Smith 1966) and commonly traverse 0.5 kilometers (0.31 miles) of habitat per night, but most observations indicate that individuals settle within a few hundred meters (200-1,000 feet) of their natal sites. Juveniles disperse an average of 160 meters (500 feet), effectively one home range, away from the natal site (Swilling and Wooten 2002). Dispersal distances for juvenile males and females not reported to differ. Home range size varies according to season and reproductive state. Average values reported for Alabama beach mice were 4,086-5,512 square meters (43,981-59,330 square feet) from trapping data and 6,783-7,000 square meters (73,011-75,347 square feet) from telemetry data, but ranges as small as 389 square meters (4,187 square feet) and as large as 29,330 square meters (315,715 square feet) have been observed (Lynn 2000). Home range sizes do not differ significantly between males and females. In general, populations show little evidence of intraspecies competition with increasing densities yielding increased compaction of home ranges. This combination of tolerance and dispersal results in the formation of spatial "neighborhoods" within populations. For Alabama beach mice, approximate size of these spatial units is 550 meters (1,800 feet; linear) with occupancy by 40-70 mice. Average life span in natural populations less than nine months although common to encounter mice more than one year of age. Captures of mice known to be two years old have been reported and captive mice have reached four or more years of age.
Preyed upon by the red and gray fox, great horned owl, great blue heron, weasel, striped skunk, raccoon, various snakes including coachwhips and pygmy and eastern diamondback rattlesnakes, and domestic dogs and cats.
BASIS FOR STATUS CLASSIFICATION: Habitat loss and fragmentation associated with residential and commercial real estate development single most important factor contributing to imperiled status. Existing or proposed beachfront development will substantially alter all Alabama beach mouse habitat not in public ownership. Reduction of available habitat and isolation of the remaining populations substantially increases vulnerability to the effects of tropical storms, weather cycles, predation, and other environmental factors. Substantial disagreement exists as to the current status and appropriate management protocol for the Alabama beach mouse. Various researchers have argued it is in immediate jeopardy of range-wide extinction if habitat loss is allowed to continue. This position is supported by evidence of widespread extirpation from all developed areas in the eastern portion of the historic distribution. Similar levels of development are occurring throughout the Fort Morgan Peninsula and real estate development on all private areas is proceeding rapidly. In addition, Population Viability Analyses indicate that extinction of even the largest remaining populations is likely within 50 years if current trends continue (Oli et al. 2001). Listed as endangered by the U.S. Fish and Wildlife Service in 1986.
Author: Michael C. Wooten
|
Document for February 2nd:
Treaty of Guadalupe Hidalgo
This treaty, signed on February 2, 1848, ended the war between the United States and Mexico. By its terms, Mexico ceded 55 percent of its territory, including parts of present-day Arizona, California, New Mexico, Texas, Colorado, Nevada, and Utah, to the United States.
|
"I never saw more simplicity," recorded the famous evangelist George Whitefield in 1740, after preaching at Skippack in southeastern Pennsylvania. Had he ridden fifteen miles back through the woods to Germantown, and viewed the Mennonite meetinghouse there, he would doubtless have mused further. For even the successor of that log building of 1708, built of stone in 1770, greets a twenty-first century eye as a statement of simplicity.
More than that, as this guidebook explains, Germantown's Mennonite meetinghouse is an expression of continuity. It was children and grandchildren of European Mennonites who gathered here along the town's only major street, in the first of their people's congregations to endure in America. The location was the site of a homestead bought by a Dutch-speaking paper-maker four years after the town's founding in 1683.
Willem Rittinghuysen (William Rittenhouse) had immigrated from Amsterdam with his wife Gertruid and three children. The first paper-maker in the English colonies, he was also elected in 1698 as its first Mennonite minister. By then he and his partner-son Nicholas had already moved out of town to the site of their mill along the nearby Wissahickon Creek. It was in his Lower Rhenish accent, in the house of Mennonite neighbor Isaac van Bebber, that the slowly accruing congregation heard the scriptural message for a decade.
Willem himself never saw the log meetinghouse built in the months following his unexpected death in 1708. Nor will we see it. What we can visit is the sturdy stone replacement that served the congregation for two centuries after 1770. "Simple, substantial and beautiful," to use the phrase of a Lancaster County Mennonite preacher, it has become an icon of American Mennonite memory, even while its surroundings have changed beyond the imagination of its first users.
Location can speak. Standing in the doorway and looking outward, one faces, across the urban horizon, in the direction of the paper mill site on a branch of the Wissahickon Creek. A tiny village of buildings there, one bearing the date of 1707, evokes physically (unlike the written-only records of earlier and temporary Mennonite dwellings in Manhattan or along the Delaware Bay) the earliest permanent Mennonite dwelling outside of Europe. The quiet stream running through the dell still called Rittenhousetown reminds us of the mill it once powered. And back in the heart of the main town, the north-south angle of Germantown Avenue recalls the Indian trail along which a lot fell to the community's first Mennonite couple, Jan and Mercken Lensen, at the town's very beginning in the late fall of 1683.
The earliest stones clustered next to the meetinghouse do not reach far enough back to include Jan and Mercken's names. But names such as Cassel, Funk, Keyser, and Rittenhouse declare that this Mennonite community was a convergence. The hearts that beat here came from at least four different regions of Europe, both urban and rural. Some names are Palatine, suggesting spiritual origins in Zurich, while others witness to the heritage of Menno Simons of Friesland. This realization flavors our recognition of the varieties of the present residents of this historic community. It also reminds us that the church of Christ will always be about bringing together a family that transcends nations, cultures and languages.
History is here in layers. Native Americans visited in the first huts; schoolmaster Christopher Dock held summer schools here; peach trees once picturesquely lined the street running by Dirk Keyser's gracious residence; George Washington's troops met in confused shock with British redcoats in front of the Meetinghouse; a congregation struggled for two centuries after most of their people had moved into the country. It was after 1960 when a flock surprisingly regathered from many points to worship in this modern urban setting with a long memory.
The miraculously enduring meetinghouse along Germantown Avenue has become a place to meditate on how spiritual concerns leave their testimony. Those miscellaneous Mennonites living in a straggling village three centuries ago had come from a variety of motives, including persecution and economic opportunity. But in this rustic village of linen-weavers they brought to common focus what they believed. Instead of merging their identities in a generic spirituality, they insisted on being responsible to their parental church fellowships in Europe. Even before electing a preacher, "they tried to instruct each other," meeting in homes. They waited twenty-five years to celebrate a communion which they believed was an accountable one. Then they chose leaders from all four of the main communities of their European backgrounds. And built a meetinghouse. In that remarkable, scrupulous convergence, they became the departing point, and an example, for a fellowship that would spread across North America.
May this careful guidebook, in both word and image, instruct our imagination, helping us to be intelligent and appreciative pilgrims as we come back to visit Germantown.
John L. Ruth
© 2006 by Cascadia Publishing House
|
The world's best-selling astronomy magazine offers you the most exciting, visually stunning, and timely coverage of the heavens above. Each monthly issue includes expert science reporting, vivid color photography, complete sky coverage, spot-on observing tips, informative telescope reviews, and much more! All this in an easy-to-understand, user-friendly style that's perfect for astronomers at any level.
Looking inside the Sun
The Sun rings with sound waves that may give away secrets hidden deep in the solar interior.
A New Window on Star Birth
New submillimeter telescopes will open up unexplored territory.
The Curious Shapes of Cosmic Jets
Individual circumstances determine why jets come in so many different shapes.
The Many Faces of the Sun
During the International Solar Month astronomers observed the Sun across the wavelengths for the first time.
ASTRONOMY Sky Almanac
Eight Lunar Wonders
Odd lunar features have fascinated observers for centuries.
Your Own Piece of the Solar System
Create a three-dimensional model of your favorite planetary feature.
Galaxy Hunting around the Big Dipper
Springtime offers galaxies bright enough to be seen in binoculars and small telescopes.
How to Observe Planets during the Day
You don't have to wait until night to see planets and stars.
Behind the Scenes
Who Will Miss the Night Sky?
Saturn's Hexagon Jet
NASA's Mars Rover
Capturing Mars on Paper
Meetings and Events
Readings and Resources
|
The term ‘homosexuality’ was coined in the late 19th century by a German psychologist, Karoly Maria Benkert. Although the term is new, discussions about sexuality in general, and same-sex attraction in particular, have occasioned philosophical discussion ranging from Plato's Symposium to contemporary queer theory. Since the history of cultural understandings of same-sex attraction is relevant to the philosophical issues raised by those understandings, it is necessary to review briefly some of the social history of homosexuality. Arising out of this history, at least in the West, is the idea of natural law and some interpretations of that law as forbidding homosexual sex. References to natural law still play an important role in contemporary debates about homosexuality in religion, politics, and even courtrooms. Finally, perhaps the most significant recent social change involving homosexuality is the emergence of the gay liberation movement in the West. In philosophical circles this movement is, in part, represented through a rather diverse group of thinkers who are grouped under the label of queer theory. A central issue raised by queer theory, which will be discussed below, is whether homosexuality, and hence also heterosexuality and bisexuality, is socially constructed or purely driven by biological forces.
- 1. History
- 2. Historiographical Debates
- 3. Natural Law
- 4. Queer Theory and the Social Construction of Sexuality
- 5. Conclusion
- Academic Tools
- Other Internet Resources
- Related Entries
As has been frequently noted, the ancient Greeks did not have terms or concepts that correspond to the contemporary dichotomy of ‘heterosexual’ and ‘homosexual’. There is a wealth of material from ancient Greece pertinent to issues of sexuality, ranging from dialogues of Plato, such as the Symposium, to plays by Aristophanes, and Greek artwork and vases. What follows is a brief description of ancient Greek attitudes, but it is important to recognize that there was regional variation. For example, in parts of Ionia there were general strictures against same-sex eros, while in Elis and Boiotia (e.g., Thebes), it was approved of and even celebrated (cf. Dover, 1989; Halperin, 1990).
Probably the most frequent assumption of sexual orientation is that persons can respond erotically to beauty in either sex. Diogenes Laertius, for example, wrote of Alcibiades, the Athenian general and politician of the 5th century B.C., “in his adolescence he drew away the husbands from their wives, and as a young man the wives from their husbands.” (Quoted in Greenberg, 1988, 144) Some persons were noted for their exclusive interests in persons of one gender. For example, Alexander the Great and the founder of Stoicism, Zeno of Citium, were known for their exclusive interest in boys and other men. Such persons, however, are generally portrayed as the exception. Furthermore, the issue of what gender one is attracted to is seen as an issue of taste or preference, rather than as a moral issue. A character in Plutarch's Erotikos (Dialogue on Love) argues that “the noble lover of beauty engages in love wherever he sees excellence and splendid natural endowment without regard for any difference in physiological detail.” (Ibid., 146) Gender just becomes irrelevant “detail” and instead the excellence in character and beauty is what is most important.
Even though the gender that one was erotically attracted to (at any specific time, given the assumption that persons will likely be attracted to persons of both sexes) was not important, other issues were salient, such as whether one exercised moderation. Status concerns were also of the highest importance. Given that only free men had full status, women and male slaves were not problematic sexual partners. Sex between freemen, however, was problematic for status. The central distinction in ancient Greek sexual relations was between taking an active or insertive role, versus a passive or penetrated one. The passive role was acceptable only for inferiors, such as women, slaves, or male youths who were not yet citizens. Hence the cultural ideal of a same-sex relationship was between an older man, probably in his 20's or 30's, known as the erastes, and a boy whose beard had not yet begun to grow, the eromenos or paidika. In this relationship there was courtship ritual, involving gifts (such as a rooster), and other norms. The erastes had to show that he had nobler interests in the boy, rather than a purely sexual concern. The boy was not to submit too easily, and if pursued by more than one man, was to show discretion and pick the more noble one. There is also evidence that penetration was often avoided by having the erastes face his beloved and place his penis between the thighs of the eromenos, which is known as intercrural sex. The relationship was to be temporary and should end upon the boy reaching adulthood (Dover, 1989). To continue in a submissive role even while one should be an equal citizen was considered troubling, although there certainly were many adult male same-sex relationships that were noted and not strongly stigmatized. While the passive role was thus seen as problematic, to be attracted to men was often taken as a sign of masculinity. Greek gods, such as Zeus, had stories of same-sex exploits attributed to them, as did other key figures in Greek myth and literature, such as Achilles and Hercules. Plato, in the Symposium, argues for an army to be comprised of same-sex lovers. Thebes did form such a regiment, the Sacred Band of Thebes, formed of 300 soldiers. They were renowned in the ancient world for their valor in battle.
Ancient Rome had many parallels in its understanding of same-sex attraction, and sexual issues more generally, to ancient Greece. This is especially true under the Republic. Yet under the Empire, Roman society slowly became more negative in its views towards sexuality, probably due to social and economic turmoil, even before Christianity became influential.
Exactly what attitude the New Testament has towards sexuality in general, and same-sex attraction in particular, is a matter of sharp debate. John Boswell argues, in his fascinating Christianity, Social Tolerance, and Homosexuality, that many passages taken today as condemnations of homosexuality are more concerned with prostitution, or where same-sex acts are described as “unnatural” the meaning is more akin to ‘out of the ordinary’ rather than as immoral (Boswell, 1980, ch.4; see also Boswell, 1994). Yet others have criticized, sometimes persuasively, Boswell's scholarship (see Greenberg, 1988, ch.5). What is clear, however, is that while condemnation of same-sex attraction is marginal to the Gospels and only an intermittent focus in the rest of the New Testament, early Christian church fathers were much more outspoken. In their writings there is a horror at any sort of sex, but in a few generations these views eased, in part due no doubt to practical concerns of recruiting converts. By the fourth and fifth centuries the mainstream Christian view allowed for procreative sex.
This viewpoint, that procreative sex within marriage is allowed, while every other expression of sexuality is sinful, can be found, for example, in St. Augustine. This understanding leads to a concern with the gender of one's partner that is not found in previous Greek or Roman views, and it clearly forbids homosexual acts. Soon this attitude, especially towards homosexual sex, came to be reflected in Roman Law. In Justinian's Code, promulgated in 529, persons who engaged in homosexual sex were to be executed, although those who were repentant could be spared. Historians agree that the late Roman Empire saw a rise in intolerance towards sexuality, although there were again important regional variations.
With the decline of the Roman Empire, and its replacement by various barbarian kingdoms, a general tolerance (with the sole exception of Visigothic Spain) of homosexual acts prevailed. As one prominent scholar puts it, “European secular law contained few measures against homosexuality until the middle of the thirteenth century.” (Greenberg, 1988, 260) Even while some Christian theologians continued to denounce nonprocreative sexuality, including same-sex acts, a genre of homophilic literature, especially among the clergy, developed in the eleventh and twelfth centuries (Boswell, 1980, chapters 8 and 9).
The latter part of the twelfth through the fourteenth centuries, however, saw a sharp rise in intolerance towards homosexual sex, alongside persecution of Jews, Muslims, heretics, and others. While the causes of this are somewhat unclear, it is likely that increased class conflict alongside the Gregorian reform movement in the Catholic Church were two important factors. The Church itself started to appeal to a conception of “nature” as the standard of morality, and drew it in such a way so as to forbid homosexual sex (as well as extramarital sex, nonprocreative sex within marriage, and often masturbation). For example, the first ecumenical council to condemn homosexual sex, Lateran III of 1179, stated that “Whoever shall be found to have committed that incontinence which is against nature” shall be punished, the severity of which depended upon whether the transgressor was a cleric or layperson (quoted in Boswell, 1980, 277). This appeal to natural law (discussed below) became very influential in the Western tradition. An important point to note, however, is that the key category here is the ‘sodomite,’ which differs from the contemporary idea of ‘homosexual’. A sodomite was understood as act-defined, rather than as a type of person. Someone who had desires to engage in sodomy, yet did not act upon them, was not a sodomite. Also, persons who engaged in heterosexual sodomy were also sodomites. There are reports of persons being burned to death or beheaded for sodomy with a spouse (Greenberg, 1988, 277). Finally, a person who had engaged in sodomy, yet who had repented of his sin and vowed to never do it again, was no longer a sodomite. The gender of one's partner is again not of decisive importance, although some medieval theologians single out same-sex sodomy as the worst type of sexual crime.
For the next several centuries in Europe, the laws against homosexual sex were severe in their penalties. Enforcement, however, was episodic. In some regions, decades would pass without any prosecutions. Yet the Dutch, in the 1730's, mounted a harsh anti-sodomy campaign (alongside an anti-Gypsy pogrom), even using torture to obtain confessions. As many as one hundred men and boys were executed and denied burial (Greenberg, 1988, 313-4). Also, the degree to which sodomy and same-sex attraction were accepted varied by class, with the middle class taking the narrowest view, while the aristocracy and nobility often accepted public expressions of alternative sexualities. At times, even with the risk of severe punishment, same-sex oriented subcultures would flourish in cities, sometimes only to be suppressed by the authorities. In the 19th century there was a significant reduction in the legal penalties for sodomy. The Napoleonic code decriminalized sodomy, and with Napoleon's conquests that Code spread. Furthermore, in many countries where homosexual sex remained a crime, the general movement at this time away from the death penalty usually meant that sodomy was removed from the list of capital offenses.
In the 18th and 19th centuries an overtly theological framework no longer dominated the discourse about same-sex attraction. Instead, secular arguments and interpretations became increasingly common. Probably the most important secular domain for discussions of homosexuality was in medicine, including psychology. This discourse, in turn, linked up with considerations about the state and its need for a growing population, good soldiers, and intact families marked by clearly defined gender roles. Doctors were called in by courts to examine sex crime defendants (Foucault, 1980; Greenberg, 1988). At the same time, the dramatic increase in school attendance rates and the average length of time spent in school, reduced transgenerational contact, and hence also the frequency of transgenerational sex. Same-sex relations between persons of roughly the same age became the norm.
Clearly the rise in the prestige of medicine resulted in part from the increasing ability of science to account for natural phenomena on the basis of mechanistic causation. The application of this viewpoint to humans led to accounts of sexuality as innate or biologically driven. The voluntarism of the medieval understanding of sodomy, that sodomites chose sin, gave way to the modern notion of homosexuality as a deep, unchosen characteristic of persons, regardless of whether they act upon that orientation. The idea of a ‘latent sodomite’ would not have made sense, yet under this new view it does make sense to speak of a person as a ‘latent homosexual.’ Instead of specific acts defining a person, as in the medieval view, an entire physical and mental makeup, usually portrayed as somehow defective or pathological, is ascribed to the modern category of ‘homosexual.’ Although there are historical precursors to these ideas (e.g., Aristotle gave a physiological explanation of passive homosexuality), medicine gave them greater public exposure and credibility (Greenberg, 1988, ch.15). The effects of these ideas cut in conflicting ways. Since homosexuality is, by this view, not chosen, it makes less sense to criminalize it. Persons are not choosing evil acts. Yet persons may be expressing a diseased or pathological mental state, and hence medical intervention for a cure is appropriate. Hence doctors, especially psychiatrists, campaigned for the repeal or reduction of criminal penalties for consensual homosexual sodomy, yet intervened to “rehabilitate” homosexuals. They also sought to develop techniques to prevent children from becoming homosexual, for example by arguing that childhood masturbation caused homosexuality, hence it must be closely guarded against.
In the 20th century sexual roles were redefined once again. For a variety of reasons, premarital intercourse slowly became more common and eventually acceptable. With the decline of prohibitions against sex for the sake of pleasure even outside of marriage, it became more difficult to argue against gay sex. These trends were especially strong in the 1960's, and it was in this context that the gay liberation movement took off. Although gay and lesbian rights groups had been around for decades, the low-key approach of the Mattachine Society (named after a medieval secret society) and the Daughters of Bilitis had not gained much ground. This changed in the early morning hours of June 28, 1969, when the patrons of the Stonewall Inn, a gay bar in Greenwich Village, rioted after a police raid. In the aftermath of that event, gay and lesbian groups began to organize around the country. Gay Democratic clubs were created in every major city, and one fourth of all college campuses had gay and lesbian groups (Shilts, 1993, ch.28). Large gay urban communities in cities from coast to coast became the norm. The American Psychiatric Association removed homosexuality from its official listing of mental disorders. The increased visibility of gays and lesbians has become a permanent feature of American life despite the two critical setbacks of the AIDS epidemic and an anti-gay backlash (see Berman, 1993, for a good survey). The post-Stonewall era has also seen marked changes in Western Europe, where the repeal of anti-sodomy laws and legal equality for gays and lesbians has become common.
Broader currents in society have influenced the ways in which scholars and activists have approached research into sexuality and same-sex attraction. Some early 20th century researchers and equality advocates, seeking to vindicate same-sex relations in societies that disparaged and criminalized it, put forward lists of famous historical figures attracted to persons of the same sex. Such lists implied a common historical entity underlying sexual attraction, whether one called it ‘inversion’ or ‘homosexuality.’ This approach (or perhaps closely related family of approaches) is commonly called essentialism. Historians and researchers sympathetic to the gay liberation movement of the late 1960s and 1970s produced a number of books that implicitly relied on an essentialist approach. In the 1970s and 1980s John Boswell raised it to a new level of methodological and historical sophistication, although his position shifted over time to one of virtual agnosticism between essentialists and their critics. Crompton’s work (2003) is a notable contemporary example of an essentialist methodology.
Essentialists claim that categories of sexual attraction are observed rather than created. For example, while ancient Greece did not have terms that correspond to the heterosexual/homosexual division, persons did note men who were only attracted to persons of a specific sex. Through history and across cultures there are consistent features, albeit with meaningful variety over time and space, in sexual attraction to the point that it makes sense to speak of specific sexual orientations. According to this view, homosexuality is a specific, natural kind rather than a cultural or historical product. Essentialists allow that there are cultural differences in how homosexuality is expressed and interpreted, but they emphasize that this does not prevent it from being a universal category of human sexual expression.
In contrast, in the 1970s and since a number of researchers, often influenced by Mary McIntosh or Michel Foucault, argued that class relations, the human sciences, and other historically constructed forces create sexual categories and the personal identities associated with them. For advocates of this view, such as David Halperin, how sex is organized in a given cultural and historical setting is irreducibly particular (Halperin, 2002). The emphasis on the social creation of sexual experience and expression led to the labeling of the viewpoint as social constructionism, although more recently several of its proponents have preferred the term ‘historicism.’ Thus homosexuality, as a specific sexual construction, is best understood as a solely modern, Western concept and role. Prior to the development of this construction, persons were not really ‘homosexual’ even when they were only attracted to persons of the same sex. The differences between, say, ancient Greece, with its emphasis on pederasty, role in the sex act, and social status, and the contemporary Western role of ‘gay’ or ‘homosexual’ are simply too great to collapse into one category.
In a manner closely related to the claims of queer theory, discussed below, social constructionists argue that specific social constructs produce sexual ways of being. There is no given mode of sexuality that is independent of culture; even the concept and experience of sexual orientation itself are products of history. For advocates of this view, the range of historical sexual diversity, and the fluidity of human possibility, is simply too varied to be adequately captured by any specific conceptual scheme.
There is a significant political dimension to this seemingly abstract historiographical debate. Social constructionists argue that essentialism is the weaker position politically for at least two reasons. First, by accepting a basic heterosexual/homosexual organizing dichotomy, essentialism wrongly concedes that heterosexuality is the norm and that homosexuality is, strictly speaking, abnormal and the basis for a permanent minority. Second, social constructionists argue that an important goal of historical investigations should be to put into question contemporary organizing schemas about sexuality. The acceptance of the contemporary heterosexual/homosexual dichotomy is conservative, perhaps even reactionary, and forecloses the exploration of new possibilities. (There are related queer theory criticisms of the essentialist position, discussed below.) In contrast, essentialists argue that a historicist approach forecloses the very possibility of a ‘gay history.’ Instead, the field of investigation becomes other social forces and how they ‘produce’ a distinct form or forms of sexuality. Only an essentialist approach can maintain the project of gay history, and minority histories in general, as a force for liberation.
Today natural law theory offers the most common intellectual defense for differential treatment of gays and lesbians, and as such it merits attention. The development of natural law is a long and very complicated story, but a reasonable place to begin is with the dialogues of Plato, for this is where some of the central ideas are first articulated, and, significantly enough, are immediately applied to the sexual domain. For the Sophists, the human world is a realm of convention and change, rather than of unchanging moral truth. Plato, in contrast, argued that unchanging truths underpin the flux of the material world. Reality, including eternal moral truths, is a matter of phusis. Even though there is clearly a great degree of variety in conventions from one city to another (something ancient Greeks became increasingly aware of), there is still an unwritten standard, or law, that humans should live under.
In the Laws, Plato applies the idea of a fixed, natural law to sex, and takes a much harsher line than he does in the Symposium or the Phaedrus. In Book One he writes about how opposite-sex sex acts cause pleasure by nature, while same-sex sexuality is “unnatural” (636c). In Book Eight, the Athenian speaker considers how to have legislation banning homosexual acts, masturbation, and illegitimate procreative sex widely accepted. He then states that this law is according to nature (838-839d). Probably the best way of understanding Plato's discussion here is in the context of his overall concerns with the appetitive part of the soul and how best to control it. Plato clearly sees same-sex passions as especially strong, and hence particularly problematic, although in the Symposium that erotic attraction could be the catalyst for a life of philosophy, rather than base sensuality (Cf. Dover, 1989, 153-170; Nussbaum, 1999, esp. chapter 12).
Other figures played important roles in the development of natural law theory. Aristotle, with his emphasis upon reason as the distinctive human function, and the Stoics, with their emphasis upon human beings as a part of the natural order of the cosmos, both helped to shape the natural law perspective which says that “True law is right reason in agreement with nature,” as Cicero put it. Aristotle, in his approach, did allow for change to occur according to nature, and therefore the way that natural law is embodied could itself change with time, which was an idea Aquinas later incorporated into his own natural law theory. Aristotle did not write extensively about sexual issues, since he was less concerned with the appetites than Plato. Probably the best reconstruction of his views places him in mainstream Greek society as outlined above; the main issue is that of active versus a passive role, with only the latter problematic for those who either are or will become citizens. Zeno, the founder of Stoicism, was, according to his contemporaries, only attracted to men, and his thought had no prohibitions against same-sex sexuality. In contrast, Cicero, a later Stoic, was dismissive about sexuality in general, with some harsher remarks towards same-sex pursuits (Cicero, 1966, 407-415).
The most influential formulation of natural law theory was made by Thomas Aquinas in the thirteenth century. Integrating an Aristotelian approach with Christian theology, Aquinas emphasized the centrality of certain human goods, including marriage and procreation. While Aquinas did not write much about same-sex sexual relations, he did write at length about various sex acts as sins. For Aquinas, sexuality that was within the bounds of marriage and which helped to further what he saw as the distinctive goods of marriage, mainly love, companionship, and legitimate offspring, was permissible, and even good. Aquinas did not argue that procreation was a necessary part of moral or just sex; married couples could enjoy sex without the motive of having children, and sex in marriages where one or both partners is sterile (perhaps because the woman is postmenopausal) is also potentially just (given a motive of expressing love). So far Aquinas' view actually need not rule out homosexual sex. For example, a Thomist could embrace same-sex marriage, and then apply the same reasoning, simply seeing the couple as a reproductively sterile, yet still fully loving and companionate union.
Aquinas, in a significant move, adds a requirement that for any given sex act to be moral it must be of a generative kind. The only way that this can be achieved is via vaginal intercourse. That is, since only the emission of semen in a vagina can result in natural reproduction, only sex acts of that type are generative, even if a given sex act does not lead to reproduction, and even if reproduction is impossible due to infertility. The consequence of this addition is, of course, to rule out the possibility that homosexual sex could ever be moral (even if done within a loving marriage), in addition to forbidding any non-vaginal sex for opposite-sex married couples. What is the justification for this important addition? This question is made all the more pressing in that Aquinas does allow that how broad moral rules apply to individuals may vary considerably, since the nature of persons also varies to some extent. That is, since Aquinas allows that individual natures vary, one could simply argue that one is, by nature, emotionally and physically attracted to persons of one's own gender, and hence to pursue same-sex relationships is ‘natural’ (Sullivan, 1995). Unfortunately, Aquinas does not spell out a justification for this generative requirement.
More recent natural law theorists, however, have tried a couple of different lines of defense for Aquinas' ‘generative type’ requirement. The first is that sex acts involving homosexuality or heterosexual sodomy, or making use of contraception, frustrate the purpose of the sex organs, which is reproduction. This argument, often called the ‘perverted faculty argument’, is perhaps implicit in Aquinas. It has, however, come in for sharp attack (see Weithman, 1997), and the best recent defenders of a Thomistic natural law approach are attempting to move beyond it (e.g., George, 1999, dismisses the argument). If their arguments fail, of course, they must allow that some homosexual sex acts are morally permissible (even positively good), although they would still have resources with which to argue against casual gay (and straight) sex.
Although the specifics of the second sort of argument offered by various contemporary natural law theorists vary, the common elements are strong (Finnis, 1994; George, 1999). As Thomists, their argument rests largely upon an account of human goods. The two most important for the argument against homosexual sex (though not against homosexuality as an orientation which is not acted upon, and hence in this they follow official Catholic doctrine; see George, 1999, ch.15) are personal integration and marriage. Personal integration, in this view, is the idea that humans, as agents, need to have integration between their intentions as agents and their embodied selves. Thus, to use one's or another's body as a mere means to one's own pleasure, as they argue happens with masturbation, causes ‘dis-integration’ of the self. That is, one's intention then is just to use a body (one's own or another's) as a mere means to the end of pleasure, and this detracts from personal integration. Yet one could easily reply that two persons of the same sex engaging in sexual union does not necessarily imply any sort of ‘use’ of the other as a mere means to one's own pleasure. Hence, natural law theorists respond that sexual union in the context of the realization of marriage as an important human good is the only permissible expression of sexuality. Yet this argument requires construing marriage as an important good in a very particular way, since it puts procreation at the center of marriage as its “natural fulfillment” (George, 1999, 168). Natural law theorists, if they want to support their objection to homosexual sex, have to emphasize procreation. If, for example, they were to place love and mutual support for human flourishing at the center, it is clear that many same-sex couples would meet this standard. Hence their sexual acts would be morally just.
There are, however, several objections that are made against this account of marriage as a central human good. One is that by placing procreation as the ‘natural fulfillment’ of marriage, sterile marriages are thereby denigrated. Sex in an opposite-sex marriage where the partners know that one or both of them are sterile is not done for procreation. Yet surely it is not wrong. Why, then, is homosexual sex in the same context (a long-term companionate union) wrong (Macedo, 1995)? The natural law rejoinder is that while vaginal intercourse is a potentially procreative sex act, considered in itself (though admitting the possibility that it may be impossible for a particular couple), oral and anal sex acts are never potentially procreative, whether heterosexual or homosexual (George, 1999). But is this biological distinction also morally relevant, and in the manner that natural law theorists assume? Natural law theorists, in their discussions of these issues, seem to waver. On the one hand, they want to defend an ideal of marriage as a loving union wherein two persons are committed to their mutual flourishing, and where sex is a complement to that ideal. Yet that opens the possibility of permissible gay sex, or heterosexual sodomy, both of which they want to oppose. So they then defend an account of sexuality which seems crudely reductive, emphasizing procreation to the point where literally a male orgasm anywhere except in the vagina of one's loving spouse is impermissible. Then, when accused of being reductive, they move back to the broader ideal of marriage.
Natural law theory, at present, has made significant concessions to mainstream liberal thought. In contrast certainly to its medieval formulation, most contemporary natural law theorists argue for limited governmental power, and do not believe that the state has an interest in attempting to prevent all moral wrongdoing. Still, they do argue against homosexuality, and against legal protections for gays and lesbians in terms of employment and housing, even to the point of serving as expert witnesses in court cases or helping in the writing of amicus curiae briefs. They also argue against same-sex marriage (Bradley, 2001; George, 2001).
With the rise of the gay liberation movement in the post-Stonewall era, overtly gay and lesbian perspectives began to be put forward in politics, philosophy and literary theory. Initially these often were overtly linked to feminist analyses of patriarchy (e.g., Rich, 1980) or other, earlier approaches to theory. Yet in the late 1980's and early 1990's queer theory was developed, although there are obviously important antecedents which make it difficult to date it precisely. There are a number of ways in which queer theory differed from earlier gay liberation theory, but an important initial difference can be gotten at by examining the reasons for opting for the term ‘queer’ as opposed to ‘gay and lesbian.’ Some versions of, for example, lesbian theory portrayed the essence of lesbian identity and sexuality in very specific terms: non-hierarchical, consensual, and, specifically in terms of sexuality, as not necessarily focused upon genitalia (e.g., Faderman, 1985). Lesbians arguing from this framework, for example, could very well criticize natural law theorists as inscribing into the very “law of nature” an essentially masculine sexuality, focused upon the genitals, penetration, and the status of the male orgasm (natural law theorists rarely mention female orgasms).
This approach, based upon characterizations of ‘lesbian’ and ‘gay’ identity and sexuality, however, suffered from three difficulties. First, it appeared that even though the goal was to critique a heterosexist regime for its exclusion and marginalization of those whose sexuality is different, any specific or “essentialist” account of gay or lesbian sexuality had the same effect. Sticking with the example used above, of a specific conceptualization of lesbian identity, it denigrates women who are sexually and emotionally attracted to other women, yet who do not fit the description. Sado-masochists and butch/fem lesbians arguably do not fit the ideal of ‘equality’ it offers. A second problem was that by placing such an emphasis upon the gender of one's sexual partner(s), other possibly important sources of identity are marginalized, such as race and ethnicity. The implication is that what is of utmost importance for, say, a black lesbian is her lesbianism, rather than her race. Many gays and lesbians of color attacked this approach, accusing it of re-inscribing an essentially white identity into the heart of gay or lesbian identity (Jagose, 1996).
The third and final problem for the gay liberationist approach was that it often took this category of ‘identity’ itself as unproblematic and unhistorical. Such a view, however, largely because of arguments developed within poststructuralism, seemed increasingly untenable. The key figure in the attack upon identity as ahistorical is Michel Foucault. In a series of works he set out to analyze the history of sexuality from ancient Greece to the modern era (1980, 1985, 1986). Although the project was tragically cut short by his death in 1984, from complications arising from AIDS, Foucault articulated how profoundly understandings of sexuality can vary across time and space, and his arguments have proven very influential in gay and lesbian theorizing in general, and queer theory in particular (Spargo, 1999; Stychin, 2005).
One of the reasons for the historical review above is that it helps to give some background for understanding the claim that sexuality is socially constructed, rather than given by nature. Moreover, in order to not prejudge the issue of social constructionism versus essentialism, I avoided applying the term ‘homosexual’ to the ancient or medieval eras. In ancient Greece the gender of one's partner(s) was not important, but instead whether one took the active or passive role. In the medieval view, a ‘sodomite’ was a person who succumbed to temptation and engaged in certain non-procreative sex acts. Although the gender of the partner was more important than in the ancient view, the broader theological framework placed the emphasis upon a sin versus refraining-from-sin dichotomy. With the rise of the notion of ‘homosexuality’ in the modern era, a person is placed into a specific category even if one does not act upon those inclinations. What is the common, natural sexuality expressed across these three very different cultures? The social constructionist answer is that there is no ‘natural’ sexuality; all sexual understandings are constructed within and mediated by cultural understandings. The examples can be pushed much further by incorporating anthropological data outside of the Western tradition (Halperin, 1990; Greenberg, 1988). Yet even within the narrower context offered here, the differences between them are striking. The assumption in ancient Greece was that men (less is known about women) can respond erotically to either sex, and the vast majority of men who engaged in same-sex relationships were also married (or would later become married). Yet the contemporary understanding of homosexuality divides the sexual domain in two, heterosexual and homosexual, and most heterosexuals cannot respond erotically to their own sex.
In saying that sexuality is a social construct, these theorists are not saying that these understandings are not real. Since persons are also constructs of their culture (in this view), we are made into those categories. Hence today persons of course understand themselves as straight or gay (or perhaps bisexual), and it is very difficult to step outside of these categories, even once one comes to see them as the historical constructs they are.
Gay and lesbian theory was thus faced with three significant problems, all of which involved difficulties with the notion of ‘identity.’ Queer theory thus arose in large part as an attempt to overcome them. How queer theory does so can be seen by looking at the term ‘queer’ itself. In contrast to gay or lesbian, ‘queer,’ it is argued, does not refer to an essence, whether of a sexual nature or not. Instead it is purely relational, standing as an undefined term that gets its meaning precisely by being that which is outside of the norm, however that norm itself may be defined. As one of the most articulate queer theorists puts it: “Queer is … whatever is at odds with the normal, the legitimate, the dominant. There is nothing in particular to which it necessarily refers. It is an identity without an essence” (Halperin, 1995, 62, original emphasis). By lacking any essence, queer does not marginalize those whose sexuality is outside of any gay or lesbian norm, such as sado-masochists. Since specific conceptualizations of sexuality are avoided, and hence not put at the center of any definition of queer, it allows more freedom for self-identification for, say, black lesbians to identify as much or more with their race (or any other trait, such as involvement in an S & M subculture) than with lesbianism. Finally, it incorporates the insights of poststructuralism about the difficulties in ascribing any essence or non-historical aspect to identity.
This central move by queer theorists, the claim that the categories through which identity is understood are all social constructs rather than given to us by nature, opens up a number of analytical possibilities. For example, queer theorists examine how fundamental notions of gender and sex which seem so natural and self-evident to persons in the modern West are in fact constructed and reinforced through everyday actions, and that this occurs in ways that privilege heterosexuality (Butler, 1990, 1993). Also examined are medical categories which are themselves socially constructed (Fausto-Sterling, 2000, is an erudite example of this, although she is not ultimately a queer theorist). Others examine how language and especially divisions between what is said and what is not said, corresponding to the dichotomy between ‘closeted’ and ‘out,’ especially in regards to the modern division of heterosexual/homosexual, structure much of modern thought. That is, it is argued that when we look at dichotomies such as natural/artificial, or masculine/feminine, we find in the background an implicit reliance upon a very recent, and arbitrary, understanding of the sexual world as split into two species (Sedgwick, 1990). The fluidity of categories created through queer theory even opens the possibility of new sorts of histories that examine previously silent types of affections and relationships (Carter, 2005).
Another critical perspective opened up by a queer approach, although certainly implicit in those just referred to, is especially important. Since most anti-gay and lesbian arguments rely upon the alleged naturalness of heterosexuality, queer theorists attempt to show how these categories are themselves deeply social constructs. An example helps to illustrate the approach. In an essay against gay marriage, chosen because it is very representative, James Q. Wilson (1996) contends that gay men have a “great tendency” to be promiscuous. In contrast, he puts forward loving, monogamous marriage as the natural condition of heterosexuality. Heterosexuality, in his argument, is an odd combination of something completely natural yet simultaneously endangered. One is born straight, yet this natural condition can be subverted by such things as the presence of gay couples, gay teachers, or even excessive talk about homosexuality. Wilson's argument requires a radical disjunction between heterosexuality and homosexuality. If gayness is radically different, it is legitimate to suppress it. Wilson has the courage to be forthright about this element of his argument; he comes out against “the political imposition of tolerance” towards gays and lesbians (Wilson, 1996, 35).
It is a common move in queer theory to bracket, at least temporarily, issues of truth and falsity (Halperin, 1995). Instead, the analysis focuses on the social function of discourse. Questions of who counts as an expert and why, and concerns about the effects of the expert's discourse are given equal status to questions of the verity of what is said. This approach reveals that hidden underneath Wilson's (and other anti-gay) work is an important epistemological move. Since heterosexuality is the natural condition, it is a place that is spoken from but not inquired into. In contrast, homosexuality is the aberration and hence it needs to be studied but it is not an authoritative place from which one can speak. By virtue of this heterosexual privilege, Wilson is allowed the voice of the impartial, fair-minded expert. Yet, as the history section above shows, there are striking discontinuities in understandings of sexuality, and this is true to the point that, according to queer theorists, we should not think of sexuality as having any particular nature at all. Through undoing our infatuation with any specific conception of sexuality, the queer theorist opens space for marginalized forms.
Queer theory, however, has been criticized in a myriad of ways (Jagose, 1996). One set of criticisms comes from theorists who are sympathetic to gay liberation conceived as a project of radical social change. An initial criticism is that precisely because ‘queer’ does not refer to any specific sexual status or gender object choice (for example, Halperin (1995) allows that straight persons may be ‘queer’), it robs gays and lesbians of the distinctiveness of what makes them marginal. It desexualizes identity, when the issue is precisely about a sexual identity (Jagose, 1996). A related criticism is that queer theory, since it refuses any essence or reference to standard ideas of normality, cannot make crucial distinctions. For example, queer theorists usually argue that one of the advantages of the term ‘queer’ is that it thereby includes transsexuals, sado-masochists, and other marginalized sexualities. How far does this extend? Is transgenerational sex (e.g., pedophilia) permissible? Are there any limits upon the forms of acceptable sado-masochism or fetishism? While some queer theorists specifically disallow pedophilia, it is an open question whether the theory has the resources to support such a distinction. Furthermore, some queer theorists overtly refuse to rule out pedophiles as ‘queer’ (Halperin, 1995, 62). Another criticism is that queer theory, in part because it typically has recourse to a very technical jargon, is written by a narrow elite for that narrow elite. It is therefore class biased and also, in practice, only really referred to at universities and colleges (Malinowitz, 1993).
Queer theory is also criticized by those who reject the desirability of radical social change. For example, centrist and conservative gays and lesbians have criticized a queer approach by arguing that it will be “disastrously counter-productive” (Bawer, 1996, xii). If ‘queer’ keeps its connotation of something perverse and at odds with mainstream society, which is precisely what most queer theorists want, it would seem only to validate the attacks upon gays and lesbians made by conservatives. Sullivan (1996) also criticizes queer theorists for relying upon Foucault's account of power, which he argues does not allow for meaningful resistance. It seems likely, however, that Sullivan's understanding of Foucault's notions of power and resistance is misguided.
The debates about homosexuality, in part because they often involve public policy and legal issues, tend to be sharply polarized. Those most concerned with homosexuality, positively or negatively, are also those most engaged, with natural law theorists arguing that gays and lesbians should have a reduced legal status, and queer theorists engaged in critique and deconstruction of what they see as a heterosexist regime. Yet the two do not talk much to one another, but rather ignore or talk past one another. There are some theorists in the middle. For example, Michael Sandel takes an Aristotelian approach from which he argues that gay and lesbian relationships can realize the same goods that heterosexual relationships do (Sandel, 1995). He largely shares the account of important human goods that natural law theorists have, yet in his evaluation of the worth of same-sex relationships, he is clearly sympathetic to gay and lesbian concerns. Similarly, Bruce Bawer (1993) and Andrew Sullivan (1995) have written eloquent defenses of full legal equality for gays and lesbians, including marriage rights. Yet neither argues for any systematic reform of broader American culture or politics. In this they are essentially conservative. Therefore, rather unsurprisingly, these centrists are attacked from both sides. Sullivan, for example, has been criticized at length both by queer theorists (e.g., Phelan, 2001) and natural law theorists (e.g., George, 1999).
Yet as the foregoing also clearly shows, the policy and legal debates surrounding homosexuality involve fundamental issues of morality and justice. Perhaps most centrally of all, they cut to issues of personal identity and self-definition. Hence there is another, and even deeper, set of reasons for the polarization that marks these debates.
- Bawer, Bruce, 1993, A Place at the Table: The Gay Individual in American Society. New York: Poseidon Press.
- –––, 1996, Beyond Queer: Challenging Gay Left Orthodoxy. New York: The Free Press.
- Berman, Paul, 1993, “Democracy and Homosexuality” in The New Republic. Vol.209, No.25 (December 20): pp.17-35.
- Boswell, John, 1980, Christianity, Social Tolerance, and Homosexuality: Gay People in Western Europe from the Beginning of the Christian Era to the Fourteenth Century. Chicago: The University of Chicago Press.
- –––, 1994, Same-Sex Unions in Premodern Europe. New York: Vintage Books.
- Bradley, Gerard V., 2001, “The End of Marriage” in Marriage and the Common Good. Ed. by Kenneth D. Whitehead. South Bend, IN: St. Augustine's Press.
- Butler, Judith, 1990, Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge.
- –––, 1993, Bodies That Matter: On the Discursive Limits of “Sex”. New York: Routledge.
- Carter, Julian, 2005, “On Mother-Love: History, Queer Theory, and Nonlesbian Identity” Journal of the History of Sexuality, Vol.14: 107-138.
- Cicero, 1966, Tusculan Disputations. Cambridge, MA: Harvard University Press.
- Crompton, Louis, 2003, Homosexuality and Civilization. Cambridge, MA: Harvard University Press.
- Dover, K.J., 1978, 1989, Greek Homosexuality. Cambridge, MA: Harvard University Press.
- Faderman, Lillian, 1985, Surpassing the Love of Men: Romantic Friendship and Love Between Women from the Renaissance to the Present. London: The Women's Press.
- Fausto-Sterling, Anne, 2000, Sexing the Body: Gender Politics and the Construction of Sexuality. New York: Basic Books.
- Finnis, John, 1994, “Law, Morality, and ‘Sexual Orientation’” Notre Dame Law Review 69: 1049-1076.
- Foucault, Michel, 1980, The History of Sexuality. Volume One: An Introduction. Translated by Robert Hurley. New York: Vintage Books.
- –––, 1985, The History of Sexuality. Volume Two: The Use of Pleasure. New York: Pantheon Books.
- –––, 1986, The History of Sexuality. Volume Three: The Care of the Self. New York: Pantheon.
- George, Robert P., 1999, In Defense of Natural Law. New York: Oxford University Press.
- –––, 2001, “‘Same-Sex Marriage’ and ‘Moral Neutrality’” in Marriage and the Common Good. Ed. by Kenneth D. Whitehead. South Bend, IN: St. Augustine's Press.
- Greenberg, David F., 1988, The Construction of Homosexuality. Chicago: The University of Chicago Press.
- Halperin, David M., 1990, One Hundred Years of Homosexuality: and other essays on Greek love. New York: Routledge.
- –––, 1995, Saint Foucault: Towards a Gay Hagiography. New York: Oxford University Press.
- Jagose, Annamarie, 1996, Queer Theory: An Introduction. New York: New York University Press.
- Macedo, Stephen, 1995, “Homosexuality and the Conservative Mind” Georgetown Law Journal 84: 261-300.
- Malinowitz, Harriet, 1993, “Queer Theory: Whose Theory?” Frontiers, Vol.13: 168-184.
- Nussbaum, Martha, 1999, Sex and Social Justice. New York: Oxford University Press.
- Phelan, Shane, 2001, Sexual Strangers: Gays, Lesbians, and Dilemmas of Citizenship. Philadelphia: Temple University Press.
- Plato, The Symposium. Translated by Walter Hamilton. New York: Penguin Books, 1981.
- Plato, The Laws. Translated by Trevor Saunders. New York: Penguin Books, 1970.
- Rich, Adrienne, 1980, “Compulsory Heterosexuality and Lesbian Existence” in Women, Sex, and Sexuality. Edited by Catharine Stimpson and Ethel Spector Person. Chicago: University of Chicago Press.
- Sandel, Michael J., 1995, “Moral Argument and Liberal Toleration: Abortion and Homosexuality” in New Communitarian Thinking: Persons, Virtues, Institutions, and Communities. Edited by Amitai Etzioni. Charlottesville: University Press of Virginia.
- Sedgwick, Eve Kosofsky, 1990, Epistemology of the Closet. Berkeley: University of California Press.
- Shilts, Randy, 1993, Conduct Unbecoming: Gays and Lesbians in the U.S. Military. New York: St. Martin's Press.
- Spargo, Tamsin, 1999, Foucault and Queer Theory. New York: Totem Books.
- Stychin, Carl F., 2005, “Being Gay” Government and Opposition, Vol.40: 90-109.
- Sullivan, Andrew, 1995, Virtually Normal: An Argument about Homosexuality. New York: Knopf.
- Weithman, Paul J., 1997, “Natural Law, Morality, and Sexual Complementarity” in Sex, Preference, and Family: Essays on Law and Nature. Edited by David M. Estlund and Martha C. Nussbaum. New York: Oxford University Press.
- Wilson, James Q., 1996, “Against Homosexual Marriage” Commentary, Vol.101, No.3 (March): 34-39.
I. E. Coop
In such a wide field as pastures and crops it is impossible, in the time and space available, to do more than summarise the main lines of research and development of recent years having special reference to underdeveloped countries. In the long term the greatest advances have been made in fundamental knowledge, such as the basic physiology and biochemistry of plant growth, the significance of the C4 pathway in tropical plants, genetic engineering, embryo transfer in animals, remote sensing photography, and the ecology of the world's grasslands. The pressing problems of rangeland degeneration, and of social and economic change in human societies, are also better understood.
The immediate problem, and the task of this paper, is to descend to a more practical level: what has been learned about the possibilities of increasing pasture and forage production, and of utilising the feed grown in the most efficient manner.
Approximately one half of the sheep and three quarters of the goats in developing countries are within the tropics, the remainder being in a band from North Africa through the Near East to China. It is useful to have some classification of climatic zones governing plant growth, and for the tropics it is given below:
| Zone | Rainfall¹ (mm) | Rainfall² (mm) | Growing Period (days) | Dry Season (months) |
|---|---|---|---|---|
| Arid | < 400 | < 500 | < 90 | > 8 |
| Semi-arid | 400–750 | 500–1000 | 90–180 | 6–8 |
| Subhumid | 750–1200 | 1000–1500 | 180–270 | 4–6 |
| Humid | > 1200 | > 1500 | > 270 | < 4 |
¹ From Unesco (1979)
² From Jahnke (1982) for Africa
A feature of research and development work in the tropics, with the exception of the arid zone, is that cattle rather than sheep and goats have been used, so that in the absence of good data on small ruminants one unfortunately has to extrapolate from cattle data. It is proposed to discuss briefly the recent pasture utilisation studies with sheep in the temperate zone and then move to the arid and semi-arid pastoral zones, followed by the cropping/livestock situation in the semi-arid and subhumid zones, and finally the subhumid and humid zones.
4 RD, Christchurch, New Zealand.
PASTURE UTILISATION IN THE TEMPERATE ZONE
Research activities, and the methods derived therefrom, for increasing ruminant production in developed countries follow fairly standardised lines: breeding and selection for improved cultivars, determination of plant nutrient (fertiliser) requirements, determination of animal feed requirements, and grazing management studies aimed at integrating pasture growth and animal requirements with maximum efficiency on a year-round basis. In the temperate zone under favourable conditions grass-legume pastures are capable of yielding 10–20 t DM/ha/annum on cultivatable land and 5–10 t DM/ha/annum on hill land oversown with clovers, especially white clover (Trifolium repens).
While these studies have progressed on a broad front, in recent years special attention has been devoted to the most difficult and complex area: the efficient use of pasture under grazing (Morley, 1981; Parsons et al., 1983). Techniques have been developed for measuring herbage mass, ratio of green to dead leaf, net DM growth of green material under various grazing pressures, the intake of the grazing sheep (or goat) at various levels of pasture availability, the pasture availability needed to promote given levels of production, and the residual pasture (DM/ha) at which production falls below critical levels. Concurrently research has determined the critical and non-critical nutritional periods in the annual cycle of the ewe, and the extent to which the resilience of the ewe to gain and lose body fat may be used to buffer seasonal peaks and troughs of pasture growth (Coop, 1982; Milligan, 1983).
Finally, to put this into practice requires control of pasture growth and control or rationing of the intake of the grazing sheep. This can only be achieved by adequate subdivision with fencing, which has been greatly facilitated by the development of electric fencing and, in really intensive systems, by the additional use of cheap portable electric fencing. Such fencing is also used for strip grazing of forage crops, in order to get maximum utilisation of the forage.
The efficient conversion of pasture to animal production is a highly complex matter because of the interactions between the grazing animal and the pasture, interactions which vary with season. At low stocking rates percentage utilisation is low and continuous grazing is as good as, or better than, rotational grazing. However, to obtain high animal production per hectare, intensive grazing at high stocking rates is required in order to give a high percentage utilisation, and in this case rotational grazing is superior. In practice compromises become necessary because pasture growth is seasonal: there are periods when utilisation is sacrificed in the interests of achieving high individual animal growth rates, and others, such as in winter, when utilisation is much more important than liveweight gain.
The efficiency of utilisation of native pasture in extensive grazing systems running less than 2 sheep/ha is estimated in recent research to be below 30%. When such pasture is improved by oversowing, fertilisation and fencing, utilisation can be increased to 60–70%. On really intensively grazed cultivated pastures efficiencies of 70–85% are possible on a year-round basis. Some appreciation of intensive grazing and utilisation may be gauged from the current practice of wintering pregnant ewes in New Zealand, where the ewes are rotated, at a density of 1000 ewes per hectare, on a daily shift behind electric fences.
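At that density each ewe's daily break works out to roughly 10 000 m²/ha ÷ 1000 ewes/ha = about 10 m² of pasture per ewe per day, which gives some idea of how tightly intake is rationed under such a system.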
In the Northern Hemisphere where winters are colder, greater reliance on hay and silage is made for winter feed. It is estimated that the percentage utilisation of metabolisable energy (ME) of the original pasture, when consumed as hay, is below 50%. For this, and for reasons of cost, the Southern Hemisphere grazing countries place emphasis on utilisation of pasture by the grazing animal with minimal use of conserved fodder.
Advances in grazing management and utilisation have nevertheless been made in northern countries. One example of this is the “two pasture” system developed for the wet cold hill country of Britain, whereby a smaller area of improved pasture is integrated with the larger area of unimproved hill land and utilised at strategic points in the annual cycle of the ewes. Another is the “three pasture” system aimed at minimising worm parasite problems. Finally, in all the major grazing countries there is increasing evidence that cattle, sheep and goats can all be beneficial to one another, the special grazing characteristics of each being complementary to the others. With goats this is seen especially in their preference for weeds and pasture species not relished by sheep.
If proof is needed that modern pasture/sheep technology can lead to increased animal production, the case of New Zealand may be quoted, where from approximately the same area of grazing land sheep numbers increased from 33 million in 1950 to 53 million in 1965 and to 70 million in 1982, with proportionate increases in meat and wool output. While only some of this temperate-zone technology is immediately transferable to developing countries, the objectives and the principles certainly are.
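In other words, sheep numbers slightly more than doubled (roughly a 110% increase) over 32 years from essentially the same grazing area.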
ARID AND SEMI-ARID ZONE RANGELANDS
The extensive rangelands of the arid and semi-arid zones of developing countries and the peoples they support are in varying degrees of crisis as a result of rangeland degradation, brought about by overstocking. The area is traditionally used solely by pastoralists under nomadic and transhumant systems, but the pressure of human population has led to the incursion of agriculturalists with their livestock into marginal areas, so putting an unbearable pressure on the rangeland vegetation.
Much has been written about the current state of rangeland vegetation, and about the social and economic impediments as well as the technical difficulties in reversing the deterioration (e.g. Unesco, 1979; Jahnke, 1982; Harrington, 1982; Malechek, 1982). While there are instances of potential improvements or of improvements actually made, the consensus of opinion among authors is that the only short- and mid-term solution is to reduce grazing pressure. It is recommended that this be achieved by destocking, or by deferred grazing or some other form of grazing management which would permit more even grazing and reduce severe overgrazing on critical areas. A recent FAO review (FAO, 1984) commented that there is a need for rehabilitation through the introduction of good management, that forage cultivation is not yet generally accepted, and that conservation of hay and silage is rarely practised. There is a need to introduce forage trees and browse shrubs, but there is little likelihood of increasing forage availability in the near future owing to the pressure of livestock combined with the persistence of drought.
The productivity of the arid and semi-arid zone rangelands is low. Jahnke (1982), quoting other authorities, gives a figure of 2.5 kg DM/ha/annum per mm of rainfall, or about 1 t DM/ha/annum at 400 mm, which is likely to be inefficiently utilised. Such yields cannot hope to generate enough income to provide an incentive to introduce improved species even if this were technologically feasible.
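As a rough check on that figure, 2.5 kg DM/ha per mm × 400 mm = 1 000 kg DM/ha/annum, i.e. about 1 t DM/ha/annum; applying the same coefficient at the 750 mm upper limit of the semi-arid zone gives only about 1.9 t DM/ha/annum.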
While it must be accepted by the inhabitants and by Governments that reduction in grazing pressure is the only short-term solution, one must not be entirely negative. Observation and development project results indicate that there are avenues for improvement, and some specific examples of these are listed below.
Grain yields and sheep production in South Australia, where fallow has been replaced with subterranean clover and medic pasture, were twice those in Algeria, which has a similar Mediterranean climate but does not integrate crop and sheep grazing (Allden, 1982, quoting Carter).
In the Drought Prone Areas Programme in Western India, the introduction of Cenchrus ciliaris and Lasiurus sindicus increased DM yield from 0.4 t to 3 t/ha/annum (Jain, 1983).
Depleted rangeland in China has been shown to be capable of yielding 3 t DM/ha/annum by oversowing with milk vetch and applying fertiliser (Chinzagco project, pers. comm.). At another site with 300 mm rainfall, all in summer, the yields of native grassland have been doubled with fertiliser alone, while in cultivated areas the use of newer cultivars of sorghums, maize and annual grasses for silage, and of native grass for hay, has also doubled the number of stock carried as well as greatly improving their condition (FAO, 1983).
The Syrian Arab Republic Rangeland Conservation and Development Project is one of the best known, reviving the ancient “Hema” system of grazing control, introducing Atriplex spp., planting fodder trees and creating lamb-fattening cooperatives (Draz, 1978).
A wide-ranging development project in Morocco has introduced Agropyron elongatum into a Stipa-Artemisia ecosystem in a 300 mm rainfall area (El Gharbaoui, 1984).
Atriplex and Kochia spp. have been introduced in Saudi Arabia (Hassan, 1984).
The legumes Stylosanthes humilis and, to a lesser degree, S. guyanensis have been shown to be capable of being oversown or direct-drilled on sites in the semi-arid zone.
There are also arid and semi-arid rangelands in the temperate zone (U.S.A., South America, South Africa, Australia) which have degenerated under overstocking during the last 100 years, and it is significant that in all of these stock numbers have declined. The most intensively studied are those in the U.S.A., and a recent review of rangeland management and reseeding results comments that “a considerable portion of western rangelands currently support vegetation assemblages greatly below their potential” (Herbel, 1984; Young et al., 1984). Wilson, A. D. (1982), in another review, concludes that “there are no technological improvements in the pipeline that will lead to major productivity gains. The basic restrictions of sparse vegetation, low rainfall and a harsh climate are not subject to technological innovation”. Nevertheless there are instances in all of these countries where improvements are technically possible. To take but one example, Stevens and Villalta (1983), working at high altitudes in Peru, were able to establish ryegrass-clover pastures and to direct-seed lucerne into rangeland, with large increases in the number of sheep carried.
The problem is that the research and development projects, in both developed and developing countries, on which the possibilities of improvement have been demonstrated have had high inputs of technical and economic aid. Whether they can survive in a strictly commercial sense, and whether it is economic to attempt to increase production, is highly dubious. On the more favourable sites it may be so, but for most of the area the problem is to halt further deterioration. The poor income-generating power of the extensive rangelands dictates that any improvements must be ecologically sound and low cost, and should act in a catalytic role to permit better utilisation of the much larger area of unimproved land.
Suggested research priorities include grazing management studies to provide more even grazing pressure, forage conservation, selection of species and cultivars extending growth into the dry period, and integration with cropping systems (Unesco, 1979; Malechek, 1982; Butterworth et al., 1984).
CROP-LIVESTOCK INTEGRATION
Crop production is an occupation of agriculturalists living in villages mostly in the semi-arid and subhumid zones. Traditionally some nomads have included the grazing of crop stubbles in their annual movement, while transhumant pastoralists have also made use of stubbles and crop residues during the dry period. The increasing sedentarisation or semi-sedentarisation of nomads and transhumants, together with movement of agriculturalists with their own livestock in the opposite direction into drier areas, is reducing the areas available for grazing and also increasing the risks of crop failure. The integration of cropping with sheep and goats is primarily in the semi-arid zone but extends into the subhumid zone. Although the cropping regime yields more DM/ha in the form of stubbles, straws and byproducts available for stock, the increase in stock numbers more than offsets this. Nevertheless cropping systems, and the more intensive and settled human existence in villages or permanent abodes, offer an environment much more amenable to technological change and improvement than does the rangeland. The following research developments of recent years are some of the more promising.
One is the breeding of improved cultivars of human food crops (wheat, maize, sorghum, groundnuts, etc.) and research on fertiliser responses, together with an appreciation that in subsistence agriculture fertiliser applied to crops increases yield sufficiently to release land for planting animal forage crops.
Research and demonstration have shown that forage production can be expanded considerably by inter-row sowing of legumes with the cereal, by using improved cultivars of forage species, and especially by replacing the traditional fallow with sown perennial or annual forage crops. Legumes such as Stylosanthes and vetches, and other tropical legumes in higher rainfall areas, are much preferred since their nitrogen level and nutritive value are high and they increase soil nitrogen for the next cereal crop. High yields have been obtained in Cyprus from barley and barley/vetch forage made into hay (Osman and Nersoyan, 1984; Unesco, 1979; FAO, 1983). If a move to greater use of forage crops and more efficient use of grazed stubbles is to be made, then control of the sheep and goats becomes important. Attempts should therefore be made to gain acceptance of the electric fence by herders and cultivators.
Intensive fattening of lambs and kids on locally grown roughage plus concentrates and byproducts has the double advantage of controlled marketing of a superior product and, more importantly, of removing young animals to be fattened from the overgrazed rangeland, thereby reducing the grazing pressure. Lamb-fattening trials have been reported from several countries showing typically that weaned lambs make gains of 100–250 g/day with feed conversion ratios of 6 to 10 according to the energy content of the diet. There is a need to examine what effect this has on the total system.
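To illustrate the feed budget these figures imply (mid-range values are used here purely for illustration), a weaned lamb gaining 200 g/day at a feed conversion ratio of 8 would require about 0.2 kg × 8 = 1.6 kg of feed dry matter per day.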
Some arid and semi-arid areas have water available for irrigation, which is used mainly for cereal or cash crops (cotton), but some is available for forage. Water from the Nile is used in Egypt and Sudan, and underground water in Libya and Saudi Arabia. Extremely high yields of lucerne (Medicago sativa) and Egyptian clover (Trifolium alexandrinum) are obtained and provide a high-protein source for cattle, sheep and goats.
Improving the utilisation of low-quality roughages is also possible. The low protein levels characteristic of tropical forages during the long dry period are a limiting factor in animal intake and performance (Minson, 1982). A considerable amount of research work has been done over the last 20–25 years on the use of urea to improve the voluntary intake of straws and other low-quality roughages by cattle, sheep and goats. Trials conducted in pens have almost universally given good results, but selective grazing by animals in the field has caused some doubts about its application in a grazing context (Coombe, 1981). A more recent discovery is that alkali or ammonia treatment of straw can increase digestibility by 10–15 units, e.g. from 45% to 55–60%. Encouraging results are being obtained from the technique at both the village level (Dolberg et al., 1981) and the factory level (Creek et al., 1984).
A much better understanding of protein requirements of sheep and goats has been developed during the last decade, with recognition of the significance of rumen nondegradable protein. This is of special importance in the tropics (Lindsay, 1984).
The outlook, then, for improvements in pasture and crop production, and in their utilisation by sheep and goats in the cropping areas, is reasonably encouraging. Whether it can keep pace with the increase in human population is another matter. Fortunately much of the research done in developed countries is less sensitive to environment in a cropping activity than in a grazing activity, and is therefore more likely to find application in the cropping scene. The most important fields of research in the cropping areas, as far as sheep and goats are concerned, are likely to be further integration of pastoralism with cropping, conservation and forage production for the dry period, and improvements in the utilisation of straws.
Somewhat similar problems exist in the semi-arid/cold regions of the world, such as in the arc from Turkey to China. Here the winter replaces the dry period of the tropics. In the USSR and northern China, for example, many pastoralists have been semi- or wholly sedentarised, and winter bases exist in villages or have been specially constructed. The growing of forage, partly for grazing but mostly for conservation as hay and silage, is a dominant feature of the system (Demiruren, 1982).
SUBHUMID AND HUMID ZONES
Though the line of demarcation between the semi-arid and subhumid zones is diffuse, there is a distinct trend towards tree-crop agriculture as well as cropping, towards tall-grass pasture species, and towards a greater density of villages, especially where associated with rice culture. This is accompanied by a shift in the relative importance of large and small ruminants. Whereas in developing countries sheep and goats outnumber cattle by nearly 2:1 in the arid and semi-arid zones, cattle outnumber sheep and goats in the subhumid and humid zones. As far as sheep and goats are concerned there are no longer any pastoralists, and nearly all the animals are associated with village and cropping agriculture.
Tropical Pasture Development
Present native pastures, consisting of Hyparrhenia, Andropogon, Themeda and many other species, exist in a savanna landscape derived from forest or woodland. Soils are heavily leached, grazing is primarily with cattle, and fire plays an important part in the grass-scrub-tree balance. The most important development in this area in the last few decades has undoubtedly been the selection, breeding and cultivation of improved cultivars of tropical grasses and legumes. The legume is particularly important because of the low nitrogen status of tropical soils. Though this work has been carried out in several tropical environments, the driving force has been the CSIRO Division of Tropical Pastures in Queensland, Australia (Mannetje, 1982; Minson, 1982). There are now established cattle ranches and cattle projects in most tropical countries with rainfall in excess of 800–1000 mm.
Unfortunately, in relation to sheep and goats, the basic grazing experiments and present projects are almost wholly concerned with cattle. There are good reasons for this cattle dominance, but not for the exclusion of small ruminants. Very high yields of pasture DM are attainable, up to 30–40 t/ha/annum, but control of pasture growth, maintenance of the grass-legume balance and ingress of weeds do present greater problems than with temperate pastures (Mannetje, 1982). Nevertheless the potential of these tropical pasture species for small ruminants, with or without cattle, should be explored. Some trials using sheep and goats have been recorded (Boulton and Norton, 1982; Potts and Humphreys, 1983; Susetyo et al., 1983) but not yet on a farm scale. Some of the improved species, especially legumes such as Stylosanthes humilis and S. guyanensis, Macroptilium and Desmodium spp., are also finding use as forages for establishment on fallows which are grazed by sheep and goats in both the semi-arid and subhumid zones.
Sheep and Goats in the Village
The place of sheep and goats in the village is much the same as described for the arid and semi-arid zones. The scope for increases in numbers and production, especially of goats, has been emphasised (Devendra, 1980; Roy-Smith, 1982; Zulkifli et al., 1980; Wilson, R. T., 1982), involving greater utilisation of the considerable byproducts available from cereal and tree-crop production, the introduction of improved grass and legume species on available land, and recognition of the part which the milking goat, rather than the cow, could play. The opportunity to exploit the production of tropical grasses and legumes is facilitated by the fact that many sheep and goats, especially in the humid zone, are fed under a cut-and-carry system, or are let out during the daytime on a controlled grazing system. There is no doubt that small ruminant production in this zone could be increased, and there are good reasons why it should be. Most of the arguments about the relative merits of cattle, sheep and goats are based on personal factors and on calculations of what ought to be, and there is a need for comparisons to be made on a strictly scientific basis. It is encouraging that recognition of animal production potential in the humid tropics is evidenced by the creation of the joint Australian/Indonesian Centre for animal research and development at Ciawi, Indonesia, in 1975.
Utilisation of Pasture in Plantations
In the humid tropics there are large areas of tree crops such as coconut, rubber and oil palm. They are established in association with a tropical legume cover crop which in time regresses to grasses and weeds. Except in coconut plantations, which are often grazed by cattle, the herbage available is generally not used at all. Attention has been given by the Rubber Research Institute of Malaysia (Tan and Abraham, 1980) to using sheep to consume this herbage and to reduce the high cost of weed control. Promising results are being achieved, confirming that a considerable potential exists for the utilisation of this large feed resource.
Forage Trees and Tree Byproducts
The utilisation of edible trees and shrubs, and of tree byproducts such as leaves, pods and seeds, has received considerable attention in recent years. The characteristics and feed values of tree crops have been reviewed recently by Hutagalung (1981). Particular attention has been given to leguminous trees such as Leucaena, Gliricidia and Tagasaste spp., since the leaves of leguminous trees, and especially L. leucocephala, have protein levels in excess of 20%. Recent evidence (Bamualim et al., 1984) shows that the protein is a good by-pass protein, capable of enhancing intake of low-quality roughages where these form the main diet. Acacia spp., prevalent in many parts of the tropics, can also be valuable in droughts (Snook, 1984).
Leguminous trees and edible shrubs are not confined to the subhumid and humid zones, but can contribute also in the semi-arid zone, and are likely to be utilised much more than in the past.
There is considerable scientific evidence that both pasture and animal production can be increased substantially in all except the arid zone. The major problems are lack of capital to implement improvement and uncertainty about its economic viability. The greatest and quickest responses in pasture and forage production are likely to be in the subhumid zone, and also in cropping systems.
To maintain impetus in research it is suggested that priority be given to:
breeding and selection of pasture species for the semi-arid zone aimed at increasing the length of the growing period, and selection of low-phosphate-demanding legumes for this zone.
studies of the utilisation of tropical pastures by sheep and goats.
pasture conservation for the dry period, forage production on cropping land.
studies to increase the complementarity (or integration) of rangeland and cropping land.
continuation of studies aimed at improving the utilisation of low quality roughages by sheep and goats.
Allden, W. G., 1982. In Nutritional Limits to Animal Production from Pastures (Ed. J. B. Hacker), pp. 45–65. Comm. Agr. Bur.
Bamualim, A., Weston, R. H., Hogan, J. P. and Murray, R. M., 1984. Anim. Prod. in Australia, 15: 255–258.
Boulton, P. and Norton, B. W., 1982. Anim. Prod. in Australia, 14: 641.
Butterworth, M. H., Lambourne, L. J. and Anderson, K., 1984. Proc. 2nd Int. Rangeland Congr., Adelaide. (In Press).
Coombe, J. B.,1981. In Grazing Animals (Ed. F. W. H. Morley), pp. 319–332, Elsevier, Amsterdam.
Coop, I. E., 1982. In Sheep and Goat Production (Ed. I. E. Coop), pp. 351–375, Elsevier, Amsterdam.
Creek, M. J., Barker, J. J. and Hargus, W. A., 1984. World Anim. Rev., 15: 12.
Demiruren, A.,1982. In Sheep and Goat Production (Ed. I. E. Coop), pp. 425–440, Elsevier, Amsterdam.
Devendra, C., 1980. J. Anim. Sci., 51: 461–473.
Dolberg, F., Saadullah, M., Haque, M. and Ahmed, R., 1981. World Anim. Rev., 38: 37–41.
Draz, O., 1978. Proc. 1st Int. Rangeland Congr., Denver, Colorado, U.S.A.
El Gharbaoui, 1984. Proc. 2nd Int. Rangeland Congr., Adelaide. (In Press).
FAO, 1983. Pilot Demonstration Centre for Intensive Pasture, Fodder and Livestock Production. Peoples Republic of China. FAO - CPR/79/001.
FAO, 1984. World Anim. Rev. 51: 2–11.
Harrington, G. N., 1982. In Grazing Animals (Ed. F. W. H. Morley), pp. 181–202, Elsevier, Amsterdam.
Hassan, A. A.,1984. Proc. 2nd Int. Rangeland Congr., Adelaide.(In Press).
Herbel, C. A., 1984. In Developing Strategies for Rangeland Management. pp. 1167–1178, NRC/Nat. Acad. Sci., Westview Press.
Hutagalung, R. I., 1981. In Intensive Animal Production in Developing Countries ( Ed. A. J. Smith and R. G. Gunn) pp. 151–184.
Jahnke, H. E., 1982. Livestock Production Systems and Livestock Development in Tropical Africa Kieler W. b. V. Vauk. 253 pp.
Jain, H. K., 1983. Proc. XIV Int. Grassl. Congr., p. 440.
Lindsay, J. A., 1984. Anim. Prod. in Australia, 15: 114–117.
Malechek, J. C., 1982. Proc. 3rd Int. Conf. on Goat Prod. and Disease, pp. 404–408.
Mannetje, L. 't., 1982. In Nutritional Limits to Animal Production from Pastures (Ed. J. B. Hacker), pp. 67–85. Comm. Agr. Bur.
Milligan, K. G., 1983. Controlled Grazing Systems. Aglink FPP 681, N.Z. Min. Agr., Fish., Wellington.
Minson, D. J., 1982. In Grazing Animals (Ed. F. W. H. Morley), pp. 143–158, Elsevier, Amsterdam.
Morley, F. H. W.,1981. In Grazing Animals (Ed. F. W. H. Morley), pp. 379–400. Elsevier, Amsterdam.
Osman, A. E. and Nersoyan, N., 1984. Proc. 2nd Int. Rangeland Congr., Adelaide. (In Press).
Parsons, A. J., Leafe, E. L., Collett, B., Penning, P. D. and Lewis, J., 1983. J. Appl. Ecology, 20: 127–139.
Potts, A. and Humphreys, L. R., 1983. J. Agr. Sci., 101: 1–7.
Roy-Smith, F., 1982. In Intensive Animal Production in Developing Countries (Ed. A. J. Smith and R. G. Gunn), pp. 387–394. Brit. Soc. Anim. Prod. Pub. No. 4.
Snook, L. C., 1984. Anim. Prod. in Australia, 15: 589–592.
Stevens, E. J. and Villalta, P., 1983. Proc. 14th Int. Grassl. Congr., p. 176.
Susetyo, S., Whiteman, P. C. and Humphreys, L. R., 1983. Proc. XIV Int. Grassl. Congr., p. 412.
Tan, K. H. and Abraham, P. D., 1980. Proc. 1st Asian Australasian Anim. Sci. Congr., Malaysian Soc. Anim. Prod.
Unesco/UNEP/FAO, 1979. Tropical Grazing Land Ecosystems - a State of Knowledge Report. Unesco, Paris. 655 pp.
Wilson, A. D., 1982. In Sheep and Goat Production (Ed. I. E. Coop), pp. 309–330. Elsevier, Amsterdam
Wilson, R. T., 1982. Proc. 3rd Int. Conf. on Goat Prod. and Disease, pp. 186–195.
Young, J. A., Evans, R. A. and Eckert, Jnr. R. E., 1984. In Developing Strategies for Rangeland Management, pp. 1259–1300. NRC/Nat. Acad. Sci., Westview Press.
Zulkifli, S. D., Zulficar and Rachmat Setiadi, 1980. Proc. 2nd Ruminant Seminar, Ciawi, Indonesia, pp. 42–54.
|
James Ford Rhodes (1848–1927). History of the Civil War, 1861–1865. 1917.
responsibility that was not clearly his, probably prevented him from urging his President to negotiate a peace; but, if the memories of private conversation may be believed, he had lost all hope of success. It was Jefferson Davis who in this matter imposed his will on all his subordinates and it was he more than anybody else who stood in the way of an attempt to secure favorable terms for the South in a reconstruction of the Union.
If Davis, Lee and the Confederate Congress could have made up their minds to sue for peace, the contemporaneous occurrences in Washington reveal the magnanimous spirit in which they would have been met by Abraham Lincoln.
Two days after the Hampton Roads Conference, on Sunday evening, February 5, the President called his Cabinet together to consult them in regard to a message he proposed to send recommending that Congress empower him to pay to the eleven slave States of the Southern Confederacy then in arms against the Union and to the five Union slave States four hundred million dollars as compensation for their slaves provided that all resistance to the national authority should cease on April first next. The Cabinet unanimously disapproved this project and Lincoln with a deep sigh said, "You are all opposed to me and I will not send the message." Such a proposal to the Southern Confederacy, tottering to her fall, only sixty-three days before Lee's surrender to Grant would have shown magnanimous foresight. Had the Confederate States accepted it, there would have been an immediate fraternal union after the Civil War. Had they rejected it, the President and Congress would have made a noble record. The offer, however, was too wise and too generous to be widely approved of men; Lincoln of all those in authority had reached a moral height where he must dwell alone and impotent. But when reflecting on the
|
Saturday, April 7, 2012 1:24 PM
tell me.
- Edited by TEJ2911 Saturday, April 7, 2012 1:26 PM
All replies
Saturday, April 7, 2012 4:23 PM
You need at least one class for a program to run. Creating a new project will automatically create the class for you. If you have older code, create a new project and drop the old code into the new project. A console application will automatically generate a Main function. Take your old Main function code and paste it into the new Main function. If you have any additional functions, just place the code after the Main function, keeping the code inside the same class as Main.
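For illustration, here is a minimal sketch of the layout described above, assuming the default console-project template; the class name Program and the helper AddNumbers are placeholder names, not taken from the original thread:

using System;

// The class that the new console project generates for you.
class Program
{
    // Entry point: paste the body of your old main function here.
    static void Main(string[] args)
    {
        Console.WriteLine("Sum: " + AddNumbers(2, 3));
    }

    // Any additional functions go after Main, inside the same class.
    static int AddNumbers(int a, int b)
    {
        return a + b;
    }
}

Compiling this as a console application runs Main automatically; the point is simply that every method, including Main, has to live inside some class.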
Saturday, April 7, 2012 5:42 PM
No, you cannot execute it. C# needs at least one system-defined or user-defined class; that is what an object-oriented programming language requires.
I am a bit curious to know what you want to achieve.
Sunday, April 8, 2012 7:04 AM
Do you mean you don't require the .NET Framework to run your program?
Then you would have to use C++ to create your project.
|
Vegetative system is simple, effective
Richard Baumert doesn’t have thousands of pigs or calves on feed, but his feedlots drained the wrong way. Handling manure from his hog feeding platforms and cattle yards in a cost-effective way wouldn’t normally be considered an easy task. But thanks to a system developed by Christopher Henry, University of Nebraska Extension engineer, and engineer technician Jason Gross, Baumert’s situation was remedied without breaking the bank.
“It solved the problem of water running through our feedyards,” says Baumert.
Instead of building an extensive and expensive holding lagoon, Baumert, Henry and Gross developed a small sediment basin below the yards that collects rainwater, snowmelt and effluent from his pens. There is also a diversion that keeps clean water drained from the farmstead from mixing with pen runoff.
Liquids from the basin are pumped a short distance to a hillside and dispersed over a 3-acre vegetative treatment area by K-Line irrigation tubing and 16 irrigation pods. “This was a way to divert the water and collect it before it reached a nearby creek,” says Baumert.
At a glance
• Baumert’s VTS sprinkler system is one of only five like it in the country.
• He pumps fluids from his sediment basin to a vegetative treatment area.
• He uses 16 K-Line irrigation pods to distribute the fluids.
In Baumert’s system, funded in part through a grant from the Nebraska Environmental Trust and the Nebraska Department of Environmental Quality Section 319 program, liquid effluent is never stored in the basin for long. Within 36 hours it’s pumped over the mixed-grass area using a 10-hp Kohler diesel engine. A floating filter and hose, much like the ones used by local fire departments to disperse standing water, draws water into the pump, which moves it to the K-Line irrigation setup on a hillside nearby. There are several ways to disperse the effluent in such a vegetative treatment system, or VTS, according to Henry, but for Baumert’s operation, this seemed to be the best solution.
“Richard did the project voluntarily, that is with the blessing of the Nebraska Department of Environmental Quality, but not as a requirement to continue feeding cattle,” Henry says. “Richard is a forward-thinking producer. He saw that the runoff from his lots was a problem and did something about it without waiting until a NDEQ inspector required him to put controls in place.”
VTS consists of three distinctive components, including a sediment basin, vegetative treatment area composed of perennial vegetation used for treatment of runoff, and a water distribution system. Solids from the sediment basin are removed annually and spread on crop fields.
“A VTS is a permanent installation dedicated to managing the runoff from the open lot system,” Henry says. “Traditionally, gravity transfer has been used to distribute runoff to VTAs, such as gated pipe or surface flow.
“The advantage of the sprinkler concept is that we can locate VTAs on sites where we cannot locate a gravity or even a pump VTS because of soil type and topography,” he says. Baumert’s sprinkler system is one of only five like it in the country. “We are building three more this summer,” says Henry.
Sprinkler VTS systems are relatively inexpensive compared to other options of handling manure runoff from open lots. Through the work of Henry and Gross, the cost has been reduced by 65% over the past seven years, to just under $70 per irrigation head.
Since installation last fall, Baumert says he has pumped out his basin at least four times. “I was surprised at how simple it is,” he says. In Baumert’s case, simple systems like his VTS are the most effective.
For more information about installation of a VTS as a demonstration project or as a cost-eligible practice under the Natural Resources Conservation Service’s EQIP, contact UNL Extension engineer Christopher Henry at 402-472-6529 or learn more at afo.unl.edu.
AHEAD OF THE GAME: Richard Baumert voluntarily installed a vegetative treatment area at his hog and cattle yards to prevent runoff from reaching a nearby stream.
POD IS PIVOTAL: Liquid effluent is dispersed from a sediment basin through 16 K-Line irrigation pods spread across a 3-acre vegetative treatment area.
This article published in the July, 2010 edition of NEBRASKA FARMER.
All rights reserved. Copyright Farm Progress Cos. 2010.
|
Lawn and Garden Chemicals
The Problem with Pesticides, Herbicides and Fertilizers
By Eric Vinje, Planet Natural
At one time garden chemicals were championed as the panacea for agricultural shortages and deficits. Pesticides, it was said, were the technological answer to dealing with insects, weeds and other intruders that nature sent the farmer’s way. Herbicides increased yields by decreasing weeds. And chemicals kept soils fertile, making for more vigorous, more productive crops. Over time, we’ve learned that these claims are exaggerated, if not completely false.
But these synthetic products have a down-side, one that threatens the environment and the very future of food production. Chemical fertilizers, herbicides and pesticides poison our waters, our soils, other living creatures and our own bodies. Their effectiveness, touted by big budget, corporate-driven marketing plans, isn’t all it’s cracked up to be. In light of these trade-offs, and the fact that healthy and potentially more effective organic alternatives exist, why should we risk our soils, our water and the health of our children?
It’s argued that agricultural chemicals are needed if we’re to supply the world’s exploding population with food. But it’s a false argument, one that ignores the fact that the world already grows more than enough food to feed its billions (see Frances Moore Lappe’s World Hunger: 12 Myths and Food First: Beyond the Myth of Scarcity for a discussion of global food-distribution problems and solutions). Eliminating synthetic fertilizers, herbicides and pesticides to the greatest degree possible and replacing them with practical and effective organic methods will not only benefit the current generation but generations to come. While the billions of tons of chemicals applied to the earth largely serve commercial agriculture, eliminating chemicals from our own lawns and gardens can also make a difference, directly benefiting our families and communities.
While most modern herbicides are designed to kill only plants and have little or no toxicity to humans, many still have extreme consequences in the environment, changing habitats in ways that affect insects and wildlife. These consequences extend to water courses where they may kill beneficial aquatic plants and fish.
Some herbicides remain toxic to animals as well as plants. One study showed that dogs that play in herbicide-treated yards have three times the risk of cancer. A Swedish study linked herbicides with non-Hodgkin’s lymphoma in humans.
Paraquat, one of the most widely used herbicides in the world, is so toxic it’s frequently used in third-world countries as a means of suicide. Large, unintentional exposure to paraquat almost always leads to death. Smaller exposures, usually through inhalation, have been linked to lung damage, heart and kidney failure, Parkinson’s disease and eye damage. Controversy exists around the use of herbicides more commonly used by home gardeners, such as 2,4-D and Roundup. A manufacturer-supported review of studies found Roundup safe for use around humans, while anti-herbicide groups cite studies finding that it affects human embryonic, placental and umbilical cells in vitro, as well as testosterone development in mice. There was an outcry in 2003 when the Environmental Protection Agency decided not to limit the sale of the weed-killer atrazine amid charges that the chemical industry had undue influence over the decision. The EPA’s own research had shown that atrazine was toxic to some water-borne species at extremely low parts-per-billion concentrations. (A few years ago France ordered the withdrawal of atrazine and related weedkillers, saying the chemicals were building up in water supplies and threatening human health.)
At Planet Natural, we offer a variety of quality organic weed control products, including barriers, earth-friendly herbicides and weeding tools. All are safe for you, your family and your pets.
The overuse of chemical or inorganic fertilizers has serious consequences including the leaching of nitrates into the ground water supply and the introduction of certain contaminants, including cadmium, into the soils. Fertilizer run-off into ponds, lakes and streams over stimulates algae growth, suffocating other aquatic plants, invertebrates and fish. Toxic fertilizers made from industrial waste can bring mercury, lead and arsenic to our soil and water supplies. The last few years have seen efforts to control both nitrogen applications and the use of toxic fertilizers in the U.S. and other western nations.
Tip: Slow release, organic lawn fertilizer benefits your soil while providing nutrients for your grass. Not only does it improve soil structure, it encourages beneficial soil microbes that attack pests and diseases.
By far, the most damage to the environment and the biggest threat to our health come from pesticides. It is becoming harder and harder to justify their widespread use. We now know that pesticides aren’t as effective as claimed and that they cause more harm than good. And the pests they’re reputed to eliminate? Maybe they aren’t so bad after all, especially if they are managed intelligently.
Farmers often accept that using synthetic fertilizers creates a trade-off — high yields of plants with a lot more insect problems. The same companies that manufacture fertilizers have an answer: buy their pesticides and their seeds that are genetically engineered to withstand the pesticides.
Pesticides tend to create a vicious circle: the more they are used, the more they are needed. While the cost of pesticides increases, their effectiveness decreases, meaning that more and more chemicals are needed to get the job done. As pesticides get more toxic, they are also getting less effective at reducing the pest problem. Could the problem be with the pesticides, not the pests?
Recently Ontario, Canada banned lawn and garden pesticides for home use. There is a similar ban in Quebec. As more comes to light about the dangers of pesticides, the more people are realizing that they just don’t have a place in our world.
Tip: Using toxic pesticides upsets your garden’s natural balance, harming beneficial insects as well as pests. At Planet Natural, all of our natural pest control products are safe for you, your family and the environment. And they’re effective.
Secondary Pest Problems
Pesticides do not discriminate; they kill all insects in their path. This means the beneficial insects, such as the ones that prey on harmful insects, are killed right along with the ones you aim to get rid of. Pollinators also fall victim to pesticides.
When every original insect is gone, the habitat is wide open for infiltration by other insects. This new, blank-slate habitat has no predators, so the secondary insect population explodes and you wind up with a bigger insect problem than you had in the first place. Now you need more and stronger pesticides.
Note: Less than 1% of the insects in the world are considered pests. The other 99% play an integral role in the ecosystem.
Pesticides are great for one thing: they help pests develop resistance and create stronger bugs. Every time a field or garden is sprayed, a few insects that happen to be more resistant are likely to survive. These survivors mate and produce offspring that are even more resistant to the pesticide. Because the life cycle of an insect is so short, and each generation is more resistant than the last, it doesn’t take long for a super bug to develop. The only recourse is to create a stronger and more toxic pesticide.
There are many insects around today that are resistant to any insecticide. We’ve created monsters we can’t kill.
When it comes down to it, pesticide use is very expensive. There are the original costs of buying the poison and sprayer, plus all the protective gear you need to wear when using it (Farmers spend $2.4 billion each year on insecticides and fungicides; see “Are Pests the Problem — or Pesticides?“).
There are other costs, as well. A new load of pesticides has to be trucked in for the secondary pest outbreaks and there is a loss in revenue when the pesticides stop working. We all pay for the government to regulate pesticides and for the legal battles over safety and the environment. The manufacture of these chemicals requires vast quantities of fossil fuels at a time when those costs are at a premium.
Pesticides don’t stay where they are put. They soak into the soil, contaminate groundwater and surface streams and drift through the air. The pesticides you use in your garden can end up in lakes and ponds, in your neighbor’s yard and in your house. Many agricultural pesticides are proven neurotoxins as well as likely endocrine disruptors and carcinogens.
No one can forget the result of DDT, which is still used in India, North Korea and a few other countries for malaria control. The widespread agricultural use of DDT threatened many birds, including the bald eagle, with extinction. It is highly toxic to fish and shellfish. Mammals were also adversely affected. Cats were especially affected by DDT, and its use often resulted in explosions of rodent populations where it was applied because predator numbers had been decimated. In humans, DDT is suspected of causing many cancers, especially breast cancer, and of adversely affecting reproduction. Though not acutely toxic (its slow-building effects accumulate over the years; it is classified as having “chronic toxicity”), DDT persists and accumulates in the environment, collecting in human tissue until it reaches damaging levels. Even its effectiveness against mosquitoes has diminished as the insects have gradually developed resistance to the chemical. Yet its toxicity to humans lives on. A 2002 study in the U.S. found at least half its subjects still had detectable levels of DDT. The U.S. ban on DDT in 1972 is largely credited with saving the bald eagle.
DDT isn’t the only pesticide that causes great damage to the ecosystem. Groundwater contamination, death and poisoning of domestic pets and livestock, loss of honeybees and other pollinators, deformed frogs, bird deaths and fishery losses are all at least partially the result of continued pesticide use.
In California, the pesticides carbofuran (used on alfalfa, grapes and rice) and diazinon are responsible for the majority of bird kills, affecting many species of songbirds, waterfowl and raptors. Controlled studies have shown that when carbofuran is applied to crops, as many as 17 birds die for every five acres treated.
Pesticides, along with fertilizers, also produce dead zones in estuaries and bays, areas starved of oxygen and depleted of marine life.
Human Health Costs
It’s not just the non-human realm that is feeling the brunt of pesticide use. Consider the legacy of DDT. It wasn’t until 2001 that the link between DDT and premature births and low birth weights in humans born in the 1950s and ’60s was discovered. The low amounts of DDT used for malaria control have been shown to cause miscarriage and premature birth, reduced sperm counts, inability to breast feed and increases in infant deaths.
Post-DDT pesticides continue to be suspect in everything from cancer to mental retardation. Recently, an Australian toxicologist reported in the journal Science of the Total Environment that pesticides may be responsible for some of the intellectual development problems in children that were previously associated with lead.
Studies have found a link between pesticides and Parkinson’s Disease, autism and child cancers, neuroblastoma, leukemia, chronic infections, bronchitis, asthma, sinusitis, infertility, neurological disorders, aggression and depression (see “The Chlordane Pesticide Problem“).
Clearly there are a lot of problems with the use of pesticides, herbicides and inorganic fertilizers, but is there a solution? Yes, go organic! The Natural Resources Defense Council (NRDC) found that U.S. farmers’ reliance on synthetic fertilizers and insecticides may be based on an outdated understanding of plant chemistry, and that organic gardening methods can be validated by hard science.
Scientists found that corn borer moths laid 18 times as many eggs in corn grown in conventional (fertilized and pesticide-rich) soil as corn grown in organic soil. It appears that the corn grown with large doses of nitrogen, phosphorous and other elements found in synthetic fertilizers produced more sugar and amino acids — which the insects preferred.
Growing organically doesn’t just mean not using synthetic fertilizers and pesticides; it means creating a healthy environment where plants can grow strong and where harmful insects and weeds are kept in balance by beneficial insects and desirable plants.
To create a healthy ecosystem within the garden or farm, start with healthy soil. Improve garden soil by adding organic matter (such as compost), balancing the soil pH and using organic fertilizers when needed. Then rotate your crops annually. Make sure your plants get the right amount of water and sunlight, and grow what’s best adapted to your region.
Consider the Better Pest Management and Integrated Pest Management systems, techniques that look past short-term gains (which are often followed by quick losses) to take in the long-term picture for environmental and human health.
It might seem counterintuitive at first, but attracting or releasing certain insects into the garden can actually help control other insects. Beneficial insects prey on some harmful garden pests, reducing their population numbers. Predators, parasites and pollinators can all increase the health of your garden.
|