NASA / JPL-Caltech / STScI / ESA
This image of a pair of colliding galaxies called NGC 6240 shows them in a rare, short-lived phase of their evolution just before they merge into a single, larger galaxy.
Nothing draws a crowd like a spectacular crash - whether it's a NASCAR auto race or a galactic collision. Over the past month, Internet users voted for a cosmic smash-up as their favorite target for a future close-up from the Hubble Space Telescope, and this week you can feast your eyes on two fantastic images of galaxies in gridlock.
The first "train wreck" comes from NASA's Spitzer Space Telescope. This is a biggie: Two huge galaxies, each anchored by a central black hole that's millions of times as massive as the sun, are moving toward an imminent pile-up. Exactly how imminent? Millions of years after the scene captured in this image - a time span that's a mere blink of the eye on the cosmic scale.
"One of the most exciting things about the image is that this object is unique," Stephanie Bush of the Harvard-Smithsonian Center for Astrophysics, says in a news release about the observations. "Merging is a quick process, especially when you get to the train wreck that is happening. There just aren't many galactic mergers at this stage in the nearby universe."
Spitzer's image of NGC 6240, which is 400 million light-years away in the constellation Ophiuchus, highlights the bursts of infrared radiation as the dust and gas from the two galaxies slam together. All that pressure creates new generations of hot stars, blazing away in infrared wavelengths even though the radiation in visible wavelengths is obscured by dust clouds. Because of this phenomenon, these starry swirls are known as luminous infrared galaxies.
In the news release, the Spitzer science team points to the streams of stars being ripped off the galaxies - "tidal tails" that extend into space in all directions. And this is just the warmup act: Bush and her colleagues expect the galactic black holes to hit head-on. That would upgrade NGC 6240's status to that of an ultra-luminous infrared galaxy, thousands of times as bright in infrared as our own Milky Way.
The findings are detailed in The Astrophysical Journal. In addition to Bush, the paper's co-authors include Zhong Wang, Margarita Karovska and Giovanni Fazio, all of the Harvard-Smithsonian Center for Astrophysics.
This week's other galactic crash was witnessed by the European Southern Observatory's Very Large Telescope in Chile. Two galaxies are piling into each other 70 million light-years away in the constellation Libra, and just as in the case of NGC 6240, the clashing clouds of gas and dust are sparking waves of stellar fireworks.
This color composite image from the ESO Very Large Telescope in Chile shows Arp 261.
These galaxies, collectively known as Arp 261, aren't as big as the monsters in NGC 6240. They're on the scale of dwarf galaxies, similar to the Magellanic Clouds orbiting the Milky Way. In this week's image advisory, the ESO says the focus of research in this picture actually isn't the wide-screen view of smashing galaxies, but a detailed look at an unusually long-lasting, X-ray-emitting supernova. This image adds little white bars to highlight the location of the supernova.
The picture also includes other objects at a wide range of distances. If you click on a higher-resolution view, you'll be able to make out a sprinkling of background galaxies on the right side of the picture. Those galaxies may be 50 to 100 times farther away than Arp 261, the ESO says.
Toward the top left corner of the picture, you can see two red-green-blue streaks. Those are two small asteroids in our solar system's main asteroid belt. The streaks are multicolored because the ESO's picture was taken through different color filters - and the asteroids were moving through the telescope field even as the exposures were switched from one filter to the next.
Correction for 3:30 p.m. ET March 18: I fixed a bad link to the Saturn transit story ... Sorry about that! After reading all the perceptive comments below, I've also edited the item to straighten something out about the timing of events at NGC 6240. We will likely see an even more spectacular pile-up there millions of years from now, but because the galaxies are so distant, and because the speed of light is finite, that phase of the pile-up will have happened hundreds of millions of years earlier.
Since its origin, the Scouting program has been an educational experience concerned with values. In 1910, the first activities for Scouts were designed to build character, physical fitness, practical skills, and service. These elements were part of the original Cub Scout program and continue to be part of Cub Scouting today.
Character development should extend into every aspect of a boy's life. Character development should also extend into every aspect of Cub Scouting. Cub Scout leaders should strive to use Cub Scouting's 12 core values throughout all elements of the program—service projects, ceremonies, games, skits, songs, crafts, and all the other activities enjoyed at den and pack meetings.
Cub Scouting's 12 Core Values
- Citizenship: Contributing service and showing responsibility to local, state, and national communities.
- Compassion: Being kind and considerate, and showing concern for the well-being of others.
- Cooperation: Being helpful and working together with others toward a common goal.
- Courage: Being brave and doing what is right regardless of our fears, the difficulties, or the consequences.
- Faith: Having inner strength and confidence based on our trust in God.
- Health and Fitness: Being personally committed to keeping our minds and bodies clean and fit.
- Honesty: Telling the truth and being worthy of trust.
- Perseverance: Sticking with something and not giving up, even if it is difficult.
- Positive Attitude: Being cheerful and setting our minds to look for and find the best in all situations.
- Resourcefulness: Using human and other resources to their fullest.
- Respect: Showing regard for the worth of something or someone.
- Responsibility: Fulfilling our duty to God, country, other people, and ourselves.
12 Core Values and the Scout Law
Boy Scouts learn and strive to live by the Scout Law:
A Scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent.
Many of the core values of Cub Scouting relate directly to points of the Scout Law.
Character can be defined as the collection of core values held by an individual that leads to moral commitment and action.
Character development should challenge Cub Scouts to experience core values
in six general areas: God, world, country, community, family, and self.
Character is "values in action."
The goals of the Cub Scout leader are
- to seek out and maximize the many opportunities to incorporate character development
- to convince the young Cub Scout that character is important to the individual, to his family, community, country, world, and God
Character development should not be viewed as something done occasionally as part of a separate program, or as part of only one area of life. For in reality, character development is a part of everything a Cub Scout does. Character development lessons can be found in every aspect of the Cub Scouting experience.
When it comes to developing character, the complete person must be considered. Character development involves at least three critical areas:
- Know (thought)
- Commit (feeling)
- Practice (behavior)
In Cub Scouting, addressing these three critical areas and relating them to values is referred to as Character Connections.
Character Connections asks the Cub Scout to:
Know: Character development includes moral knowledge—both awareness and reasoning. For example, children must understand what honesty means and they must be able to reason about and interpret each situation, and then decide how to apply the principles of honesty.
What do I think or know about the core value? How does the context of this situation affect this core value? What are some historical, literary, or religious examples representing the core value?
Commit: Character development includes attention to moral motivation. Children must be committed to doing what they know is right. They must be able to understand the perspectives of others, to consider how others feel, and to develop an active moral conscience.
Why is this core value important? What makes living out this core value difficult? What will it take to live out this core value?
Practice: Character development includes the development of moral habits through guided practice. Children need opportunities to practice the social and emotional skills necessary for doing what is right but difficult, and to experience the core values in their lives.
How can I act according to this core value? How do I live out this core value? How can I practice this value at school, at home, and with my friends?
To make Character Connections an integral part of Cub Scouting, the 12 core values are being integrated throughout the boys' handbooks and advancement program. Program support for character development can be found in Cub Scout Program Helps, in the Cub Scout Leader Book, and at your monthly roundtable meetings.
- Core values are the basis of good character development.
- Character must be broadly defined to include thinking, feeling, and behavior.
- Core values should be promoted throughout all phases of life.
Darkness is our natural state. During the 200,000 years of human existence our species has only known electric light for .06% of that time. Yes there were candles and torches, but the thought of just flipping a switch to turn on all the lights in a room was inconceivable. Today we recognize the shapes of cities based on the lights they shine at our satellites.
Here in Europe electricity is like oxygen. It is everywhere. We would never even consider paying for electricity at an airport or cafe to charge the batteries of our computers and mobile phones. But what is essential to remember is that this year, 2009, exactly 130 years after the invention of the electric light bulb, the technology still hasn’t spread to all corners of the globe.
These students are studying under lamps at their city’s airport because it is the only place with stable electricity. Major African cities like Monrovia in Liberia, are powered completely by generators.
The Centralization of Electricity
Thomas Edison’s electric light bulb was really an ingenious invention, but from a business point of view it had a problem. You couldn’t use it without electricity. And so, to sell his electric lamps Edison realized that he would need to distribute electricity to the homes, offices, and warehouses that wanted electric light.
Pearl Street Station was Edison’s first power generating station. To the right you can see the small neighborhood in Manhattan that it was able to provide electricity to.
Edison settled on direct current to transmit electricity from power stations to nearby businesses and homes. The problem with direct current is that it is a very inefficient way of transmitting electricity over long distances. The direct current model would require a power station in every neighborhood. Still, Edison thought he had come up with a master plan to provide electricity and electric light to the whole world. After all, who could challenge America’s greatest inventor?
Nikola Tesla is one of Europe’s and one of science’s most intriguing characters. A Serbian, Tesla was born in the tiny village of Smiljan, which today still has a population of less than 500 and is now part of Croatia. (As Danica Radovanovic recently reminded me on Facebook, Tesla was one of many brilliant Serbian scientists to leave his country for elsewhere.) Tesla questioned Edison’s use of direct current to transmit electricity and instead proposed alternating current, which is far more efficient as it travels over long distances.
What followed was the war of currents: Tesla and Edison went head to head. Europe’s greatest inventor of the time versus America’s greatest inventor of the time. And who lost? This poor elephant named Topsy.
Tesla’s system of alternating current raises the voltage to a very high level as it travels across distances. Edison electrocuted Topsy as a scare tactic to show the public what would happen if they touched a high voltage cable. But what sealed the deal was the Niagara Falls hydroelectric power project, the biggest power generator of the time. In 1893, the Niagara Falls Power Company hired George Westinghouse, whose system was built on Tesla’s alternating-current patents, to design a system to generate alternating current and transmit it throughout New York.
Had alternating current not won, then projects like China’s Three Gorges Dam, which will generate ten times as much electricity as Niagara Falls, would never exist. Alternating current led to the centralization of electricity. One of Edison’s arguments against alternating current was that a few major power generators are much more vulnerable than many spread all around the world. That remains true today, but the economics of centralized energy production and the allure of cheap energy won out in the end.
The Centralization of Computing
I wanted to briefly go over the history of electrification because its development so closely parallels what we are seeing today in the computing industry. This is not my own observation: it was first made in a book by Nicholas Carr called The Big Switch and the analogy is now commonly used by many when they explain the concept of cloud computing.
Phase 1: Mainframe Computing
This is the computer that the Internal Revenue Service used to process tax returns in the 1960’s. Mainframe computers at the time typically cost between $500,000 – $1 million. They were only available to programmers and researchers who had to wait hours if not days to use them because there was so much computing to be done and so few computers to do it.
Phase 2: Personal Computing
The second phase of computing began in 1977 with the Apple II computer, which brought computing into the home and personal office for the first time. Data was stored on audio cassette tapes. The first Apple II cost $1,300 with 4 KB of RAM and $2,600 with 48 KB of RAM. Today you can buy a gigabyte of RAM for $30 and a terabyte hard drive for $70. It had a one-megahertz processor, and the above ad from a 1977 issue of Scientific American touted its high-resolution graphics, by which it meant a 280-by-192-pixel display with four colors: black, white, violet, and green.
This is Gary. From the tags on the Flickr picture I assume he works in the IT department of some startup company. In many ways Gary’s position exemplifies the era of personal computing. Gary is in charge of maintaining a network of computers for just one company. He goes around to each computer and installs new versions of software. He creates new email accounts for new employees. He answers questions as they come up. And he backs up all the data to make sure it isn’t lost in case of a hard drive failure. (Increasingly Gary is able to nap while online services do that work for him.)
Most of us still operate in the era of personal computing. We create content on our laptop and desktop computers. We store our information on our individual hard drives. And when we share documents it is usually with email.
Phase 3: Cloud Computing
The greatest evidence that we are entering the third chapter of computing is that laptop computers are becoming less powerful, not more. This is because today you don’t need a powerful computer; all you need is an internet connection. Google is even creating a free and open source operating system specifically targeted for cheap netbooks like the one above. In fact, increasingly we are leaving our laptops at home because smart phones are proving sufficient for most of our daily tasks.
This week the big talk of town is a rumored Apple tablet, which might be announced tomorrow, though much more likely sometime during the first half of 2010. There have, in fact, been rumors of an Apple tablet computer for years, but only now does it make sense to release such a powerful product with so few features. A tablet PC does little except connect to the internet. But today all you need is a keyboard, screen, camera, and microphone to connect to the cloud. Word processing, image editing, spreadsheets, video editing, audio recording: all the applications are available online.
So, what do we mean when we say “the cloud”? In many ways I think that the term is an unhelpful abstraction which masks the actual shift in infrastructure that is taking place. Increasingly computing and data storage are not taking place on our own computers, but rather at massive data centers like this one:
The data center in the above video has about 40,000 computers. Here is just one:
Machines like this process and store our emails, the photos we publish online, our blog posts, the videos we upload to YouTube, and even the voicemails we listen to from our mobile phones. DataCenterMap.org has a map-based directory of major data centers based around the world. When we speak of “the cloud” what we’re really referring to are these massive data centers, the thousands of computers they contain, and the countless software applications they make available to us through our browsers. And this visualization from New Scientist shows us how those applications interact with our laptops, cell phones, and tablets from the massive data centers that make up the cloud.
2009 is the 40th anniversary of the Internet, the 20th anniversary of the World Wide Web, and the 5th anniversary of what we call Web 2.0. The change is accelerating. Just look at these maps of internet users from 2002, 2004, 2006, and 2008. (Teddy points out that African internet users aren’t included on any of the maps.) Last year, for the first time ever, the United States was not the nation with the largest number of internet users. That now belongs to China. And, unless something drastic happens, China will continue to have the largest online presence for the rest of our lives.
Internet World Statistics estimates that there are over 1.5 billion internet users. Last year Google announced that it had indexed its trillionth web page. Researchers at Microsoft estimate that “if you spent just one minute reading every website in existence, you’d be kept busy for 31,000 years. Without any sleep.” (“That explains a lot,” says Georgia.) Some researchers estimate that global internet usage already makes up five percent of the world’s energy consumption.
Though this video has stirred up lots of debate about the statistics it cites (check out the 130+ comments on the post), its basic premise – that the internet has had a tremendous impact on human society – cannot be denied.
The Centralization of Intelligence
We have gone over the history of electrification and computing. I’d like to conclude with a brief history of collective intelligence. In the 17th century Thomas Hobbes was a controversial figure in part because he believed that intelligence came not from an all powerful god, but from each individual; and that if we could somehow bring each individual’s intelligence together to create a collective intelligence then we could shape society for the better. In 1938, HG Wells published World Brain, a collection of essays on the future organization of knowledge and education. Two years earlier the American Library Association had endorsed microfilm as a way to archive and store books, newspapers, manuscripts, and periodicals. Wells, inspired by advances in microfilm, imagined “a mental clearing house for the mind, a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared.”
Even before Wells published World Brain, Belgian author and peace activist Paul Otlet was already envisioning his own pre-cursor to Google Books:
So how has collective intelligence changed in the era of cloud computing? What do we mean when we say “cloud intelligence?” For one thing, our relationship with software has become much more symbiotic. We depend on cloud software to make sense of the information around us; and cloud software depends on us to help it make sense of the ever-increasing amount of information we upload to the internet. Take Google Flu Trends, for instance, which can detect flu outbreaks faster than the Centers for Disease Control and Prevention by monitoring searches for symptoms. Another example of cloud intelligence is reCAPTCHA, which has enlisted an army of millions of unknowing volunteers to help digitize books and the complete archive of the New York Times:
(If you want to leave a comment on this post you’re forced to join Von Ahn‘s mission.)
Another example of cloud intelligence is found in the active Astrometry group on Flickr. There are over 1,000 amateur astronomers in the group who help scientists keep an eye on the galaxies around us by regularly posting the photos they take from their telescopes. When members of the group publish a photograph of the nighttime sky, an automated computer application scans their photos for recognizable stars, planets, and nebulae and labels them using Flickr’s notes function. Each photographer gains more information about the photograph that he or she took, and Astrometry.net gets a new image of the nighttime sky to add to its ever-growing database. In an interview on the Flickr Developer Blog, project leader Christopher Stumm says that Astrometry.net is currently “using images from around the web to calculate the path comet Holmes took through the sky.”
Chris Messina from San Francisco brings us another example of cloud intelligence from a recent shopping trip to OfficeMax where he and his girlfriend were hoping to buy some dry erase boards for their home office.
The shopping trip wasn’t a success. Most of the boards were poor quality, and when they did eventually find a product that suited their needs, it was damaged. Rather than moving on to the next office supply store, Chris pulled out his iPhone and used the Amazon iPhone application to take a picture of the dry erase board they wanted to buy. The picture was then uploaded to one of Amazon’s massive data centers, where it was posted on Mechanical Turk, a website that lists “human intelligence tasks” that pay anywhere from one penny to five dollars. Jeff, for example, will pay you one penny for every sermon time you find listed on a church website. Amazon is also willing to pay one penny to anyone who will look at the photo Chris uploaded from his iPhone application and identify that same product on the Amazon website. Chris will never know who did the work for him, but within minutes he received a message from Amazon with a link to the product he was looking for.
I would imagine that just about every tourist who has been to London has taken a picture of Big Ben. It must be the most photographed clock tower in the world. Ten years ago we put those photographs in a photo album to share with our family and friends. More recently we’ve become accustomed to sharing them on Picasa, Flickr, or Twitter. But now the tourist snapshots we take are helping create a 3D model of the world:
If the cloud is the world brain, then the cameras and microphones on our mobile phones and laptops are its eyes and ears. How many times have you been to a cafe or bar and heard a song that you liked but didn’t recognize? Today if you help the cloud listen, it will provide you with the information. Shazam identifies the song you are listening to from a short audio sample captured by your mobile phone. In turn, Shazam is able to track the most popular songs as they come out.
Google’s crowdsourced traffic application also highlights the symbiotic relationship between human intelligence and cloud intelligence. When you start the application Google will show you traffic conditions on many of the streets around you. In turn, you agree to let Google track your speed as you travel; thus updating their maps with more real-time traffic data.
Just five years ago it would have cost at least hundreds of thousands of dollars to produce a video with actors on location all around the world. Also, until very recently there was only one man in the world who could attract millions of views when dancing the moonwalk. Today we all can. All we have to do is upload a video to Eternal Moonwalk. The cloud is enabling new possibilities of creative collaboration.
What is the future of collective intelligence? All that we can be sure of is that all intelligence is always collective; whether it is the collective intelligence of the nucleotides that make up our DNA or the billions of neurons in our brain as they produce the thoughts in our heads right now. Above is a visualization of the internet made by the Opte Project. It is a map of us – the internet would not exist were it not for us, and we would each be very different without the internet. Just as each individual neuron in our brain is unaware that collectively it is part of a larger self, it is too easy for each of us to forget that, as internet users, we too form parts of a collective whole. And that collective whole is much greater – and different – than the sum of its parts.
In many ways the cloud is leading to the centralization of power as it lures us into trusting just a few major corporations with our personal information. On the other hand it creates an architecture of participation that is enabling a larger percentage of the world population to become active citizens rather than just passive consumers.
This podium I am speaking at right now was once the sole symbol of power in the room. Today a cloud of conversation floats all around us that no one person can control. It is up to you if you would like to participate or not.
In the end, no matter how actively or passively we spend our time online, what we can all be sure of is that one day sooner or later our brain will stop functioning and our stay here on planet Earth will conclude. We will remain, of course, in the memories of our friends and family, and also in the bits and bytes of digital footprints that we leave in the cloud for the generations that follow. What they do with the information we leave behind – or, indeed, what the cloud itself does with the information – will depend on a new type of networked evolution that values sharing and community over proprietary protection.
Invasive red-eared sliders compete with native turtles for food, habitat and basking and nesting sites.
Oregon’s aquatic invasive species
Oregon terrestrial invasive species
Rick Boatner, ODFW Invasive Species coordinator
Martyne Reesman, ODFW Aquatic Invasive Species technician
Invasive species are animals and plants that are not native to an ecosystem and that cause economic or environmental harm. While not all non-native species are invasive, many become a serious problem. They damage Oregon’s habitats and can aggressively compete with native species for food, water and habitat. Choose from the species lists above to learn about specific species. Visit the Oregon Department of Agriculture website to learn about invasive plants.
Journal of Archaeological Science doi:10.1016/j.jas.2009.11.016
Radiocarbon evidence indicates that migrants introduced farming to Britain
Archaeologists disagree about how farming began in Britain. Some argue it was a result of indigenous groups adopting domesticates and cultigens via trade and exchange. Others contend it was the consequence of a migration of farmers from mainland Europe. To shed light on this debate, we used radiocarbon dates to estimate changes in population density between 8000 and 4000 cal BP. We found evidence for a marked and rapid increase in population density coincident with the appearance of cultigens around 6000 cal BP. We also found evidence that this increase occurred first in southern England and shortly afterwards in central Scotland. These findings are best explained by groups of farmers from the Continent independently colonizing England and Scotland, and therefore strongly support the migrant farmers hypothesis.
Date: January 10, 2011
Creator: Ahearn, Raymond J.
Description: Seventeen of the European Union's 27 member states share an economic and monetary union (EMU) with the euro as a single currency. These countries are effectively referred to as the Eurozone. What has become known as the Eurozone crisis began in early 2010 when financial markets were shaken by heightened concerns that the fiscal positions of a number of Eurozone countries, beginning with Greece, were unsustainable. This report provides background information and analysis on the future of the Eurozone in six parts, including discussions on the origins and design challenges of the Eurozone, proposals to define the Eurozone crisis, possible scenarios for the future of the Eurozone, and the implications of the Eurozone crisis for U.S. economic and political interests.
Contributing Partner: UNT Libraries Government Documents Department
I read a very interesting piece in today’s Philadelphia Inquirer about the history of violence in America: Violence Vanquished. Here’s just one part of it:
An even more intractable debate accompanied the rise and fall of lynching, one of the most gruesome forms of violence ever to take root in the United States. Today, we tend to remember lynching as a clandestine crime – a young black man pulled from his bed in the dark of night and brutalized or hanged in the Southern woods. For most of the late 19th and early 20th centuries, though, it was a community phenomenon of almost unthinkable cruelty, in which hundreds if not thousands of people gathered to watch a victim being disemboweled, castrated, tortured, or burned, and then killed.
To modern sensibilities, the injustice once again seems obvious, as do the solutions: Prosecute lynchers, fight for racial justice, strengthen the rule of law, and mobilize public opinion to condemn rather than excuse outbursts of brutality. And yet it took more than 100 years for lynching to begin to disappear from American life, and even longer for Americans to fully acknowledge its horror.
In the meantime, thousands of influential people, including many esteemed lawmakers, argued that lynching was a fact of life, a random act of violence about which nothing could be done. It was not until 2005 that the U.S. Senate, spearheaded by Mary Landrieu, apologized for failing to pass federal antilynching legislation, leaving hundreds of innocent people to be sacrificed to official inaction.
Maybe there is hope. Eventually.
Oracle® Database Concepts
11g Release 2 (11.2)
A process is a mechanism in an operating system that can run a series of steps. The mechanism depends on the operating system. For example, on Linux an Oracle background process is a Linux process. On Windows, an Oracle background process is a thread of execution within a process.
Code modules are run by processes. All connected Oracle Database users must run the following modules to access a database instance:
Application or Oracle Database utility
A database user runs a database application, such as a precompiler program or a database tool such as SQL*Plus, that issues SQL statements to a database.
Oracle database code
Each user has Oracle database code executing on his or her behalf that interprets and processes the application's SQL statements.
A process normally runs in its own private memory area. Most processes can periodically write to an associated trace file (see "Trace Files").
Multiple-process Oracle (also called multiuser Oracle) uses several processes to run different parts of the Oracle Database code and additional processes for the users—either one process for each connected user or one or more processes shared by multiple users. Most databases are multiuser because a primary advantage of a database is managing data needed by multiple users simultaneously.
Each process in a database instance performs a specific job. By dividing the work of the database and applications into several processes, multiple users and applications can connect to an instance simultaneously while the system gives good performance.
A database instance contains or interacts with the following types of processes:
Client processes run the application or Oracle tool code.
Oracle processes run the Oracle database code. Oracle processes include the following subtypes:
Background processes start with the database instance and perform maintenance tasks such as performing instance recovery, cleaning up processes, writing redo buffers to disk, and so on.
Server processes perform work based on a client request.
Note: Server processes, and the process memory allocated in these processes, run in the instance. The instance continues to function when server processes terminate.
Slave processes perform additional tasks for a background or server process.
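In a running instance, one way to see which Oracle processes are background processes and which are server (foreground) processes is to query V$PROCESS. The following query is a minimal sketch; it assumes a connection with privileges to read the V$ views, and it relies on the BACKGROUND column, which is 1 for background processes and null for server processes.

-- Background processes have BACKGROUND = 1; server (foreground) processes have a null value
SELECT PNAME, PROGRAM, BACKGROUND FROM V$PROCESS ORDER BY BACKGROUND NULLS LAST, PNAME;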
The process structure varies depending on the operating system and the choice of Oracle Database options. For example, the code for connected users can be configured for dedicated server or shared server connections. In a shared server architecture, each server process that runs database code can serve multiple client processes.
Figure 15-1 shows a system global area (SGA) and background processes using dedicated server connections. For each user connection, the application is run by a client process that is different from the dedicated server process that runs the database code. Each client process is associated with its own server process, which has its own program global area (PGA).
When a user runs an application such as a Pro*C program or SQL*Plus, the operating system creates a client process (sometimes called a user process) to run the user application. The client application has Oracle Database libraries linked into it that provide the APIs required to communicate with the database.
Client processes differ in important ways from the Oracle processes interacting directly with the instance. The Oracle processes servicing the client process can read from and write to the SGA, whereas the client process cannot. A client process can run on a host other than the database host, whereas Oracle processes cannot.
For example, assume that a user on a client host starts SQL*Plus and connects over the network to database sample on a different host (the database instance is not started):
SQL> CONNECT SYS@inst1 AS SYSDBA
Enter password: *********
Connected to an idle instance.
On the client host, a search of the processes for either sample or sqlplus shows only the sqlplus client process:
% ps -ef | grep -e sample -e sqlplus | grep -v grep
clientuser 29437 29436  0 15:40 pts/1    00:00:00 sqlplus   as sysdba
On the database host, a search of the processes for either sample or sqlplus shows a server process with a nonlocal connection, but no client process:
% ps -ef | grep -e sample -e sqlplus | grep -v grep
serveruser 29441     1  0 15:40 ?        00:00:00 oraclesample (LOCAL=NO)
A connection is a physical communication pathway between a client process and a database instance. A communication pathway is established using available interprocess communication mechanisms or network software. Typically, a connection occurs between a client process and a server process or dispatcher, but it can also occur between a client process and Oracle Connection Manager (CMAN).
A session is a logical entity in the database instance memory that represents the state of a current user login to a database. For example, when a user is authenticated by the database with a password, a session is established for this user. A session lasts from the time the user is authenticated by the database until the time the user disconnects or exits the database application.
A single connection can have 0, 1, or more sessions established on it. The sessions are independent: a commit in one session does not affect transactions in other sessions.
Note: If Oracle Net connection pooling is configured, then it is possible for a connection to drop but leave the sessions intact.
Multiple sessions can exist concurrently for a single database user. As shown in Figure 15-2, user hr can have multiple connections to a database. In dedicated server connections, the database creates a server process on behalf of each connection. Only the client process that causes the dedicated server to be created uses it. In a shared server connection, many client processes access a single shared server process.
Figure 15-3 illustrates a case in which user hr has a single connection to a database, but this connection has two sessions.
Generating an autotrace report of SQL statement execution statistics re-creates the scenario in Figure 15-3. Example 15-1 connects SQL*Plus to the database as user SYSTEM and enables tracing, thus creating a new session (sample output included).
SQL> SELECT SID, SERIAL#, PADDR FROM V$SESSION WHERE USERNAME = USER;

SID SERIAL# PADDR
--- ------- --------
 90      91 3BE2E41C

SQL> SET AUTOTRACE ON STATISTICS;
SQL> SELECT SID, SERIAL#, PADDR FROM V$SESSION WHERE USERNAME = USER;

SID SERIAL# PADDR
--- ------- --------
 88      93 3BE2E41C
 90      91 3BE2E41C
...

SQL> DISCONNECT
The DISCONNECT command in Example 15-1 actually ends the sessions, not the connection. Opening a new terminal and connecting to the instance as a different user, the query in Example 15-2 shows that the connection from Example 15-1 is still active.
SQL> CONNECT dba1@inst1
Password: ********
Connected.
SQL> SELECT PROGRAM FROM V$PROCESS WHERE ADDR = HEXTORAW('3BE2E41C');

PROGRAM
------------------------------------------------
oracle@stbcs09-1 (TNS V1-V3)
See Also:"Shared Server Architecture"
Server processes created on behalf of a database application can perform one or more of the following tasks:
Execute PL/SQL code
Read data blocks from data files into the database buffer cache (the DBWn background process has the task of writing modified blocks back to disk)
Return results in such a way that the application can process the information
In dedicated server connections, the client connection is associated with one and only one server process (see "Dedicated Server Architecture"). On Linux, 20 client processes connected to a database instance are serviced by 20 server processes.
Each client process communicates directly with its server process. This server process is dedicated to its client process for the duration of the session. The server process stores process-specific information and the UGA in its PGA (see "PGA Usage in Dedicated and Shared Server Modes").
In shared server connections, client applications connect over a network to a dispatcher process, not a server process (see "Shared Server Architecture"). For example, 20 client processes can connect to a single dispatcher process.
The dispatcher process receives requests from connected clients and puts them into a request queue in the large pool (see "Large Pool"). The first available shared server process takes the request from the queue and processes it. Afterward, the shared server places the result into the dispatcher response queue. The dispatcher process monitors this queue and transmits the result to the client.
Like a dedicated server process, a shared server process has its own PGA. However, the UGA for a session is in the SGA so that any shared server can access session data.
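For illustration, shared server is enabled through initialization parameters rather than through application code. The following statements are a minimal sketch only; the values are examples, and both parameters can be changed with ALTER SYSTEM on a running instance.

-- Start five shared server processes (the database can add more as the workload requires)
ALTER SYSTEM SET SHARED_SERVERS = 5;
-- Start two dispatchers that listen for TCP connections
ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=2)';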
A multiprocess Oracle database uses some additional processes called background processes. The background processes perform maintenance tasks required to operate the database and to maximize performance for multiple users.
Each background process has a separate task, but works with the other processes. For example, the LGWR process writes data from the redo log buffer to the online redo log. When a filled log file is ready to be archived, LGWR signals another process to archive the file.
Oracle Database creates background processes automatically when a database instance starts. An instance can have many background processes, not all of which always exist in every database configuration. The following query lists the background processes running on your database:
SELECT PNAME FROM V$PROCESS WHERE PNAME IS NOT NULL ORDER BY PNAME;
This section includes the following topics:
See Also: Oracle Database Reference for descriptions of all the background processes
The mandatory background processes are present in all typical database configurations. These processes run by default in a database instance started with a minimally configured initialization parameter file (see Example 13-1).
This section describes the following mandatory background processes:
Oracle Database Reference for descriptions of other mandatory processes, including MMAN, DIAG, VKTM, DBRM, and PSP0
Oracle Real Application Clusters Administration and Deployment Guide and Oracle Clusterware Administration and Deployment Guide for more information about background processes specific to Oracle RAC and Oracle Clusterware
The process monitor (PMON) monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally. PMON is responsible for cleaning up the database buffer cache and freeing resources that the client process was using. For example, PMON resets the status of the active transaction table, releases locks that are no longer required, and removes the process ID from the list of active processes.
PMON also registers information about the instance and dispatcher processes with the Oracle Net listener (see "The Oracle Net Listener"). When an instance starts, PMON polls the listener to determine whether it is running. If the listener is running, then PMON passes it relevant parameters. If it is not running, then PMON periodically attempts to contact it.
The system monitor process (SMON) is in charge of a variety of system-level cleanup duties. The duties assigned to SMON include:
Recovering terminated transactions that were skipped during instance recovery because of file-read or tablespace offline errors. SMON recovers the transactions when the tablespace or file is brought back online.
Cleaning up unused temporary segments. For example, Oracle Database allocates extents when creating an index. If the operation fails, then SMON cleans up the temporary space.
Coalescing contiguous free extents within dictionary-managed tablespaces.
SMON checks regularly to see whether it is needed. Other processes can call SMON if they detect a need for it.
The database writer process (DBWn) writes the contents of database buffers to data files. DBWn processes write modified buffers in the database buffer cache to disk (see "Database Buffer Cache").
Although one database writer process (DBW0) is adequate for most systems, you can configure additional processes—DBW1 through DBW9 and DBWa through DBWj—to improve write performance if your system modifies data heavily. These additional DBWn processes are not useful on uniprocessor systems.
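The number of database writer processes is controlled by the DB_WRITER_PROCESSES initialization parameter. The statement below is a sketch only: the value 4 is illustrative, and because the parameter is static, the change assumes the instance uses a server parameter file (SPFILE) and takes effect at the next restart.

-- Request four database writer processes (DBW0 through DBW3) at the next instance startup
ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE = SPFILE;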
The DBWn process writes dirty buffers to disk under the following conditions:
When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, it signals DBWn to write. DBWn writes dirty buffers to disk asynchronously if possible while performing other processing.
DBWn periodically writes buffers to advance the checkpoint, which is the position in the redo thread from which instance recovery begins (see "Overview of Checkpoints"). The log position of the checkpoint is determined by the oldest dirty buffer in the buffer cache.
In many cases the blocks that DBWn writes are scattered throughout the disk. Thus, the writes tend to be slower than the sequential writes performed by LGWR. DBWn performs multiblock writes when possible to improve efficiency. The number of blocks written in a multiblock write varies by operating system.
See Also: Oracle Database Performance Tuning Guide for advice on configuring, monitoring, and tuning DBWn
The log writer process (LGWR) manages the redo log buffer. LGWR writes one contiguous portion of the buffer to the online redo log. By separating the tasks of modifying database buffers, performing scattered writes of dirty buffers to disk, and performing fast sequential writes of redo to disk, the database improves performance.
In the following circumstances, LGWR writes all redo entries that have been copied into the buffer since the last time it wrote:
A user commits a transaction (see "Committing Transactions").
An online redo log switch occurs.
Three seconds have passed since LGWR last wrote.
The redo log buffer is one-third full or contains 1 MB of buffered data.
DBWn must write modified buffers to disk.
Before DBWn can write a dirty buffer, redo records associated with changes to the buffer must be written to disk (the write-ahead protocol). If DBWn finds that some redo records have not been written, it signals LGWR to write the records to disk and waits for LGWR to complete before writing the data buffers to disk.
Oracle Database uses a fast commit mechanism to improve performance for committed transactions. When a user issues a COMMIT statement, the transaction is assigned a system change number (SCN). LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the commit SCN and transaction's redo entries.
The redo log buffer is circular. When LGWR writes redo entries from the redo log buffer to an online redo log file, server processes can copy new entries over the entries in the redo log buffer that have been written to disk. LGWR normally writes fast enough to ensure that space is always available in the buffer for new entries, even when access to the online redo log is heavy.
The atomic write of the redo entry containing the transaction's commit record is the single event that determines the transaction has committed. Oracle Database returns a success code to the committing transaction although the data buffers have not yet been written to disk. The corresponding changes to data blocks are deferred until it is efficient for DBWn to write them to the data files.
Note: LGWR can write redo log entries to disk before a transaction commits. The redo entries become permanent only if the transaction later commits.
When activity is high, LGWR can use group commits. For example, a user commits, causing LGWR to write the transaction's redo entries to disk. During this write other users commit. LGWR cannot write to disk to commit these transactions until its previous write completes. Upon completion, LGWR can write the list of redo entries of waiting transactions (not yet committed) in one operation. In this way, the database minimizes disk I/O and maximizes performance. If commit requests continue at a high rate, then every write by LGWR can contain multiple commit records.
LGWR writes synchronously to the active mirrored group of online redo log files. If a log file is inaccessible, then LGWR continues writing to other files in the group and writes an error to the LGWR trace file and the alert log. If all files in a group are damaged, or if the group is unavailable because it has not been archived, then LGWR cannot continue to function.
Oracle Database Performance Tuning Guide for information about how to monitor and tune the performance of LGWR
The checkpoint process (CKPT) updates the control file and data file headers with checkpoint information and signals DBWn to write blocks to disk. Checkpoint information includes the checkpoint position, SCN, location in online redo log to begin recovery, and so on. As shown in Figure 15-4, CKPT does not write data blocks to data files or redo blocks to online redo log files.
See Also:"Overview of Checkpoints"
The manageability monitor process (MMON) performs many tasks related to the Automatic Workload Repository (AWR), such as writing when a metric violates its threshold value, taking snapshots, and capturing statistics for recently modified SQL objects.
The manageability monitor lite process (MMNL) writes statistics from the Active Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.
In a distributed database, the recoverer process (RECO) automatically resolves failures in distributed transactions. The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.
See Also: Oracle Database Administrator's Guide for more information about transaction recovery in distributed systems
An optional background process is any background process not defined as mandatory. Most optional background processes are specific to tasks or features. For example, background processes that support Oracle Streams Advanced Queuing (AQ) or Oracle Automatic Storage Management (Oracle ASM) are only available when these features are enabled.
This section describes some common optional processes:
The archiver processes (ARCn) copy online redo log files to offline storage after a redo log switch occurs. These processes can also collect transaction redo data and transmit it to standby database destinations. ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled.
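To check whether ARCn processes will be active, you can confirm the archiving mode of the database. The sketch below assumes a privileged SQL*Plus session; ARCHIVE LOG LIST is a SQL*Plus command rather than a SQL statement, and the query reads the same information from V$DATABASE.

-- Show the log mode, the archive destination, and whether automatic archiving is enabled
ARCHIVE LOG LIST
-- The same mode (ARCHIVELOG or NOARCHIVELOG) is also available from V$DATABASE
SELECT LOG_MODE FROM V$DATABASE;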
Oracle Database uses job queue processes to run user jobs, often in batch mode. A job is a user-defined task scheduled to run one or more times. For example, you can use a job queue to schedule a long-running update in the background. Given a start date and a time interval, the job queue processes attempt to run the job at the next occurrence of the interval.
Oracle Database manages job queue processes dynamically, thereby enabling job queue clients to use more job queue processes when required. The database releases resources used by the new processes when they are idle.
Dynamic job queue processes can run a large number of jobs concurrently at a given interval. The sequence of events is as follows:
The job coordinator process (CJQ0) is automatically started and stopped as needed by Oracle Scheduler (see "Oracle Scheduler"). The coordinator process periodically selects jobs that need to be run from the system JOB$ table. New jobs selected are ordered by time.
The coordinator process dynamically spawns job queue slave processes (Jnnn) to run the jobs.
The job queue process runs one of the jobs that was selected by the CJQ0 process for execution. Each job queue process runs one job at a time to completion.
After the process finishes execution of a single job, it polls for more jobs. If no jobs are scheduled for execution, then it enters a sleep state, from which it wakes up at periodic intervals and polls for more jobs. If the process does not find any new jobs, then it terminates after a preset interval.
The initialization parameter JOB_QUEUE_PROCESSES represents the maximum number of job queue processes that can concurrently run on an instance. However, clients should not assume that all job queue processes are available for job execution.
Note: The coordinator process is not started if the initialization parameter JOB_QUEUE_PROCESSES is set to 0.
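For illustration, the following statement raises the limit on a running instance; the value 10 is an example only, and SCOPE = BOTH assumes the instance was started with a server parameter file.

-- Allow up to ten Jnnn job queue slave processes to run concurrently
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10 SCOPE = BOTH;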
The flashback data archiver process (FBDA) archives historical rows of tracked tables into Flashback Data Archives. When a transaction containing DML on a tracked table commits, this process stores the pre-image of the rows into the Flashback Data Archive. It also keeps metadata on the current rows.
FBDA automatically manages the flashback data archive for space, organization, and retention. Additionally, the process keeps track of how far the archiving of tracked transactions has occurred.
The SMCO process coordinates the execution of various space management related tasks, such as proactive space allocation and space reclamation. SMCO dynamically spawns slave processes (Wnnn) to implement the task.
See Also: Oracle Database Advanced Application Developer's Guide to learn about Flashback Data Archive
Slave processes are background processes that perform work on behalf of other processes. This section describes some slave processes used by Oracle Database.
See Also: Oracle Database Reference for descriptions of Oracle Database slave processes
I/O slave processes (Innn) simulate asynchronous I/O for systems and devices that do not support it. In asynchronous I/O, there is no timing requirement for transmission, enabling other processes to start before the transmission has finished.
For example, assume that an application writes 1000 blocks to a disk on an operating system that does not support asynchronous I/O. Each write occurs sequentially and waits for a confirmation that the write was successful. With asynchronous I/O, the application can write the blocks in bulk and perform other work while waiting for a response from the operating system that all blocks were written.
To simulate asynchronous I/O, one process oversees several slave processes. The invoker process assigns work to each of the slave processes, who wait for each write to complete and report back to the invoker when done. In true asynchronous I/O the operating system waits for the I/O to complete and reports back to the process, while in simulated asynchronous I/O the slaves wait and report back to the invoker.
The database supports different types of I/O slaves, including the following:
I/O slaves for Recovery Manager (RMAN)
When using RMAN to back up or restore data, you can make use of I/O slaves for both disk and tape devices.
Database writer slaves
If it is not practical to use multiple database writer processes, such as when the computer has one CPU, then the database can distribute I/O over multiple slave processes. DBWR is the only process that scans the buffer cache LRU list for blocks to be written to disk. However, I/O slaves perform the I/O for these blocks.
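Both kinds of I/O slaves are enabled through initialization parameters. The statements below are a sketch with illustrative values; the changes are written to the server parameter file, so they assume an SPFILE is in use, and DBWR_IO_SLAVES takes effect only after a restart.

-- Simulate asynchronous I/O for the database writer with four I/O slave processes
ALTER SYSTEM SET DBWR_IO_SLAVES = 4 SCOPE = SPFILE;
-- Use I/O slaves when RMAN backs up to tape
ALTER SYSTEM SET BACKUP_TAPE_IO_SLAVES = TRUE SCOPE = SPFILE;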
In parallel execution or parallel processing, multiple processes work together simultaneously to run a single SQL statement. By dividing the work among multiple processes, Oracle Database can run the statement more quickly. For example, four processes handle four different quarters in a year instead of one process handling all four quarters by itself.
Parallel execution reduces response time for data-intensive operations on large databases such as data warehouses. Symmetric multiprocessing (SMP) and clustered system gain the largest performance benefits from parallel execution because statement processing can be split up among multiple CPUs. Parallel execution can also benefit certain types of OLTP and hybrid systems.
In Oracle RAC systems, the service placement of a particular service controls parallel execution. Specifically, parallel processes run on the nodes on which you have configured the service. By default, Oracle Database runs the parallel process only on the instance that offers the service used to connect to the database. This does not affect other parallel operations such as parallel recovery or the processing of GV$ queries.
Oracle Real Application Clusters Administration and Deployment Guide for considerations regarding parallel execution in Oracle RAC environments
In serial execution, a single server process performs all necessary processing for the sequential execution of a SQL statement. For example, to perform a full table scan such as SELECT * FROM employees, one server process performs all of the work, as shown in Figure 15-5.
In parallel execution, the server process acts as the parallel execution coordinator responsible for parsing the query, allocating and controlling the slave processes, and sending output to the user. Given a query plan for a SQL query, the coordinator breaks down each operator in a SQL query into parallel pieces, runs them in the order specified in the query, and integrates the partial results produced by the slave processes executing the operators.
Figure 15-6 shows a parallel scan of the employees table. The table is divided dynamically (dynamic partitioning) into load units called granules. Each granule is a range of data blocks of the table read by a single slave process, called a parallel execution server, which uses Pnnn as a name format.
The database maps granules to execution servers at execution time. When an execution server finishes reading the rows corresponding to a granule, and when granules remain, it obtains another granule from the coordinator. This operation continues until the table has been read. The execution servers send results back to the coordinator, which assembles the pieces into the desired full table scan.
The number of parallel execution servers assigned to a single operation is the degree of parallelism for an operation. Multiple operations within the same SQL statement all have the same degree of parallelism.
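As a rough illustration of the granule mechanism described above, the following Python sketch simulates a coordinator handing out block ranges to execution servers and assembling their partial results. The in-memory table, the granule size, and the Pnnn-style worker names are invented for the example and are not Oracle internals.

```python
import queue
import threading

TABLE = list(range(1, 1001))   # pretend rows of the employees table
GRANULE_SIZE = 100             # blocks per granule (arbitrary)

def execution_server(name, granules, results):
    """Scan one granule at a time until the coordinator has no more."""
    while True:
        try:
            start, end = granules.get_nowait()
        except queue.Empty:
            return
        partial = TABLE[start:end]          # read the rows in this granule
        results.put((name, partial))        # send results to the coordinator

def coordinator(degree_of_parallelism=4):
    granules = queue.Queue()
    results = queue.Queue()
    for start in range(0, len(TABLE), GRANULE_SIZE):
        granules.put((start, start + GRANULE_SIZE))
    servers = [threading.Thread(target=execution_server,
                                args=(f"P{n:03d}", granules, results))
               for n in range(degree_of_parallelism)]
    for s in servers:
        s.start()
    for s in servers:
        s.join()
    rows = []
    while not results.empty():              # assemble the full table scan
        _, partial = results.get()
        rows.extend(partial)
    return sorted(rows)

assert coordinator() == TABLE
```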
| 3.260821 |
Black cap; black throat and lower neck (like a bib); white cheek patches; white chest and belly; gray back, wings, and tail; buffy patches on the flanks. 12 cm (4.75 in) in length. The Carolina Chickadee's call is chick-a-dee-dee-dee or a shortened version of the same. The song of this species is usually four whistled notes, fee-bee fee-bay.
The breeding season begins in early April, peaks later that month until early May, and extends through mid-June. Breeding habitat includes a variety of wooded and forested areas. This species is a cavity nester and will nest in snags, trees, rotten fenceposts, or nest boxes. It will excavate its own cavity, use an old woodpecker cavity, or find a natural cavity. The cavity is lined with moss, plant material, down, and feathers. The female lays 5-8 (usually 6) eggs that both adults incubate for 11-12 days. The young are altricial and fledge 13-17 days after hatching.
The Carolina Chickadee prefers forested or wooded habitats. It eats primarily insects and also spiders, fruits, and seeds. It forages by searching among tree branches, trunks, pine cones, and dead leaf clusters. It also frequents bird feeders. This species is a year-round resident, and does not migrate.
The Carolina Chickadee is found mostly in the Southeast, but its range extends as far north as Delaware, central Ohio, Indiana, and Illinois and as far west as Oklahoma and central Texas. In the Southeast, this species is common to very common throughout except for extreme southern Florida.
This species is common in the Southeast, and is not targeted for any special attention.
The species most similar to the Carolina Chickadee is the Black-capped Chickadee. The Black-capped Chickadee is slightly larger and has a lower pitch to the chick-a-dee-dee-dee call. The Black-capped Chickadee also has a shorter song, being 2 or 3 whistled notes, fee-bee or fee-bee-bee. The song is the most reliable identifier, because their appearances are very similar. The bib of the Black-capped Chickadee is slightly larger and the buff color on the flanks is more extensive.
| 3.329579 |
Science Supplies and Services
Model for Mass Spectrometer
How many high school or college students have seen the insides of a working mass spectrometer? How many know how and why it works? What is it used for? Is the print-out the only important feature of the instrument, or should our students understand how the information was obtained? A model of a mass spectrometer can be used to encourage this understanding. The S17 Science Model Mass Spectrometer is designed to answer a number of questions, and to teach a number of very interesting concepts. The instruction sheets that accompany this kit outline, in great detail, some of the best ways of using this kit to:
The instruction sheets include information and diagrams to help you set up the apparatus. A set of sample questions, comments and answers that have been elicited and used in various classrooms is also included. Students in kindergarten through college have marveled at the use of this kit in demonstrations. The kit has already been purchased by high schools and universities throughout the world for demonstrating the mass spectrometer to students of all academic backgrounds. You will want one of these in your school. It is available in two different kits, EQ 129 and EQ 131. The instruction sheets are the same in both kits.
| 3.238546 |
Wiktionary:About Low German
This is a Wiktionary policy, guideline or common practices page. This is a draft proposal. It is unofficial, and it is unknown whether it is widely accepted by Wiktionary editors.
Low German/Saxon is a Germanic lect, a dialect continuum spoken in northern Germany, the eastern Netherlands, and numerous places outside central Europe. It has three main forms:
- (German) Low German, spoken in northern Germany
- Dutch Low Saxon, spoken in the eastern Netherlands
- Plautdietsch (also called Mennonite Low German), spoken in Canada, the United States, and elsewhere
Low German/Saxon is related to Dutch, to the Frisian languages, to English, and to German. In some cases, Low German expressions are intelligible to English speakers: He was en old Mann is one Low German sentence English-speakers can understand.
On Wiktionary, the variants of Low German spoken in Germany are represented by the code nds-de and are covered by the page Wiktionary:About German Low German.
The variants spoken in the Netherlands are represented by the code nds-nl and covered by Wiktionary:About Dutch Low Saxon.
Plautdietsch is represented by the code pdt and covered by Wiktionary:About Plautdietsch.
What to call Low German/Saxon on Wiktionary
Low German is the most common name of the dialect continuum, and is the name used on Wiktionary. It is a calque of Plattdüdesch (and its forms) or Nedderdüdesch. Platt means "flat" and is interpreted as "relating to the lowlands". At the time this name spread (in the Renaissance), however, plat had the general meaning of "intelligible". Nedder, on the other hand, actually means "nether" and relates to the lowlands in contrast to the German highlands, the Alps, Harz mountains, etc. Düdesch is related to the English word Dutch and the Dutch word Duits, and referred (at the time it spread) to any continental West Germanic language.
Low Saxon is another name often used in English. This name derives from that of the Saxon tribe which spoke Old Saxon, the lect from which Low German evolved. This name (as Nedersaksisch) is the most common name of the language in the Netherlands. However, there is a dialect group called "Low Saxon" spoken in Lower Saxony in Germany, which Low German should not be confused with.
On Wiktionary, the form(s) of Low German spoken in Germany are called Low German, the form(s) spoken in the Netherlands are called Dutch Low Saxon, and the form spoken by Mennonites and others outside central Europe are called Plautdietsch.
Historical stages of the Saxon language
Low German developed from the language Old Saxon. The earliest predecessors of the language were the West Germanic dialects spoken by the Saxon tribes. Middle Low German was heavily influenced by the languages of the Hanseatic League's trading partners: Old/Middle Danish, Swedish, and Norwegian.
Key to pronunciation
About the nature of long vowels
Both Low German and Middle Low German have two kinds of vowel sounds that are traditionally called 'long', for all vowels but the closed ones (i.e. /uː/, /yː/, /iː/). The first are diphthongs that descend from earlier long vowels, and the second are the same as the equivalent short vowel but pronounced long. These latter ones are called "tonlang" (sound-long) in German. The sound-long vowels are often vowels which were short in Old Saxon but stood in an open syllable, and thus were lengthened by regular sound change.
Some confusion exists about the terminology of these vowels. Traditional grammars do not refer to the diphthongs as such, but call them simply "long vowels", and the speakers of most Low German dialects often think of them in those terms (much as the sound of the English eye is considered a ‘long i’ in traditional English grammar despite its diphthongal character). When speaking of "diphthongisation", especially regarding the dialects of Mecklenburg-Vorpommern, an author might refer to a more open version of the diphthong rather than the existence of a diphthong in contrast to a monophthong. For example, someone pronouncing the "long E" as /eɪ/ might refer to /ɛɪ/ and /aɪ/ as "diphthongs" but consider /eɪ/ a "normal E".
The following is not a complete depiction of all sounds in all dialects but is an exemplative overview.
|Grapheme||Short||Diphthong (traditional "long")||Sound-long|
|⟨A⟩||/a/ or /ɒ/||/oɒ/||/aː/ or /ɒː/|
|⟨E⟩||/ɛ/||/eɪ/ or /ɛɪ/||/ɛː/|
|⟨O⟩||/ɔ/||/oʊ/ or /ɔʊ/||/ɔː/|
|⟨Ö⟩ (and other spellings)||/œ/||/øʏ/ or /œʏ/||/œː/|
Some of the sound-long vowels have had special characters in some areas or in the writing of some authors. The most widespread are "ę" for /ɛː/, "æ" for /ɶː/ (and to a much lesser extent for /œː/) and "œ" for /œː/. "Ä" has been used for both /ɛː/ and /ɶː/.
Key to dialectal pronunciation
In general, the most closed version (on the left) is spoken in the west (e.g. Lower Saxony), while the most open (on the right) versions are from the east, especially rural (not urban) parts of Mecklenburg-Vorpommern.
E = /ɛɪ/ = /eɪ/~/ɛɪ/~/aɪ/
O = /ɔʊ/ = /oʊ/~/ɔʊ/~/ɒʊ/
Ö = /œʏ/ = /øʏ/~/œʏ/~/ɶʏ/; [eɪ] (Königsberg, Low Prussian)
Ü etc. (long) = /yː/; [iː] (Low Prussian)
Ü etc. (short) = [ʏ]; [ɪ] (Low Prussian)
R = /r/ = [r]~[ɾ] (except in syllable coda)
A = /ʌ/ = [a]~[æ]~[ʌ]~[ɒ]
The Merger of monophthongal A and O
Due to the relative similarity of the sounds of lengthened A and lengthened O, both were used somewhat interchangeably in Middle Low German writing. Later, "A" replaced the letter "O" in the quasi-standard that Middle Low German had developed. This was because, at some point in history, most Low German dialects merged the sound-long A with the sound-long O. Later many merged the long A with the sound-long A as well. Which sound was kept and which was lost was random throughout the dialects. In addition, Low German orthography became more varied and also more randomized in later periods, so that words might be written with either A or O in a region (e.g. apen and open), while not necessarily giving away the pronunciation.
Comparison of Low German and Dutch Low Saxon orthographies
Some important differences between the Dutch-influenced orthography of Dutch Low Saxon and the German-influenced orthography of Low German pertain to the representation of the following:
|Feature||Dutch Low Saxon||Low German|
|IPA /s/||s||s, ss, ß, z|
|IPA /ø/, /œ/||eu||ö (rarely æ for /œ/)|
|vowel length in closed syllables||doubled vowel||doubled consonant or H|
|capitalisation of nouns||No||Yes|
- For example, compare Dutch Low Saxon zes (“six”) and kruus (“cross”) with German Low German sess (“six”) and Krüüz (“cross”).
- Dutch speakers usually use a double vowel (laand) to show the length of a vowel in a closed syllable, German speakers use an h (wahnen). Both use an e after an i. The difference can be seen in the spellings of the word which means "year", which is pronounced either /jɒːɾ/ or /jɔːɾ/ or with /-ɐ/ instead of /ɾ/: it is written as jaar and joar in Dutch Low Saxon, but as Jahr, Johr or some variant thereof in Germany.
- Influenced by standard High German, which capitalizes nouns, many Low German authors also capitalize nouns, and capitalized nouns are the norm (lemma form) for Low German. Many Dutch Low Saxon speakers do not capitalize nouns, and uncapitalized nouns are the norm (lemma form) for Dutch Low Saxon.
| 3.400891 |
In library science, authority control is a process that organizes library catalog and bibliographic information by using a single, distinct name for each topic. These one-of-a-kind headings are applied consistently throughout the catalog, and work with other organizing data such as linkages and cross references. Each heading is described briefly in terms of its scope and usage, and this organization helps the library staff maintain the catalog and make it user-friendly for researchers. The word authority in authority control derives from its initial use in identifying authors, and does not have the usual meaning of authority as a power relationship, although both senses of the word authority are related etymologically.
Cataloguers assign each subject—such as an author, book, series or corporation—a particular unique heading term which is then used consistently, uniquely, and unambiguously to describe all references to that same subject, even if there are variations such as different spellings, pen names, or aliases. The unique header can guide users to all relevant information including related or collocated subjects. Authority records can be combined into a database and called an authority file, and maintaining and updating these files as well as "logical linkages" to other files within them is the work of librarians and other information cataloguers. Accordingly, authority control is an example of controlled vocabulary and of bibliographic control.
While in theory any piece of information is amenable to authority control such as personal and corporate names, uniform titles, series, and subjects, library cataloguers typically focus on author names and book titles. Subject headings from the Library of Congress fulfill a function similar to authority records, although they are usually considered separately. As time passes, information changes, prompting needs for reorganization. According to one view, authority control is not about creating a perfect seamless system but rather it is an ongoing effort to keep up with these changes and try to bring "structure and order" to the task of helping users find information.
- Better researching. Authority control helps researchers get a handle on a specific subject with less wasted effort. A well-designed digital catalog/database enables a researcher to query a few words of an entry to bring up the already established term or phrase, thus improving accuracy and saving time.
- Makes searching more predictable. It can be used in conjunction with keyword searching using "and" or "not" or "or" or other Boolean operators on a web browser. It increases chances that a given search will return relevant items.
- Consistency of records.
- Organization and structure of information.
- Efficiency for cataloguers. The process of authority control is not only of great help to researchers searching for a particular subject to study, but it can help cataloguers organize information as well. Cataloguers can use authority records when trying to categorize new items, since they can see which records have already been catalogued and can therefore avoid unnecessary work.
- Maximises library resources.
- Easier to maintain the catalog. It enables cataloguers to detect and correct errors. In some instances, software programs support workers tasked with maintaining the catalog to do ongoing tasks such as automated clean-up. It helps creators and users of metadata.
- Fewer errors. It can help catch errors caused by typos or misspellings which can sometimes accumulate over time, sometimes known as quality drift. For example, machines can catch misspellings such as "Elementary school techers" and "Pumpkilns" which can then be corrected by library staff.
Differing names describe the same subject
Sometimes within a catalog there are different names or spellings for only one person or subject. This can bring confusion since researchers may miss some information. Authority control is used by cataloguers to collocate materials that logically belong together but which present themselves differently. Records are used to establish uniform titles which collocate all versions of a given work under one unique heading even when such versions are issued under different titles. With authority control, one unique preferred name represents all variations and will include different variations, spellings and misspellings, uppercase versus lowercase variants, differing dates, and so forth. For example, in Wikipedia, the subject of Princess Diana is described by an article Diana, Princess of Wales as well as numerous other descriptors, but both Princess Diana and Diana, Princess of Wales describe the same person; an authority record would choose one title as the preferred one for consistency. In an online library catalog, various entries might look like the following:
- Diana. (1)
- Diana, Princess of Wales. (1)
- Diana, Princess of Wales, 1961–1997 (13)
- Diana, Princess of Wales 1961–1997 (1)
- Diana, Princess of Wales, 1961–1997 (2)
- DIANA, PRINCESS OF WALES, 1961–1997. (1)
- Diana, Princess of Wales, — Iconography. (2)
These different terms describe the same person. Accordingly, authority control reduces these entries to one unique entry or official authorized heading, sometimes termed an access point:
- Diana, Princess of Wales, 1961–1997
|Library||Heading|
|National Library of the Netherlands||Diana, prinses van Wales, 1961-1997|
|Virtual International Authority File||VIAF ID: 107032638|
|Wikipedia||Diana, Princess of Wales|
|WorldCat||Diana Princess of Wales 1961-1997|
|German National Library||Diana Wales, Prinzessin 1961-1997|
|U.S. Library of Congress||Diana, Princess of Wales, 1961-1997|
|Biblioteca Nacional de España||Windsor, Diana, Princess of Wales|
|Getty Union List of Artist Names||Diana, Princess of Wales English noble and patron, 1961-1997|
Generally there are different authority file headings chosen by different national libraries, possibly inviting confusion, but there are different approaches internationally to try to lessen the confusion. One international effort to prevent such confusion is the Virtual International Authority File which is a collaborative attempt to provide a single heading for a particular subject. It is a way to standardize information from different national libraries such as the German National Library and the United States Library of Congress. The idea is to create a single worldwide virtual authority file. For example, the German National Library's term for Princess Diana is Diana Wales, Prinzessin 1961-1997 while the United States Library of Congress prefers the term Diana, Princess of Wales, 1961-1997; other national libraries have other choices. The Virtual International Authority File choice for all of these variations is VIAF ID: 107032638—that is, a common number representing all of these variations.
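As a loose sketch of the idea, the snippet below collapses several of the variant headings from the table onto one authorized form. The normalization rule, the dictionary-based authority file, and the resolve helper are simplifications invented for this example, not any library system's actual API.

```python
# Map normalized variant forms to one authorized heading (illustrative data
# drawn from the table above; the normalization rule is an assumption).
AUTHORIZED = "Diana, Princess of Wales, 1961-1997"

VARIANTS = [
    "Diana, prinses van Wales, 1961-1997",
    "Diana Princess of Wales 1961-1997",
    "Diana Wales, Prinzessin 1961-1997",
    "DIANA, PRINCESS OF WALES, 1961-1997.",
    "Windsor, Diana, Princess of Wales",
]

def normalize(heading):
    """Crude normalization: lowercase and strip punctuation and whitespace."""
    return "".join(ch for ch in heading.lower() if ch.isalnum())

AUTHORITY_FILE = {normalize(v): AUTHORIZED for v in VARIANTS}
AUTHORITY_FILE[normalize(AUTHORIZED)] = AUTHORIZED

def resolve(heading):
    """Return the authorized heading for any known variant, else None."""
    return AUTHORITY_FILE.get(normalize(heading))

print(resolve("diana  princess of wales 1961-1997"))
# -> "Diana, Princess of Wales, 1961-1997"
```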
Wikipedia prefers the term Diana, Princess of Wales, but at the bottom of Wikipedia's page about her, there are links to various international cataloguing efforts for reference purposes.
Same name describes two different subjects
Sometimes two different authors have been published under the same name. This can happen if there is a title which is identical to another title or to a collective uniform title. This, too, can cause confusion. Different authors can be distinguished correctly from each other by, for example, adding a middle initial to one of the names; in addition, other information can be added to one entry to clarify the subject, such as birth year, death year, range of active years such as 1918–1965 when the person flourished, or a brief descriptive epithet. When cataloguers come across different subjects with similar or identical headings, they can disambiguate them using authority control.
Authority records and files
A customary way of enforcing authority control in a bibliographic catalog is to set up a separate index of authority records, which relates to and governs the headings used in the main catalog. This separate index is often referred to as an "authority file." It contains an indexable record of all decisions made by cataloguers in a given library (or—as is increasingly the case—cataloguing consortium), which cataloguers consult when making, or revising, decisions about headings. As a result, the records contain documentation about sources used to establish a particular preferred heading, and may contain information discovered while researching the heading which may be useful.
While authority files provide information about a particular subject, their primary function is not to provide information but to organize it. They contain enough information to establish that a given author or title is unique, but that is all; irrelevant but interesting information is generally excluded. Although practices vary internationally, authority records in the English-speaking world generally contain:
- Headings show the preferred title chosen as the official and authorized version. It is important that the heading be unique; if there is a conflict with an identical heading, then one of the two will have to be chosen:
Since the headings function as access points, making sure that they are distinct and not in conflict with existing entries is important. For example, the English novelist William Collins (1824–89), whose works include The Moonstone and The Woman in White, is better known as Wilkie Collins. Cataloguers have to decide which name the public would most likely look under, and whether to use a see also reference to link alternative forms of an individual's name.
- Cross references are other forms of the name or title that might appear in the catalog and include:
- see references are forms of the name or title that describe the subject but which have been passed over or deprecated in favor of the authorized heading form
- see also references point to other forms of the name or title that are also authorized. These see also references generally point to earlier or later forms of a name or title.
- Statement(s) of justification is a brief account made by the cataloguer about particular information sources used to determine both authorized and deprecated forms. Sometimes this means citing the title and publication date of the source, the location of the name or title on that source, and the form in which it appears on that source.
For example, the Irish writer Brian O'Nolan, who lived from 1911 to 1966, wrote under many pen names such as Flann O'Brien and Myles na Gopaleen. Catalogers at the United States Library of Congress chose one form—"O'Brien, Flann, 1911–1966"—as the official heading. The example contains all three elements of a valid authority record: the first heading O'Brien, Flann, 1911–1966 is the form of the name that the Library of Congress chose as authoritative. In theory, every record in the catalog that represents a work by this author should have this form of the name as its author heading. What follows immediately below the heading beginning with Na Gopaleen, Myles, 1911–1966 are the see references. These forms of the author's name will appear in the catalog, but only as transcriptions and not as headings. If a user queries the catalog under one of these variant forms of the author's name, he or she would receive the response: "See O’Brien, Flann, 1911–1966." There is an additional spelling variant of the Gopaleen name: "Na gCopaleen, Myles, 1911–1966" has an extra C inserted because the author also employed the non-anglicized Irish spelling of his pen-name, in which the capitalized C shows the correct root word while the preceding g indicates its pronunciation in context. So if a library user comes across this spelling variant, he or she will be led to the same author regardless. See also references, which point from one authorized heading to another authorized heading, are exceedingly rare for personal name authority records, although they often appear in name authority records for corporate bodies. The final four entries in this record beginning with His At Swim-Two-Birds ... 1939. constitute the justification for this particular form of the name: it appeared in this form on the 1939 edition of the author's novel At Swim-Two-Birds, whereas the author's other noms de plume appeared on later publications.
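To show the three elements side by side, here is a small, hypothetical data structure for the record described above. The field names and the lookup helper are this example's own invention, not MARC or any particular catalog's format.

```python
# A toy authority record with the three elements discussed above:
# the authorized heading, cross references, and a justification.
authority_record = {
    "heading": "O'Brien, Flann, 1911-1966",
    "see_refs": [                      # deprecated forms -> "See" the heading
        "Na Gopaleen, Myles, 1911-1966",
        "Na gCopaleen, Myles, 1911-1966",
        "O'Nolan, Brian, 1911-1966",
    ],
    "see_also_refs": [],               # rare for personal name records
    "justification": "His At Swim-Two-Birds ... 1939.",
}

def lookup(query, record):
    """Direct users who search a deprecated form to the authorized heading."""
    if query == record["heading"]:
        return query
    if query in record["see_refs"]:
        return f'See {record["heading"]}'
    return None

print(lookup("Na Gopaleen, Myles, 1911-1966", authority_record))
# -> "See O'Brien, Flann, 1911-1966"
```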
Access control
The act of choosing a single authorized heading to represent all forms of a name is often difficult, sometimes arbitrary and on occasion politically sensitive. An alternative is the idea of access control, where variant forms of a name are related without the endorsement of one particular form.
Authority control and cooperative cataloging
Before the advent of digital Online public access catalogs and the Internet, creating and maintaining a library's authority files was generally carried out by individual cataloging departments within each library—that is, if such cataloguing was done at all. This often resulted in substantial disagreement between different libraries over which form of a given name was considered authoritative. As long as a library's catalog was internally consistent, differences between catalogs did not matter greatly.
However, even before the Internet revolutionized the way libraries go about cataloging their materials, catalogers began moving toward the establishment of cooperative consortia, such as OCLC and RLIN in the United States, in which cataloging departments from libraries all over the world contributed their records to, and took their records from, a shared database. This development prompted the need for national standards for authority work.
In the United States, the primary organization for maintaining cataloging standards with respect to authority work operates under the aegis of the Library of Congress, and is known as the Name Authority Cooperative Program, or NACO Authority.
There are various standards using different acronyms.
- ISAAR (CPF) – International Standard Archival Authority Record for Corporate Bodies, Persons, and Families. Published by the International Council on Archives
- MARC standards for authority records in machine-readable format.
- Metadata Authority Description Schema (MADS), an XML schema for an authority element set that may be used to provide metadata about agents (people, organizations), events, and terms (topics, geographics, genres, etc.).
- Encoded Archival Context, an XML schema for authority records conforming to ISAAR (CPF)
See also
- Knowledge Organization Systems
- Library classification systems:
- Ontology (information science)
- Simple Knowledge Organization System (SKOS) for representation of thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary.
- Universal Authority File (Gemeinsame Normdatei or GND), authority file by the German National Library
- Virtual International Authority File (VIAF), an aggregation of authority files currently focused on personal and corporate names.
- ORCID (Open Researcher and Contributor ID), a nonproprietary alphanumeric code to uniquely identify scientific and other academic authors. Authors - including Wikipedia editors - may obtain an ORCID by signing up at orcid.org.
- Block, Rick J. 1999. “Authority Control: What It Is and Why It Matters.”, accessed March 30, 2006
- "Why Does a Library Catalog Need Authority Control and What Is". government of Vermont. November 25, 2012.
- "Authority Control at the NMSU Library". New Mexico State University. November 25, 2012.
- "Authority Control in the Card Environment". government of Vermont. November 25, 2012. Retrieved November 25, 2012.
- Kathleen L. Wells of the University of Southern Mississippi Libraries (November 25, 2012). "Got Authorities? Why Authority Control Is Good for Your Library". Tennessee Libraries.
- "auctor". Online Etymology Dictionary. December 7, 2012. Retrieved 2012-12-07. "... Note: root words for both author and authority are words such as auctor or autor and autorite from the 13th century. -- author (n) c.1300, autor "father," from O.Fr. auctor, acteor "author, originator, creator, instigator (12c., Mod.Fr. auteur), from L. auctorem (nom. auctor) ... -- authority (n.) early 13c., autorite "book or quotation that settles an argument," from O.Fr. auctorité "authority, prestige, right, permission, dignity, gravity; the Scriptures" (12c.; Mod.Fr. autorité), ... (see author). ..."
- "authority (control)". Memidex. December 7, 2012. Retrieved 2012-12-07. "Etymology ... autorite "book or quotation that settles an argument", from Old French auctorité..."
- "authority". Merriam-Webster Dictionary. December 7, 2012. Retrieved 2012-12-07. "See "Origin of authority" -- Middle English auctorite, from Anglo-French auctorité, from Latin auctoritat-, auctoritas opinion, decision, power, from auctor First Known Use: 13th century..."
- "Cataloguing authority control policy". National Library of Australia. November 25, 2012. "The primary purpose of authority control is to assist the catalogue user in locating items of interest."
- "Authority Control at LTI". LTI. November 25, 2012.
- "brief guidelines on authority control decision-making". NCSU Libraries. November 25, 2012.
- "Authority Control in Unicorn WorkFlows August 2001". Rutgers University. November 25, 2012. "Why Authority Control?"
- Burger, Robert H. Authority Work: the Creation, Use, Maintenance and Evaluation of Authority Records and Files. Littleton, Colo. : Libraries Unlimited, 1985
- Clack, Doris Hargrett. Authority Control: Principles, Applications, and Instructions. Chicago : American Library Association, 1990.
- Maxwell, Robert L. Maxwell's Guide to Authority Work. Chicago : American Library Association, 2002.
- Calhoun, Karen (June 1998). "A Bird's Eye View of Authority Control in Cataloging". Cornell University Library. Retrieved November 25, 2012.
- Virtual International Authority File records for Princess Diana, retrieved March 12, 2013
- Note: this is the article title as of March 12, 2013
- Mason, Moya K (November 25, 2012), Purpose of Authority Work and Files
- Wynar, BS (1992), Introduction to Cataloguing and Classification (8th ed.), Littleton, CO: Libraries Unlimited.
- Authorities files, Library of Congress; the original record has been abbreviated for clarity.
- Calhoun, Karen, A Bird's Eye View of Authority Control in Cataloging, Cornell University Library.
- Note: See Linda Barnhart's Access Control Records: Prospects and Challenges from the 1996 OCLC conference 'Authority Control in the 21st Century' for more information.
- "NACO Home: NACO (Program for Cooperative Cataloging (PCC), Library of Congress)". Loc.gov. Retrieved 2011-12-18.
- http://www.ica.org/en/node/30230
- "ICArchives : Page d'accueil : Accueil". Ica.org. Retrieved 2011-12-18.
- Library of Congress Network Development and MARC Standards Office. "MARC 21 Format for Authority Data: Table of Contents (Network Development and MARC Standards Office, Library of Congress)". Loc.gov. Retrieved 2011-12-18.
- Note: see orcid.org
| 3.4459 |
A blackboard (UK English) or chalkboard (US English) is a reusable writing surface on which text or drawings are made with sticks of calcium sulfate or calcium carbonate, known, when used for this purpose, as chalk. Blackboards were originally made of smooth, thin sheets of black or dark grey slate stone. Modern versions are often green because the color is considered easier on the eyes.
A blackboard can simply be a piece of board painted with matte dark paint (usually black or dark green). A more modern variation consists of a coiled sheet of plastic drawn across two parallel rollers, which can be scrolled to create additional writing space while saving what has been written. The highest-grade blackboards are made of a rougher version of porcelain-enamelled steel (black, green, blue or sometimes other colours). Porcelain is very hard-wearing, and blackboards made of porcelain usually last 10–20 years in intensive use.
Lecture theatres may contain a number of blackboards in a grid arrangement. The lecturer then moves boards into reach for writing and moves them out of reach afterwards, allowing a large amount of material to be shown simultaneously.
The chalk marks can be easily wiped off with a damp cloth, a sponge or a special blackboard eraser consisting of a block of wood covered by a felt pad. However, chalk marks made on some types of wet blackboard can be difficult to remove. Blackboard manufacturers often advise that a new or newly resurfaced blackboard be completely covered using the side of a stick of chalk and then that chalk brushed off as normal to prepare it for use.
Chalk sticks
Sticks of processed "chalk" are produced especially for use with blackboards in white and also in various colours. These are not actually made from chalk rock but from calcium sulfate in its dihydrate form, gypsum.
Advantages and disadvantages
As compared to whiteboards, blackboards have a variety of advantages:
- Chalk requires no special care; whiteboard markers must be capped or else they dry out.
- Chalk is an order of magnitude cheaper than whiteboard markers for a comparable amount of writing.
- It is easier to draw lines of different weights and thicknesses with chalk than with whiteboard markers.
- Chalk has a mild smell, whereas whiteboard markers often have a pungent odor.
- Chalk writing often provides better contrast than whiteboard markers.
On the other hand, chalk produces dust, the amount depending on the quality of chalk used. Some people find this uncomfortable or may be allergic to it, and according to the American Academy of Allergy, Asthma and Immunology (AAAAI), there are links between chalk dust and allergy and asthma problems. The dust also precludes the use of chalk in areas shared with dust-sensitive equipment such as computers.
The scratching of fingernails on a blackboard, as well as other pointed, especially metal objects against blackboards, produces a sound that is well known for being extremely irritating to most people. Many are averse also to merely the sight or thought of this sort of contact.
Etymology and history
They use black tablets for the children in the schools, and write upon them along the long side, not the broadside, writing with a white material from the left to the right.
The first classroom uses of large blackboards are difficult to date, but they were used for music education and composition in Europe as far back as the sixteenth century.
The term "blackboard" is attested in English from the mid-eighteenth century; the Oxford English Dictionary provides a citation from 1739, to write "with Chalk on a black-Board". The term "chalkboard" was used interchangeably with "blackboard" in the United Kingdom in the early nineteenth century, but by the twentieth century had become primarily restricted to North American English.
The blackboard was introduced into the US education system from Europe in 1801. This occurred at West Point, where George Baron, an English mathematician, used chalk and blackboard in a lecture on September 21. James Pillans has been credited with the invention of coloured chalk (1814): he had a recipe with ground chalk, dyes and porridge.
See also
- Chalkboard gag from The Simpsons
- Interactive whiteboard
- Sidewalk chalk
- Sound of fingernails scraping chalkboard
- WebMD. "Reading, Writing, and Wheezing? Not Necessarily". Asthma Health Center. WebMD. Retrieved Sept. 19, 2000.
- "Full text of "Alberuni's India. An account of the religion, philosophy, literature, geography, chronology, astronomy, customs, laws and astrology of India about A.D. 1030"".
- Owens, Jessie Ann. Composers at Work: The Craft of Musical Composition, 1450-1600. Oxford University Press, 1998.
- Entry for blackboard, n, in the Oxford English Dictionary (Third ed., 2011)
- Entry for chalkboard, n, in the Oxford English Dictionary (Third ed., 2011)
- Stephen E. Ambrose (1 December 1999). Duty, Honor, Country: A History of West Point. JHU Press. p. 19. ISBN 978-0-8018-6293-9. Retrieved 14 February 2013.
- Jo Swinnerton (30 September 2005). The History of Britain Companion. Anova Books. p. 128. ISBN 978-1-86105-914-7. Retrieved 14 February 2013.
| 3.547309 |
A textbook or coursebook is a manual of instruction in any branch of study. Textbooks are produced according to the demands of educational institutions. Although most textbooks are only published in printed format, many are now available as online electronic books and increasingly, although illegally, in scanned format on file sharing networks.
The ancient Greeks wrote texts intended for education. The modern textbook has its roots in the standardization made possible by the printing press. Johannes Gutenberg himself may have printed editions of Ars Minor, a schoolbook on Latin grammar by Aelius Donatus. Early textbooks were used by tutors and teachers, who used the books as instructional aids (e.g., alphabet books), as well as individuals who taught themselves.
The Greek philosopher Socrates (469-399 B.C.) lamented the loss of knowledge because the media of transmission were changing. Before the invention of the Greek alphabet 2,500 years ago, knowledge and stories were recited aloud, much like Homer's epic poems. The new technology of writing meant stories no longer needed to be memorized, a development Socrates feared would weaken the Greeks' mental capacities for memorizing and retelling. (Paradoxically, we know about Socrates' concerns only because they were written down by his student Plato in his famous Dialogues.)
The next revolution for books came with the 15th-century invention of printing with changeable type. The invention is attributed to German metalsmith Johannes Gutenberg, who cast type in molds using a melted metal alloy and constructed a wooden-screw printing press to transfer the image onto paper.
Gutenberg's first and only large-scale printing effort was the now iconic Gutenberg Bible in the 1450s — a Latin translation from the Hebrew Old Testament and the Greek New Testament, copies of which can be viewed on the British Library website www.bl.uk. Gutenberg's invention made mass production of texts possible for the first time. Although the Gutenberg Bible itself was stratospherically expensive, printed books began to spread widely over European trade routes during the next 50 years, and by the 16th century printed books had become more widely accessible and less costly.
Compulsory education and the subsequent growth of schooling in Europe led to the printing of many standardized texts for children. Textbooks have become the primary teaching instrument for most children since the 19th century. Two textbooks of historical significance in United States schooling were the 18th century New England Primer and the 19th century McGuffey Readers.
Technological advances change the way people interact with textbooks. Online and digital materials are making it increasingly easy for students to access materials other than the traditional print textbook. Students now have access to electronic and PDF books, online tutoring systems and video lectures. An example of e-book publishing is Principles of Biology from Nature Publishing.
Most notably, an increasing number of authors are foregoing commercial publishers and offering their textbooks under a creative commons or other open license. The New York Times recently endorsed the use of free, open, digital textbooks in the editorial "That book costs how much?"
The "broken market"
The textbook market does not operate in exactly the same manner as most consumer markets. First, the end consumers (students) do not select the product, and those who do select it (faculty and professors) do not purchase it. Therefore, price is removed from the purchasing decision, giving the producer (publishers) disproportionate market power to set prices high. Similarities are found in the pharmaceutical industry, which sells its wares to doctors, rather than the ultimate end-user (i.e. the patient).
This fundamental difference in the market is often cited as the primary reason that prices are out of control. The term "Broken Market" first appeared in Economist James Koch's analysis of the market commissioned by the Advisory Committee on Student Financial Assistance.
This situation is exacerbated by the lack of competition in the textbook market. Consolidation in the past few decades has reduced the number of major textbook companies from around 30 to just a handful. Consequently, there is less competition than there used to be, and the high cost of starting up keeps new companies from entering.
New editions & the used book market
Students seek relief from rising prices through the purchase of used copies of textbooks, which tend to be less expensive. Most college bookstores offer used copies of textbooks at lower prices. Most bookstores will also buy used copies back from students at the end of a term if the book is going to be re-used at the school. Books that are not being re-used at the school are often purchased by an off-campus wholesaler for 0-30% of the new cost, for distribution to other bookstores where the books will be sold. Textbook companies have countered this by encouraging faculty to assign homework that must be done on the publisher's website. If a student has a new textbook then he or she can use the pass code in the book to register on the site. If the student has purchased a used textbook then he or she must pay money directly to the publisher in order to access the website and complete assigned homework.
Students who look beyond the campus bookstore can typically find lower prices. With the ISBN or title, author and edition, most textbooks can be located through online used book sellers or retailers.
Most leading textbook companies publish a new edition every 3 or 4 years, more frequently in math & science. Harvard economics chair James K. Stock has stated that new editions are often not about significant improvements to the content. "New editions are to a considerable extent simply another tool used by publishers and textbook authors to maintain their revenue stream, that is, to keep up prices." A study conducted by The Student PIRGs found that a new edition costs 12% more than a new copy of the previous edition, and 58% more than a used copy of the previous edition. Textbook publishers maintain these new editions are driven by faculty demand. The Student PIRGs' study found that 76% of faculty said new editions were justified “half of the time or less” and 40% said they were justified “rarely” or “never.” The PIRG study has been criticized by publishers, who argue that the report contains factual inaccuracies regarding the annual average cost of textbooks per student.
The Student PIRGs also point out that recent emphasis on electronic textbooks, or "eTextbooks," does not always save students money. Even though the book costs less up-front, the student will not recover any of the cost through resale.
Another publishing industry practice that has been highly criticized is "bundling," or shrink-wrapping supplemental items into a textbook. Supplemental items range from CD-ROMs and workbooks to online passcodes and bonus material. Students do not always have the option to purchase these items separately, and often the one-time-use supplements destroy the resale value of the textbook.
According to the Student PIRGs, the typical bundled textbook is 10%-50% more than an unbundled textbook, and 65% of professors said they “rarely” or “never” use the bundled items in their courses.
A 2005 Government Accountability Office (GAO) Report found that the production of these supplemental items was the primary cause of rapidly increasing prices:
While publishers, retailers, and wholesalers all play a role in textbook pricing, the primary factor contributing to increases in the price of textbooks has been the increased investment publishers have made in new products to enhance instruction and learning...While wholesalers, retailers, and others do not question the quality of these materials, they have expressed concern that the publishers’ practice of packaging supplements with a textbook to sell as one unit limits the opportunity students have to purchase less expensive used books....If publishers continue to increase these investments, particularly in technology, the cost to produce a textbook is likely to continue to increase in the future.
Bundling has also been used as a means of segmenting the used book market. Each combination of a textbook and supplemental items receives a separate ISBN. A single textbook could therefore have dozens of ISBNs that denote different combinations of supplements packaged with that particular book. When a bookstore attempts to track down used copies of textbooks, they will search for the ISBN the course instructor orders, which will locate only a subset of the copies of the textbook.
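The sketch below illustrates, with invented ISBNs and counts, how bundling can fragment the used market: several ISBNs denote the same underlying text, but a bookstore searching on the single ISBN the instructor ordered sees only a fraction of the used copies in circulation.

```python
# Invented ISBNs for one base textbook sold in several bundles.
BUNDLES = {
    "978-0-00-000001-1": "text only",
    "978-0-00-000002-8": "text + CD-ROM",
    "978-0-00-000003-5": "text + online passcode",
    "978-0-00-000004-2": "text + workbook",
}

# Hypothetical used copies in circulation, keyed by the ISBN they carry.
used_copies = {
    "978-0-00-000001-1": 12,
    "978-0-00-000002-8": 7,
    "978-0-00-000003-5": 9,
    "978-0-00-000004-2": 5,
}

ordered_isbn = "978-0-00-000002-8"      # the combination the instructor chose

found = used_copies.get(ordered_isbn, 0)
total = sum(used_copies.values())
print(f"Bookstore finds {found} of {total} used copies of the same text.")
# -> Bookstore finds 7 of 33 used copies of the same text.
```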
Legislation on the state and federal level seeks to limit the practice of bundling, by requiring publishers to offer all components separately. Publishers have testified in favor of bills including this provision, but only in the case that the provision exempts the loosely defined category of "integrated textbooks." The Federal bill only exempts 3rd party materials in integrated textbooks, however publisher lobbyists have attempted to create a loophole through this definition in state bills.
Price disclosure
Given that the problem of high textbook prices is linked to the "broken" economics of the market, requiring publishers to disclose textbook prices to faculty is a solution pursued by a number of legislatures. By inserting price into sales interactions, this regulation will supposedly make the economic forces operate more normally.
No data suggests that this is in fact true. However, The Student PIRGs have found that publishers actively withhold pricing information from faculty, making it difficult to obtain. Their most recent study found that 77% of faculty say publisher sales representatives do not volunteer prices, and only 40% got an answer when they directly asked. Furthermore, the study found that 23% of faculty rated publisher websites as “informative and easy to use” and less than half said they typically listed the price.
The US Congress passed a law in the 2008 Higher Education Opportunity Act that would require price disclosure. Legislation requiring price disclosure has passed in Connecticut, Washington, Minnesota, Oregon, Arizona, Oklahoma, and Colorado. Publishers are currently supporting price disclosure mandates, though they insist that the "suggested retail price" should be disclosed, rather than the actual price the publisher would get for the book.
Used textbook market
Once a textbook is purchased from a retailer for the first time, there are several ways a student can sell his/her textbooks back at the end of the semester. Students can sell to 1) the college/university bookstore; 2) fellow students; or 3) a number of online Web sites or student swap services.
Campus buyback
As for buyback on a specific campus, faculty decisions largely determine how much a student receives. If a professor chooses to use the same book the following semester, even if it is a custom text, designed specifically for an individual instructor, bookstores often buy the book back. The GAO report found that, generally, if a book is in good condition and will be used on the campus again the next term, bookstores will pay students 50 percent of the original price paid. If the bookstore has not received a faculty order for the book at the end of the term and the edition is still current, they may offer students the wholesale price of the book, which could range from 5 to 35 percent of the new retail price, according to the GAO report.
When students resell their textbooks during campus “buyback” periods, these textbooks are often sold into the national used textbook distribution chain. If a textbook is not going to be used on campus for the next semester of courses then many times the college bookstore will sell that book to a national used book company. The used book company then resells the book to another college bookstore. Finally, that book is sold as used to a student at another college at a price that is typically 75% of the new book price. At each step, a markup is applied to the book to enable the respective companies to continue to operate.
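A small worked example, using the percentages just cited and an assumed $100 new price, traces one copy through the buyback chain; the dollar figure is only for illustration.

```python
new_price = 100.00                 # assumed new retail price

campus_buyback = 0.50 * new_price  # bookstore pays ~50% if the book is re-used
wholesale_low = 0.05 * new_price   # 5-35% if it leaves campus via a wholesaler
wholesale_high = 0.35 * new_price
used_resale = 0.75 * new_price     # typical used price at another college

print(f"student receives on campus buyback:  ${campus_buyback:.2f}")
print(f"student receives via wholesaler:     ${wholesale_low:.2f}-${wholesale_high:.2f}")
print(f"next student pays for the used copy: ${used_resale:.2f}")
```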
Student to student sales
Students can also sell or trade textbooks among themselves. After completing a course, sellers will often seek out members of the next enrolling class, people who are likely to be interested in purchasing the required books. This may be done by posting flyers to advertise the sale of the books or simply soliciting individuals who are shopping in the college bookstore for the same titles. Many larger schools have independent websites set up for the purpose of facilitating such trade. These often operate much like digital classified ads, enabling students to list their items for sale and browse for those they wish to acquire. Also, at the US Air Force Academy, it is possible to e-mail entire specific classes, allowing for an extensive network of textbook sales to exist.
Student online marketplaces
Online marketplaces are one of the two major types of online websites students can use to sell used textbooks. Online marketplaces may have an online auction format or may allow the student to list their books for a fixed price. In either case, the student must create the listing for each book themselves and wait for a buyer to order, making the use of marketplaces a more passive way of selling used textbooks. Unlike campus buyback and online book buyers, students are unlikely to sell all their books to one buyer using online marketplaces, and will likely have to send out multiple books individually.
Online book buyers
Online book buyers buy textbooks, and sometimes other types of books, with the aim of reselling them for a profit. Like online marketplaces, online book buyers operate year-round, giving students the opportunity to sell their books even when campus "buyback" periods are not in effect. Students enter the ISBN numbers of the books they wish to sell and receive a price quote or offer. These online book buyers often offer "free shipping" (which in actuality is built into the offer for the book), and allow students to sell multiple books to the same source. Because online book buyers are buying books for resale, the prices they offer may be lower than students can get on online marketplaces. However, their prices are competitive, and they tend to focus on the convenience of their service. Some even claim that buying used textbooks online and selling them to online book buyers has a lower total cost than even textbook rental services.
Textbook exchanges
In response to escalating textbook prices, limited competition, and to provide a more efficient system to connect buyers and sellers together, online textbook exchanges were developed. Most of today's sites handle buyer and seller payments, and usually deduct a small commission only after the sale is completed.
According to textbook author Henry L. Roediger (and Wadsworth Publishing Company senior editor Vicki Knight), the used textbook market is illegitimate, and entirely to blame for the rising costs of textbooks. As methods of "dealing with this problem", he recommends making previous editions of textbooks obsolete, binding the textbook with other materials, and passing laws to prevent the sale of used books. The concept is not unlike the limited licensing approach for computer software, which places rigid restrictions on resale and reproduction. The intent is to make users understand that the content of any textbook is the intellectual property of the author and/or the publisher, and that as such, subject to copyright. Obviously, this idea is completely opposed to the millennia-old tradition of the sale of used books, and would make that entire industry illegal.
Rental programs
In-store rentals are processed by either using a kiosk and ordering books online with a third party facilitator or renting directly from the store's inventory. Some stores use a hybrid of both methods, opting for in-store selections of the most popular books and the online option for more obscure titles or books they consider too risky to put in the rental system.
Open textbooks
The latest trend in textbooks is "open textbooks." An open textbook is a free, openly licensed textbook offered online by its author(s). According to PIRG, a number of open textbooks already exist, and are being used at schools such as MIT and Harvard. A 2010 study found that open textbooks offer a viable and attractive means to meet faculty and student needs while offering savings of approximately 80% compared to traditional textbook options.
Although the largest question seems to be who is going to pay to write them, several state policies suggest that public investment in open textbooks might make sense. To offer another perspective, any jurisdiction might find itself challenged to find sufficient numbers of credible academics who would be willing to undertake the effort of creating an open textbook without realistic compensation, in order to make such a proposal work.
The other challenge involves the reality of publishing, which is that textbooks with good sales and profitability subsidize the creation and publication of low demand but believed to be necessary textbooks. Subsidies skew markets and the elimination of subsidies is disruptive; in the case of low demand textbooks the possibilities following subsidy removal include any or all of the following: higher retail prices, a switch to open textbooks, a reduction of the number of titles published.
On the other hand, independent open textbook authoring and publishing models are developing. Most notably, the startup publisher Flat World Knowledge already has dozens of college-level open textbooks that are used by more than 900 institutions in 44 countries. Their innovative business model was to offer the open textbook free online, and then sell ancillary products that students are likely to buy if prices are reasonable - print copies, study guides, ePub, .Mobi (Kindle), PDF download, etc. Flat World Knowledge compensates its authors with royalties on these sales. With the generated revenue Flat World Knowledge funded high-quality publishing activities with a goal of making the Flat World financial model sustainable. However, in January, 2013 Flat World Knowledge announced their financial model could no longer sustain their free-to-read options for students. Flat World Knowledge intends to have open textbooks available for the 125 highest-enrolled courses on college campuses within the next few years.
CK-12 FlexBooks are open textbooks designed for United States K-12 courses. They are designed to facilitate conformance to national and individual state textbook standards in the United States, are licensed under a Creative Commons BY-NC-SA license, and are easy to update and customize. CK-12 FlexBooks are free to use online and offer formats suitable for use on portable personal reading devices and computers, both online and offline; formats for both iPad and Kindle are offered. School districts may select a title as is or customize the open textbook to meet local instructional standards. The file may then be accessed electronically or printed using any print on demand service without paying a royalty, saving 80% or more when compared to traditional textbook options.
An example print on demand open textbook title, "College Algebra" by Stitz & Zeager through Lulu, is 608 pages, royalty free, and costs about $20 ordered one at a time (March 2011). (Any print on demand service could be used; this is just an example. School districts could easily negotiate even lower prices for bulk purchases to be printed in their own communities.) Teacher's editions are available for educators and parents.
Titles have been authored by various individuals and organizations and are vetted for quality prior to inclusion in the CK-12 catalog. An effort is underway to map state educational standards correlations, and Stanford University has provided a number of titles in use. CK-12 Foundation is a non-profit organization with a mission to reduce the cost of textbook materials for the K-12 market both in the U.S. and worldwide, using a standards-driven, open-licensed, web-based, collaborative content aggregation model.
Curriki is another modular K-12 content non-profit "empowering educators to deliver and share curricula." Selected Curriki materials are also correlated to U.S. state educational standards. Some Curriki content has been collected into open textbooks and some may be used for modular lessons or special topics.
Wikibooks is a Wikimedia project that aims to provide and promote the editing of open-content textbooks. Wikibooks is for textbooks, annotated texts, instructional guides, and manuals. These materials can be used in a traditional classroom, an accredited or respected institution, a home-school environment, as part of a Wikiversity course or for self-learning. As a general rule only instructional books are suitable for inclusion. Most types of books, both fiction and non-fiction, are not allowed on Wikibooks, unless they are instructional. The use of literary elements, such as allegory or fables as instructional tools can be permitted in some situations.
Although the project does not permit verbatim copies of pre-existing works (those belong on Wikisource), it does permit annotated texts: editions that include an original text and serve as a guide to reading or studying it. Annotated editions of previously published source texts may only be written if the source text is compatible with the project's license.
MIT OpenCourseWare
MIT OpenCourseWare also provides several open textbooks.
International market pricing
Similar to the issue of reimportation of pharmaceuticals into the U.S. market, the GAO report also highlights a parallel phenomenon in textbook distribution. Retailers and publishers have expressed concern about the reimportation of lower-priced textbooks from international locations. Specifically, they cited students' ability to purchase books from online distribution channels outside the United States at lower prices, which may result in a loss of sales for U.S. retailers. Additionally, the availability of lower-priced textbooks through these channels has heightened students' distrust and frustration over textbook prices, and college stores find it difficult to explain why their prices are higher, according to the National Association of College Stores. Retailers and publishers have also been concerned that some U.S. retailers may have engaged in reimportation on a large scale by ordering textbooks for entire courses at lower prices from international distribution channels. While the 1998 Supreme Court decision Quality King v. L'anza protects the reimportation of copyrighted materials under the first-sale doctrine, textbook publishers have still attempted to prevent the U.S. sale of international editions by enforcing contracts that forbid foreign wholesalers from selling to American distributors. Concerned about the effects of differential pricing on college stores, the National Association of College Stores has called on publishers to stop selling textbooks at lower prices outside the United States. For example, some U.S. booksellers arrange for drop-shipments in foreign countries that are then re-shipped to America, where the books can be sold online at used prices (for a "new" unopened book). The authors often receive half-royalties rather than full royalties on such sales, less charges for books returned by bookstores.
Cost distribution
According to the National Association of College Stores, the entire price of a new book is accounted for by expenses, with typically 11.7% going to the author's royalties (or to a committee of editors at the publishing house), 22.7% to the store, and 64.6% to the publisher. The store and publisher shares are slightly higher in Canada. Bookstores and used-book vendors profit from the resale of textbooks on the used market, while publishers earn profits only on sales of new textbooks.
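As a rough illustration of the breakdown above, the shares can be applied to a single book. The percentages are the NACS figures quoted in this section; the $100 list price is just a convenient round number for the sketch, not a figure from the source.

```python
# Rough breakdown of a new-textbook dollar using the NACS percentages above.
# The $100 list price is an arbitrary illustrative figure.
list_price = 100.00

author_share = 0.117 * list_price      # author royalties (or editorial committee)
store_share = 0.227 * list_price       # college store
publisher_share = 0.646 * list_price   # publisher

print(f"Author:    ${author_share:.2f}")
print(f"Store:     ${store_share:.2f}")
print(f"Publisher: ${publisher_share:.2f}")
# The three shares sum to about 99% of the list price; the remainder is
# not itemized in the NACS figures quoted here.
print(f"Total accounted for: {0.117 + 0.227 + 0.646:.1%}")
```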
According to the GAO study published July 2005:
Following closely behind annual increases in tuition and fees at postsecondary institutions, college textbook and supply prices have risen at twice the rate of annual inflation over the last two decades.
Rising at an average of 6 percent each year since academic year 1987-1988, compared with overall average price increases of 3 percent per year, college textbook and supply prices trailed tuition and fee increases, which averaged 7 percent per year. Since December 1986, textbook and supply prices have nearly tripled, increasing by 186 percent, while tuition and fees increased by 240 percent and overall prices grew by 72 percent. While increases in textbook and supply prices have followed increases in tuition and fees, the cost of textbooks and supplies for degree-seeking students as a percentage of tuition and fees varies by the type of institution attended. For example, the average estimated cost of books and supplies per first-time, full-time student for academic year 2003-2004 was $898 at 4-year public institutions, or about 26 percent of the cost of tuition and fees. At 2-year public institutions, where low-income students are more likely to pursue a degree program and tuition and fees are lower, the average estimated cost of books and supplies per first-time, full-time student was $886 in academic year 2003-2004, representing almost three-quarters of the cost of tuition and fees.
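A quick back-of-the-envelope check of the GAO figures above is shown below; the tuition levels are simply implied by the report's own dollar amounts and percentages, not taken from a separate source.

```python
# Implied average tuition-and-fee levels from the GAO's 2003-2004 figures:
# books and supplies were $898 (about 26% of tuition and fees) at 4-year
# public institutions and $886 (almost three-quarters of tuition and fees)
# at 2-year public institutions.
books_4yr, share_4yr = 898, 0.26
books_2yr, share_2yr = 886, 0.75

implied_tuition_4yr = books_4yr / share_4yr
implied_tuition_2yr = books_2yr / share_2yr

print(f"Implied 4-year public tuition and fees: ${implied_tuition_4yr:,.0f}")  # roughly $3,450
print(f"Implied 2-year public tuition and fees: ${implied_tuition_2yr:,.0f}")  # roughly $1,180
```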
According to the 2nd edition of a study by the United States Public Interest Research Group (US PIRG) published in February 2005: "Textbook prices are increasing at more than four times the inflation rate for all finished goods, according to the Bureau of Labor Statistics Producer Price Index. The wholesale prices charged by textbook publishers have jumped 62 percent since 1994, while prices charged for all finished goods increased only 14 percent. Similarly, the prices charged by publishers for general books increased just 19 percent during the same time period."
According to the 2007 edition of the College Board's Trends in College Pricing report, published October 2007: "College costs continue to rise and federal student aid has shown slower growth when adjusted for inflation, while textbooks, as a percentage of total college costs, have remained steady at about 5 percent."
K-12 textbooks
In most U.S. K-12 public schools, a local school board votes on which textbooks to purchase from a selection of books that have been approved by the state Department of Education. Teachers receive the books to give to the students for each subject. Teachers are usually not required to use textbooks, however, and many prefer to use other materials instead. Textbook publishing in the U.S. is a business primarily aimed at large states. This is due to state purchasing controls over the books. The Texas State Board of Education spends in excess of $600 million annually on its central purchasing of textbooks.
High school
In recent years, high school textbooks of United States history have come under increasing criticism. Authors such as Howard Zinn (A People's History of the United States), Gilbert T. Sewall (Textbooks: Where the Curriculum Meets the Child) and James W. Loewen (Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong) claim that U.S. history textbooks contain mythical untruths and omissions, painting a whitewashed picture that bears little resemblance to what most students later learn in universities. Inaccurately retelling history, through textbooks or other literature, has been practiced in many societies, from ancient Rome to the Soviet Union (USSR) and the People's Republic of China. The content of history textbooks is often determined by the political forces of state adoption boards and ideological pressure groups.
Science textbooks have been the source of ongoing debates and have come under scrutiny from several organizations. The presentation or inclusion of controversial scientific material has been debated in several court cases. Poorly designed textbooks have been cited as contributing to declining grades in mathematics and science in the United States and organizations such as the American Academy of Arts and Sciences (AAAS) have criticized the layout, presentation, and amount of material given in textbooks.
Textbooks have also figured in the debate over creation and evolution in public education. The Smith v. Board of School Commissioners of Mobile County case brought forward a debate about how scientific fact is presented in textbooks.
In his book Surely You're Joking, Mr. Feynman!, the late physics Nobel laureate Richard P. Feynman described his experiences on a committee that evaluated science textbooks. In some instances the books used nonsensical examples to illustrate physical phenomena; in one case a publisher, pressed for time, submitted a textbook containing blank pages, which nonetheless received good reviews. Feynman himself was the target of attempted bribery.
Largely in the US, but increasingly in other nations, K-12 mathematics textbooks have reflected the controversies of new math and reform mathematics, which have sought to replace traditional mathematics in what have been called the math wars. Traditional texts, still favored in Asia and elsewhere, teach the same time-tested mathematics that most adults learned. By contrast, "progressive" approaches seek to address problems of social inequity with methods that often incorporate principles of constructivism and discovery. Texts such as TERC and CMP discourage or omit standard methods and concepts such as long division and lowest common denominators. For example, an index entry for multiplying fractions might lead to "devise your own method to multiply fractions which works on these examples", and the formula for the area of a circle might be left as an exercise for the student to derive rather than being stated in the text. By the 2000s, while some districts were still adopting the newer methods, others had abandoned them as unworkable.
Higher education
In the U.S., college and university textbooks are chosen by the professor teaching the course, or by the department as a whole. Students are typically responsible for obtaining their own copies of the books used in their courses, although alternatives to owning textbooks, such as textbook rental services and library reserve copies of texts, are available in some instances.
In some European countries, such as Sweden or Spain, students attending institutions of higher education pay for textbooks themselves, although higher education is free of charge otherwise.
With higher education costs on the rise, many students have become sensitive to every aspect of college pricing, including textbooks, which in many cases amount to one tenth of tuition costs. The 2005 Government Accountability Office report on college textbooks found that textbook and supply prices had risen at twice the rate of inflation over the preceding two decades. A 2005 PIRG study found that textbooks cost students $900 per year and that prices had increased at four times the rate of inflation over the previous decade. A June 2007 Advisory Committee on Student Financial Assistance (ACSFA) report, "Turn the Page," reported that the average U.S. student spends $700–$1,000 per year on textbooks.
While many groups have assigned blame to publishers, bookstores or faculty, the ACSFA also found that assigning blame to any one party—faculty, colleges, bookstores or publishers—for current textbook costs is unproductive and without merit. The report called on all parties within the industry to work together to find productive solutions, which included a movement toward open textbooks and other lower-cost digital solutions.
Textbook prices are considerably higher in law school, where students ordinarily pay close to $200 for casebooks consisting of cases available free online.
Textbook bias on controversial topics
In history, science, current events, and political textbooks, the writer may be biased toward one position or another. Topics such as the actions of a country, presidential actions, and scientific theories are common areas of potential bias.
See also
- Casebook - A special type of textbook used in law schools in the United States.
- Japanese textbook controversy
- Pakistani textbooks controversy
- Kanawha County textbook controversy
- Sourcebook – collection of texts, often used in social sciences and humanities in the United States
- Wikibooks - A sister project to Wikipedia whose goal is to create textbooks.
- Workbook - Usually filled with practice problems, where the answers can be written directly in the book.
- Problem book - A textbook, usually graduate level, organized as a series of problems and full solutions.
- Open textbook - More information about open textbook options.
- http://wondermark.com/socrates-vs-writing/ True Stuff: Socrates vs. the Written Word, January 27th, 2011. By David Malki
- Marcia Clemmitt, "Learning Online Literacy," in "Reading Crisis?" CQ Researcher, Feb. 22, 2008, pp. 169-192.
- British Library, “Treasures in Full: Gutenberg Bible,” www.bl.uk/treasures/gutenberg/background.html.
- Koch, James P. "An Economic Analysis of Textbook Prices and the Textbook Market", 2006-09. Retrieved on 2012-06-12. (Alternative location (PDF))
- Rose, Marla Matzer. City at the head of the class: Consolidation, talent pool have made Columbus a hotbed for educational publishers. August 5, 2007. Retrieved 2/14/09. Archived from the original on 23 May 2011.
- D'Gama, Alissa and Benjamin Jaffe. "Professors Find Different Uses for Textbook Profits." The Harvard Crimson, 4 March 2008. Retrieved on 7 October 2011.
- Rip-off 101: How the Current Practices of the Textbook Industry Drive Up the Cost of College Textbooks The Student PIRGs (2004)
- Capriccioso, Rob. Throwing Down the Book. Inside Higher Ed, August 29, 2006. Retrieved 2/14/09.
- Allen, Nicole. Course Correction: How Digital Textbooks Are Off Track and How to Set Them Straight. The Student PIRGs (2008)
- Required Reading: A Look at the Worst Publishing Tactics at Work, The Student PIRGs (2006)
- "College Textbooks: Enhanced Offerings Appear to Drive Recent Price Increases." U.S. Government Accountability Office, Washington, DC, 2005. Abstract. Retrieved 7 October 2011.
- Analysis of Textbook Affordability Provisions in H.R. 4137, The Student PIRGs
- "Higher Education Opportunity Act." H.R.4137, U.S. House of Representatives, 110th Congress (2007-2008.) Public Law No. 110-315. Retrieved 7 October 2011.
- HB 2048. Missouri House of Representatives, 28 August 2008. Retrieved 7 October 2011.
- Summarized History for Bill Number SB08-073. Colorado General Assembly, 2008. Last updated 04 August 2008. Retrieved 07 October 2011.
- Zomer, Saffron. Exposing the Textbook Industry, The Student PIRGs (2007)
- Washington Governor Signs College Textbook Transparency Act, The Student PIRGs (Press Release)
- See PIRG's Catalog of Open Textbooks for examples of open textbooks
- A Cover to Cover Solution by Nicole Allen of the Student PIRGs. 2010.
- Flat World Knowledge President Eric Frank Addresses Oregon Legislators on Solving Textbook Affordability. Pressitt. February 21, 2011.
- Open-source textbook co. Flat World goes back to school with 40,000 new customers - Venture Beat 8/20/09
- 150,000 College Students Save $12 Million Using Flat World Knowledge Open Textbooks. Marketwire. August 23, 2010.
- Flat World Knowledge: Open College Textbooks by Sanford Forte. Opensource.com. February 23, 2010.
- Organizational Behavior v1.1 by Talya Bauer & Berrin Erdogan. Irvington, NY: Flat World Knowledge. 2010. (Free online open textbook format sample - PDF view)
- Introduction to Psychology by Charles Stangor. Irvington, NY: Flat World Knowledge. 2010. (Free online open textbook format sample - web view)
- See Flat World Knowledge's website
- Flat World Knowledge Website.
- Flat World Knowledge gets $15 million in Funding. Publishers Weekly. January 20, 2011.
- CK-12 FlexBooks. Homepage.
- Carl Stitz/Jeff Zeager on Ohio Textbook HQ 2010.
- CK-12 - Standards Correlations United States.
- Human Biology - Genetics CK-12 FlexBook by The Program in Human Biology, Stanford University. (sample of free web access format)
- About CK-12 Foundation
- Curriki.org Homepage.
- Lewin, Tamar (21 October 2003). "Students Find $100 Textbooks Cost $50, Purchased Overseas". The New York Times. Retrieved 24 September 2009.
- "Testimony of Marc L. Fleischaker, Counsel, National Association of College Stores". Hearing on "Are College Textbooks Priced Fairly?". U.S. House of Representatives, Committee on Education and the Workforce, Subcommittee on 21st Century Competitiveness. 20 July 2004. Archived from the original on 7 October 2011. Retrieved 24 September 2009.
- Rip-off 101: Second Edition, The Student PIRGs (2005)
Further reading
- Slatalla, Michelle (August 30, 2007), "Knowledge Is Priceless but Textbooks Are Not", New York Times.
Food vs. fuel
Food vs. fuel is the dilemma regarding the risk of diverting farmland or crops to biofuel production to the detriment of the global food supply. The "food vs. fuel" or "food or fuel" debate is international in scope, with valid arguments on all sides of the issue. There is disagreement about how significant the issue is, what is causing it, and what can or should be done about it.
Biofuel production has increased in recent years. Some commodities, such as maize (corn), sugar cane, and vegetable oil, can be used as food, as feed, or to make biofuels. For example, since 2006 a portion of U.S. farmland formerly used for other crops has been used to grow corn for biofuels, and a growing share of the corn crop has gone to ethanol production, reaching 25% in 2007. A major debate exists over the extent to which biofuel policies have contributed to high agricultural price levels and volatility. A study for the International Centre for Trade and Sustainable Development found that market-driven expansion of ethanol in the US increased maize prices by 21 percent in 2009, in comparison with what prices would have been had ethanol production been frozen at 2004 levels. Lester R. Brown has argued that, since converting the entire US grain harvest would produce only 16% of its auto fuel needs, energy markets are effectively placed in competition with food markets for scarce arable land, resulting in higher food prices. Substantial R&D effort is currently directed at producing second generation biofuels from non-food crops, crop residues, and waste. Second generation biofuels could thus potentially combine farming for food and fuel, and electricity could be generated simultaneously, which could benefit developing countries and rural areas in developed countries. With global demand for biofuels rising due to the oil price increases since 2003 and the desire to reduce oil dependency and greenhouse gas emissions from transportation, there is also fear that natural habitats will be destroyed by conversion to farmland. Environmental groups have raised concerns about this trade-off for several years, but the debate reached a global scale with the 2007–2008 world food price crisis. On the other hand, several studies show that biofuel production can be increased significantly without additional acreage, suggesting that the crisis stems more from food scarcity itself than from land diverted to fuel crops.
Brazil has been considered to have the world's first sustainable biofuels economy, and its government claims that Brazil's sugar cane based ethanol industry has not contributed to the 2008 food crisis. A World Bank policy research working paper released in July 2008 concluded that "...large increases in biofuels production in the United States and Europe are the main reason behind the steep rise in global food prices", and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher". However, a 2010 World Bank study concluded that the earlier study may have overestimated the contribution of biofuel production, finding that "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called "financialisation of commodities") may have been partly responsible for the 2007/08 spike." A 2008 independent study by the OECD likewise found that the impact of biofuels on food prices was much smaller.
Food price inflation
From 1974 to 2005 real food prices (adjusted for inflation) dropped by 75%. Food commodity prices were relatively stable after reaching lows in 2000 and 2001, so the recent rapid food price increases are considered extraordinary. A World Bank policy research working paper published in July 2008 found that the increase in food commodity prices was led by grains, with sharp price increases in 2005 despite record crops worldwide. From January 2005 until June 2008, maize prices almost tripled, wheat increased 127 percent, and rice rose 170 percent. The increase in grain prices was followed by increases in fat and oil prices in mid-2006. On the other hand, the study found that sugar cane production increased rapidly enough to keep sugar price increases small, except in 2005 and early 2006. The paper concluded that biofuels produced from grains, in combination with other related factors, raised food prices by 70 to 75 percent, but that ethanol produced from sugar cane has not contributed significantly to the recent increase in food commodity prices.
An economic assessment report published by the OECD in July 2008 found that "...the impact of current biofuel policies on world crop prices, largely through increased demand for cereals and vegetable oils, is significant but should not be overestimated. Current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years."
Corn is used to make ethanol, and corn prices went up by a factor of three in less than three years (measured in US dollars). Reports in 2007 linked stories as diverse as food riots in Mexico, due to the rising price of corn for tortillas, and reduced profits at Heineken, the large international brewer, to the increasing use of corn (maize) grown in the US Midwest for ethanol production. (In the case of beer, barley acreage was cut to increase corn production; barley is not currently used to produce ethanol.) Wheat prices roughly tripled over three years, and soybean prices doubled over two years (both measured in US dollars).
As corn is commonly used as feed for livestock, higher corn prices lead to higher prices in animal source foods. Vegetable oil is used to make biodiesel and has about doubled in price in the last couple years. The price is roughly tracking crude oil prices. The 2007–2008 world food price crisis is blamed partly on the increased demand for biofuels. During the same period rice prices went up by a factor of 3 even though rice is not directly used in biofuels.
The USDA expected the 2008/2009 wheat season to produce a record crop, 8% higher than the previous year, and also expected a record rice crop. Wheat prices dropped from a high of over $12/bushel to under $8/bushel by May 2008, and rice prices have also dropped from their highs.
According to a 2008 report from the World Bank the production of biofuel pushed food prices up. These conclusions were supported by the Union of Concerned Scientists in their September 2008 newsletter in which they remarked that the World Bank analysis "contradicts U.S. Secretary of Agriculture Ed Schaffer's assertion that biofuels account for only a small percentage of rising food prices."
According to the October Consumer Price Index released November 19, 2008, food prices continued to rise in October 2008 and were 6.3 percent higher than in October 2007. Fuel costs, by contrast, dropped by nearly 60 percent from July 2008.
Proposed causes
Ethanol fuel as an oxygenate additive
The demand for ethanol fuel produced from field corn was spurred in the U.S. by the discovery that methyl tertiary butyl ether (MTBE) was contaminating groundwater. MTBE use as an oxygenate additive was widespread due to mandates of the 1990 Clean Air Act amendments to reduce carbon monoxide emissions. As a result, by 2006 MTBE use in gasoline had been banned in almost 20 states. There was also concern that widespread and costly litigation might be brought against U.S. gasoline suppliers, and a 2005 decision refusing legal protection for MTBE opened a new market for ethanol fuel, MTBE's primary substitute. At a time when corn prices were around US$2 a bushel, corn growers recognized the potential of this new market and delivered accordingly. This demand shift took place at a time when oil prices were already rising significantly.
Other factors
That food prices rose at the same time as fuel prices is not surprising and should not be blamed entirely on biofuels. Energy is a significant cost in fertilizer, farming, and food distribution. China and other countries have also significantly increased their imports as their economies have grown. Sugar is one of the main feedstocks for ethanol, and sugar prices are down from two years earlier. Part of the increase in international food commodity prices measured in US dollars is due to the dollar's devaluation, and protectionism is also an important contributor to price increases. In addition, 36% of world grain goes to feed animals rather than people.
Over long periods, population growth and climate change could push food prices up. However, these factors have existed for many years, while food prices jumped only in the last three years, so their contribution to the current problem is minimal.
Government regulations of food and fuel markets
France, Germany, the United Kingdom and the United States governments have supported biofuels with tax breaks, mandated use, and subsidies. These policies have the unintended consequence of diverting resources from food production and leading to surging food prices and the potential destruction of natural habitats.
Fuel for agricultural use is often exempt from fuel taxes (farmers get duty-free petrol or diesel). Biofuels may receive subsidies and pay low or no retail fuel taxes, while competing against retail gasoline and diesel prices that include substantial taxes. The net result is that a farmer can use more than a gallon of fuel to make a gallon of biofuel and still make a profit; some argue that this is a harmful distortion of the market. Thousands of scholarly papers have analyzed how much energy goes into making ethanol from corn and how it compares with the energy contained in the ethanol.
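A minimal sketch of the tax distortion described above is shown below. All of the numbers (the duty-free farm fuel price, the taxed retail price, and the 1.2-gallon input figure) are hypothetical assumptions chosen only to illustrate the mechanism, not data from the source.

```python
# Hypothetical illustration of the farm-fuel tax distortion described above.
# All numbers here are assumptions chosen only to show the mechanism.
untaxed_farm_fuel = 2.50      # $/gal paid by the farmer (no fuel duty) - assumed
taxed_retail_fuel = 4.00      # $/gal tax-inclusive retail price - assumed
fuel_in_per_gal_out = 1.2     # gallons of fuel consumed per gallon of biofuel - assumed

input_cost = fuel_in_per_gal_out * untaxed_farm_fuel   # untaxed fuel going in
revenue = taxed_retail_fuel                            # tax-inclusive fuel coming out

print(f"Fuel cost per gallon of biofuel: ${input_cost:.2f}")
print(f"Revenue per gallon of biofuel:   ${revenue:.2f}")
print(f"Margin before other costs:       ${revenue - input_cost:.2f}")
# Even though more than a gallon of fuel goes in per gallon out,
# the tax wedge alone can leave a positive margin.
```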
A World Bank policy research working paper concluded that food prices rose by 35 to 40 percent between 2002 and 2008, of which 70 to 75 percent is attributable to biofuels. The "month-by-month" five-year analysis disputes that increases in global grain consumption and droughts were responsible for significant price increases, reporting that these had only a marginal impact. Instead, the report argues that the EU and US drive for biofuels has had by far the biggest impact on food supply and prices, as increased production of biofuels in the US and EU was supported by subsidies and tariffs on imports; without these policies, it considers, price increases would have been smaller. The research also concluded that Brazil's sugar cane based ethanol has not raised sugar prices significantly, and it recommends removing tariffs on ethanol imports by both the US and EU to allow more efficient producers such as Brazil and other developing countries, including many African countries, to produce ethanol profitably for export to meet the mandates in the EU and the US.
An economic assessment published by the OECD in July 2008 agrees with the World Bank report's recommendations regarding the negative effects of subsidies and import tariffs, but found that the estimated impact of biofuels on food prices is much smaller. The OECD study found that trade restrictions, mainly import tariffs, protect the domestic industry from foreign competitors but impose a cost burden on domestic biofuel users and limit alternative suppliers. The report is also critical of the limited reduction of GHG emissions achieved from biofuels based on feedstocks used in Europe and North America, finding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8% by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80% compared to fossil fuels. The assessment calls for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs.
Oil price increases
Oil price increases since 2003 resulted in increased demand for biofuels. Transforming vegetable oil into biodiesel is not very hard or costly, so there is a profitable arbitrage opportunity whenever vegetable oil is much cheaper than diesel. Since diesel is also made from crude oil, vegetable oil prices are partially linked to crude oil prices. Farmers can switch to growing oil crops if those are more profitable than food crops, so all food prices are linked to vegetable oil prices and, in turn, to crude oil prices. A World Bank study concluded that oil prices and a weak dollar explain 25-30% of the total price rise between January 2002 and June 2008.
Demand for oil is outstripping the supply of oil and oil depletion is expected to cause crude oil prices to go up over the next 50 years. Record oil prices are inflating food prices worldwide, including those crops that have no relation to biofuels, such as rice and fish.
In Germany and Canada it is now much cheaper to heat a house by burning grain than by using fuel derived from crude oil. With oil at $120/barrel a savings of a factor of 3 on heating costs is possible. When crude oil was at $25/barrel there was no economic incentive to switch to a grain fed heater.
US government policy
Some argue that the US government policy of encouraging ethanol from corn is the main cause of food price increases. US federal government ethanol subsidies total $7 billion per year, or $1.90 per gallon. Since ethanol provides only 55% as much energy per gallon as gasoline, the subsidy works out to about $3.45 per gasoline-equivalent gallon. Corn is used to feed chickens, cows, and pigs, so higher corn prices lead to higher prices for chicken, beef, pork, milk, cheese, and other animal products.
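The per-gallon figures quoted above imply the following gasoline-equivalent subsidy; this sketch uses only the numbers given in the paragraph.

```python
# Gasoline-equivalent cost of the ethanol subsidy, using the figures above.
subsidy_per_gal_ethanol = 1.90      # $ per gallon of ethanol
energy_ratio = 0.55                 # ethanol energy content relative to gasoline

subsidy_per_gasoline_equiv_gal = subsidy_per_gal_ethanol / energy_ratio
print(f"Subsidy per gasoline-equivalent gallon: ${subsidy_per_gasoline_equiv_gal:.2f}")
# Roughly $3.45, matching the figure cited in the text.
```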
U.S. Senators introduced the BioFuels Security Act in 2006. "It's time for Congress to realize what farmers in America's heartland have known all along - that we have the capacity and ingenuity to decrease our dependence on foreign oil by growing our own fuel," said U.S. Senator for Illinois Barack Obama.
Two-thirds of U.S. oil consumption is due to the transportation sector. The Energy Independence and Security Act of 2007 has a significant impact on U.S. Energy Policy. With the high profitability of growing corn, more and more farmers switch to growing corn until the profitability of other crops goes up to match that of corn. So the ethanol/corn subsidies drive up the prices of other farm crops.
The US, an important exporter of food stocks, will convert 18% of its grain output to ethanol in 2008. Across the US, 25% of the whole corn crop went to ethanol in 2007, and the percentage of corn going to biofuel is expected to rise.
Since 2004, a US subsidy has been paid to companies that blend biofuel and regular fuel, while the European biofuel subsidy is paid at the point of sale. Companies import biofuel to the US, blend in 1% or even 0.1% regular fuel, and then ship the blended fuel to Europe, where it can receive a second subsidy. These blends are called B99 or B99.9 fuel, and the practice is called "splash and dash". The imported fuel may even come from Europe to the US, receive 0.1% regular fuel, and go back to Europe. For B99.9 fuel the US blender receives a subsidy of $0.999 per gallon. European biodiesel producers have urged the EU to impose punitive duties on these subsidized imports, and US lawmakers are also looking at closing the loophole.
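A short sketch of the "splash and dash" arithmetic follows. It assumes the US blender credit is $1.00 per gallon of biodiesel in the blend, which is consistent with the $0.999 figure quoted above for B99.9 but is stated here as an assumption rather than taken from the source.

```python
# "Splash and dash" blending credit, assuming a $1.00/gal credit on the
# biodiesel portion of a blend (consistent with the $0.999 figure for B99.9).
credit_per_gal_biodiesel = 1.00   # assumed credit rate

def us_blender_credit(biodiesel_fraction):
    """Credit per gallon of finished blend for a given biodiesel fraction."""
    return credit_per_gal_biodiesel * biodiesel_fraction

for label, fraction in [("B99", 0.99), ("B99.9", 0.999)]:
    print(f"{label}: ${us_blender_credit(fraction):.3f} per gallon of blend")
# A cargo blended to B99.9 and re-exported could then also claim the
# separate European subsidy at the point of sale, as described above.
```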
Proposed action
Freeze on first generation biofuel production
Environmental campaigner George Monbiot has argued for a five-year freeze on biofuels while their impact on poor communities and the environment is assessed. A suggested problem with Monbiot's approach is that economic drivers may be required to push through the development of more sustainable second-generation biofuel processes, which could stall if biofuel production decreases. Some environmentalists are also suspicious that second-generation biofuels may not solve the potential clash with food, since they too use significant agricultural resources such as water.
A UN report on biofuel also raises issues regarding food security and biofuel production. Jean Ziegler, then UN Special Rapporteur on the right to food, concluded that while the arguments for biofuels in terms of energy efficiency and climate change are legitimate, the effects for the world's hungry of transforming wheat and maize crops into biofuel are "absolutely catastrophic," and he termed such use of arable land a "crime against humanity." Ziegler also called for a five-year moratorium on biofuel production. His proposal for a five-year ban was rejected by U.N. Secretary-General Ban Ki-moon, who called for a comprehensive review of the policies on biofuels and said that "just criticising biofuel may not be a good solution".
Food surpluses exist in many developed countries. For example, the UK wheat surplus was around 2 million tonnes in 2005. This surplus alone could produce sufficient bioethanol to replace around 2.5% of the UK's petroleum consumption, without requiring any increase in wheat cultivation or reduction in food supply or exports. However, above a few percent, there would be direct competition between first generation biofuel production and food production. This is one reason why many view second generation biofuels as increasingly important.
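The "around 2.5%" claim above is roughly consistent with a simple estimate, sketched below. The ethanol yield per tonne of wheat, the ethanol-to-petrol energy ratio, and the UK petrol consumption figure are all rough assumptions chosen for illustration, not data from the source.

```python
# Rough consistency check of the "around 2.5%" claim above.
# The yield, energy ratio and UK petrol consumption are illustrative
# assumptions, not figures from the source.
wheat_surplus_tonnes = 2_000_000       # UK wheat surplus cited in the text
ethanol_yield_l_per_tonne = 370        # assumed bioethanol yield from wheat
ethanol_energy_ratio = 0.67            # assumed energy content vs petrol
uk_petrol_consumption_l = 20e9         # assumed annual UK petrol use, liters

ethanol_liters = wheat_surplus_tonnes * ethanol_yield_l_per_tonne
petrol_equiv_liters = ethanol_liters * ethanol_energy_ratio
share = petrol_equiv_liters / uk_petrol_consumption_l
print(f"Share of UK petrol replaced: {share:.1%}")   # about 2.5% under these assumptions
```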
Non-food crops for biofuel
There are different types of biofuels and different feedstocks for them, and it has been proposed that only non-food crops be used for biofuel. This avoids direct competition for commodities like corn and edible vegetable oil. However, as long as farmers are able to derive a greater profit by switching to biofuels, they will. The law of supply and demand predicts that if fewer farmers are producing food the price of food will rise.
Second generation biofuels use lignocellulosic raw material such as forest residues (sometimes referred to as brown waste and black liquor from Kraft process or sulfite process pulp mills). Third generation biofuels (biofuel from algae) use non-edible raw materials sources that can be used for biodiesel and bioethanol.
Soybean oil, which only represents half of the domestic raw materials available for biodiesel production in the United States, is one of many raw materials that can be used to produce biodiesel.
Non-food crops like Camelina, Jatropha, seashore mallow and mustard, used for biodiesel, can thrive on marginal agricultural land where many trees and crops won't grow, or would produce only slow growth yields. Camelina is virtually 100 percent efficient. It can be harvested and crushed for oil and the remaining parts can be used to produce high quality omega-3 rich animal feed, fiberboard, and glycerin. Camelina does not take away from land currently being utilized for food production. Most camelina acres are grown in areas that were previously not utilized for farming. For example, areas that receive limited rainfall that can not sustain corn or soybeans without the addition of irrigation can grow camelina and add to their profitability.
Jatropha cultivation provides benefits for local communities:
Cultivation and fruit picking by hand is labour-intensive and needs around one person per hectare. In parts of rural India and Africa this provides much-needed jobs - about 200,000 people worldwide now find employment through jatropha. Moreover, villagers often find that they can grow other crops in the shade of the trees. Their communities will avoid importing expensive diesel and there will be some for export too.
NBB’s Feedstock Development program is addressing production of arid variety crops, algae, waste greases, and other feedstocks on the horizon to expand available material for biodiesel in a sustainable manner.
Cellulosic ethanol is a type of biofuel produced from lignocellulose, a material that comprises much of the mass of plants. Corn stover, switchgrass, miscanthus and woodchip are some of the more popular non-edible cellulosic materials for ethanol production. Commercial investment in such second-generation biofuels began in 2006/2007, and much of this investment went beyond pilot-scale plants. Cellulosic ethanol commercialization is moving forward rapidly. The world’s first commercial wood-to-ethanol plant began operation in Japan in 2007, with a capacity of 1.4 million liters/year. The first wood-to-ethanol plant in the United States is planned for 2008 with an initial output of 75 million liters/year.
Biofuel from food byproducts and coproducts
Biofuels can also be produced from the waste byproducts of food-based agriculture (such as citrus peels or used vegetable oil) to manufacture an environmentally sustainable fuel supply, and reduce waste disposal cost.
A growing percentage of U.S. biodiesel production is made from waste vegetable oil (recycled restaurant oils) and greases.
Collocation of a waste generator with a waste-to-ethanol plant can reduce the waste producer's operating cost, while creating a more-profitable ethanol production business. This innovative collocation concept is sometimes called holistic systems engineering. Collocation disposal elimination may be one of the few cost-effective, environmentally sound, biofuel strategies, but its scalability is limited by availability of appropriate waste generation sources. For example, millions of tons of wet Florida-and-California citrus peels cannot supply billions of gallons of biofuels. Due to the higher cost of transporting ethanol, it is a local partial solution, at best.
More firms are investigating the potential of fractionating technology to remove corn germ (the portion of the corn kernel that contains oil) prior to the ethanol process. Furthermore, some ethanol plants have already announced their intention to employ technology to remove the remaining vegetable oil from dried distillers grains, a coproduct of the ethanol process. Both of these technologies would add to the biodiesel raw material supply.
Biofuel subsidies and tariffs
Some have claimed that ending subsidies and tariffs would enable sustainable development of a global biofuels market. Taxing biofuel imports while letting petroleum in duty-free does not fit with the goal of encouraging biofuels; ending mandates, subsidies, and tariffs would end the distortions that current policy is causing. Some US senators advocate reducing subsidies for corn based ethanol, and the US ethanol tariff and some US ethanol subsidies are currently set to expire over the next couple of years. The EU is rethinking its biofuels directive due to environmental and social concerns. On January 18, 2008 the UK House of Commons Environmental Audit Committee raised similar concerns and called for a moratorium on biofuel targets. Germany ended its subsidy of biodiesel on January 1, 2008 and started taxing it.
Reduce farmland reserves and set asides
To avoid overproduction and to prop up farmgate prices for agricultural commodities, some countries have farm subsidy programs that encourage farmers not to produce and to leave productive acres fallow. The 2008 crisis prompted proposals to bring some of this reserve farmland back into use.
In Europe about 8% of the farmland is in set aside programs. Farmers have proposed freeing up all of this for farming. Two-thirds of the farmers who were on these programs in the UK are not renewing when their term expires.
Sustainable production of biofuels
Second generation biofuels are now being produced from the cellulose in dedicated energy crops (such as perennial grasses), forestry materials, the co-products from food production, and domestic vegetable waste. Advances in the conversion processes will almost certainly improve the sustainability of biofuels, through better efficiencies and reduced environmental impact of producing biofuels, from both existing food crops and from cellulosic sources.
Lord Ron Oxburgh suggests that responsible production of biofuels has several advantages:
Produced responsibly they are a sustainable energy source that need not divert any land from growing food nor damage the environment; they can also help solve the problems of the waste generated by Western society; and they can create jobs for the poor where previously were none. Produced irresponsibly, they at best offer no climate benefit and, at worst, have detrimental social and environmental consequences. In other words, biofuels are pretty much like any other product.
Far from creating food shortages, responsible production and distribution of biofuels represents the best opportunity for sustainable economic prospects in Africa, Latin America and impoverished Asia. Biofuels offer the prospect of real market competition and oil price moderation. Crude oil would be trading 15 per cent higher and gasoline would be as much as 25 per cent more expensive, if it were not for biofuels. A healthy supply of alternative energy sources will help to combat gasoline price spikes.
Continuation of the status quo
An additional policy option is to continue the current government incentives for these crops in order to evaluate their effects on food prices over a longer period, given the relatively recent onset of the biofuel production industry. Because the industry is so new, it can be expected that, as in other startup industries, techniques and alternatives will be developed quickly if there is sufficient demand for alternative fuels and biofuels. A shock to food prices could then produce a rapid move toward some of the non-food biofuels listed among the other policy alternatives above.
Impact on developing countries
Demand for fuel in rich countries is now competing against demand for food in poor countries. The increase in world grain consumption in 2006 was due to increased consumption for fuel, not for human consumption. The grain required to fill a 25 US gallon (95 L) fuel tank with ethanol would feed one person for a year.
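The "one person for a year" claim can be sanity-checked with rough figures, as sketched below. The ethanol yield per bushel, the bushel weight, the caloric content of corn and the daily calorie requirement are common approximations assumed here for illustration, not figures from the source.

```python
# Rough sanity check of the "feeds one person for a year" claim.
# All conversion factors are approximate and assumed for illustration.
tank_gallons = 25
ethanol_per_bushel = 2.8        # gallons of ethanol per bushel of corn (approx.)
kg_per_bushel = 25.4            # weight of a bushel of corn (approx.)
kcal_per_kg_corn = 3600         # approximate caloric content of corn
kcal_per_person_day = 2300      # approximate daily requirement

bushels = tank_gallons / ethanol_per_bushel
corn_kg = bushels * kg_per_bushel
total_kcal = corn_kg * kcal_per_kg_corn
days_fed = total_kcal / kcal_per_person_day

print(f"Corn required: {corn_kg:.0f} kg ({bushels:.1f} bushels)")
print(f"Days of food energy: {days_fed:.0f}")   # on the order of a year
```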
Several factors combine to make recent grain and oilseed price increases impact poor countries more:
- Poor people buy more grains (e.g. wheat), and are more exposed to grain price changes.
- Poor people spend a higher portion of their income on food, so increasing food prices influence them more.
- Aid organizations which buy food and send it to poor countries see more need when prices go up but are able to buy less food on the same budget.
The impact is not all negative. The Food and Agriculture Organization (FAO) recognizes the potential opportunities that the growing biofuel market offers to small farmers and aquaculturers around the world and has recommended small-scale financing to help farmers in poor countries produce local biofuel.
On the other hand, poor countries that do substantial farming have increased profits due to biofuels. If vegetable oil prices double, the profit margin could more than double. In the past rich countries have been dumping subsidized grains at below cost prices into poor countries and hurting the local farming industries. With biofuels using grains the rich countries no longer have grain surpluses to get rid of. Farming in poor countries is seeing healthier profit margins and expanding.
Interviews with local peasants in southern Ecuador provide strong anecdotal evidence that the high price of corn is encouraging the burning of tropical forests. The destruction of tropical forests now accounts for 20% of all greenhouse gas emissions.
National Corn Growers Association
US government subsidies for making ethanol from corn have been attacked as the main cause of the food vs. fuel problem. In its defense, the National Corn Growers Association (NCGA) has published its views on the issue, considering the "food vs. fuel" argument a fallacy that is "fraught with misguided logic, hyperbole and scare tactics."
Claims made by the NCGA include:
- Corn growers have been and will continue to produce enough corn so that supply and demand meet and there is no shortage. Farmers make their planting decisions based on signals from the marketplace. If demand for corn is high and projected revenue-per-acre is strong relative to other crops, farmers will plant more corn. In 2007 US farmers planted 92,900,000 acres (376,000 km2) with corn, 19% more acres than they did in 2006.
- The U.S. has doubled corn yields over the last 40 years and expects to double them again in the next 20 years. With twice as much corn from each acre, corn can be put to new uses without taking food from the hungry or causing deforestation.
- US consumers buy products like corn flakes in which the cost of the corn per box is around 5 cents; most of the cost is packaging, advertising, shipping, and so on. Only about 19% of US retail food prices can be attributed to the actual cost of food inputs like grains and oilseeds, so if the price of a bushel of corn goes up, there may be no noticeable impact on US retail food prices (see the sketch after this list). The US retail food price index has gone up only a few percent per year and is expected to continue to show very small increases.
- Most of the corn produced in the US is field corn, not sweet corn, and not digestible by humans in its raw form. Most corn is used for livestock feed and not human food, even the portion that is exported.
- Only the starch portion of corn kernels is converted to ethanol. The rest (protein, fat, vitamins and minerals) is passed through to the feed coproducts or human food ingredients.
- One of the most significant and immediate benefits of higher grain prices is a dramatic reduction in federal farm support payments. According to the U.S. Department of Agriculture, corn farmers received $8.8 billion in government support in 2006. Because of higher corn prices, payments are expected to drop to $2.1 billion in 2007, a 76 percent reduction.
- While the EROEI and economics of corn-based ethanol are somewhat weak, it paves the way for cellulosic ethanol, which should have a much better EROEI and economics.
- While basic nourishment is clearly important, fundamental societal needs of energy, mobility, and energy security matter too. If farmers' crops can help their country in these areas as well, it seems right to do so.
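To illustrate the pass-through arithmetic in the corn-flakes point above: the 5-cent corn content per box is from the NCGA claim, while the $4 retail price of a box is an assumed illustrative figure, not from the source.

```python
# Pass-through of a corn price increase to a box of corn flakes.
# The 5-cent corn content comes from the NCGA claim above; the $4 retail
# price of the box is an assumed illustrative figure.
corn_cost_per_box = 0.05
retail_price_per_box = 4.00     # assumption

corn_price_increase = 1.0       # corn price doubles (+100%)
extra_cost = corn_cost_per_box * corn_price_increase
retail_impact = extra_cost / retail_price_per_box

print(f"Extra corn cost per box: ${extra_cost:.2f}")
print(f"Retail price impact if fully passed through: {retail_impact:.1%}")  # roughly 1%
```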
Since reaching record high prices in June 2008, corn prices fell 50% by October 2008, declining sharply together with other commodities, including oil. As ethanol production from corn has continued at the same levels, some have argued that this trend shows that the belief that increased demand for corn to produce ethanol drove prices was mistaken. "Analysts, including some in the ethanol sector, say ethanol demand adds about 75 cents to $1.00 per bushel to the price of corn, as a rule of thumb. Other analysts say it adds around 20 percent, or just under 80 cents per bushel at current prices. Those estimates hint that $4 per bushel corn might be priced at only $3 without demand for ethanol fuel." These industry sources consider that a speculative bubble in the commodity markets, holding positions in corn futures, was the main driver behind the observed spike in corn prices affecting the food supply.
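The analysts' rule-of-thumb figures quoted above imply the following; this sketch uses only the numbers given in the paragraph.

```python
# Implied ethanol premium in the corn price, using the analysts' figures above.
corn_price = 4.00                         # $ per bushel, "current" price in the quote
premium_low, premium_high = 0.75, 1.00    # $/bushel rule of thumb
premium_pct = 0.20                        # alternative estimate: about 20% of the price

print(f"20% of ${corn_price:.2f}: ${corn_price * premium_pct:.2f} per bushel")
print(f"Price without ethanol demand: roughly "
      f"${corn_price - premium_high:.2f} to ${corn_price - premium_low:.2f} per bushel")
```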
Controversy within the international system
The United States and Brazil lead the industrial world in global ethanol production, with Brazil as the world's largest exporter and biofuel industry leader. In 2006 the U.S. produced 18.4 billion liters (4.86 billion gallons), closely followed by Brazil with 16.3 billion liters (4.3 billion gallons), producing together 70% of the world's ethanol market and nearly 90% of ethanol used as fuel. These countries are followed by China with 7.5%, and India with 3.7% of the global market share.
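The production figures above imply a rough world total and country volumes; this sketch uses only the numbers quoted in the paragraph.

```python
# Implied 2006 world fuel-ethanol market, from the figures quoted above.
us_bl = 18.4           # billion liters, United States
brazil_bl = 16.3       # billion liters, Brazil
combined_share = 0.70  # US + Brazil share of the world ethanol market

world_total = (us_bl + brazil_bl) / combined_share
print(f"Implied world production: about {world_total:.0f} billion liters")
print(f"US share:     {us_bl / world_total:.0%}")
print(f"Brazil share: {brazil_bl / world_total:.0%}")
print(f"China (7.5%): about {0.075 * world_total:.1f} billion liters")
print(f"India (3.7%): about {0.037 * world_total:.1f} billion liters")
```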
Since 2007, the concerns, criticisms and controversy surrounding the food vs. biofuels issue have reached the international system, mainly heads of state and inter-governmental organizations (IGOs), such as the United Nations and several of its agencies, particularly the Food and Agriculture Organization (FAO) and the World Food Programme (WFP); the International Monetary Fund; the World Bank; and agencies within the European Union.
The 2007 controversy: Ethanol diplomacy in the Americas
In March 2007, "ethanol diplomacy" was the focus of President George W. Bush's Latin American tour, in which he and Brazil's president, Luiz Inácio Lula da Silva, were seeking to promote the production and use of sugar cane based ethanol throughout Latin America and the Caribbean. The two countries also agreed to share technology and set international standards for biofuels. The Brazilian sugar cane technology transfer will permit various Central American countries, such as Honduras, Nicaragua, Costa Rica and Panama, several Caribbean countries, and various Andean Countries tariff-free trade with the U.S. thanks to existing concessionary trade agreements. Even though the U.S. imposes a USD 0.54 tariff on every gallon of imported ethanol, the Caribbean nations and countries in the Central American Free Trade Agreement are exempt from such duties if they produce ethanol from crops grown in their own countries. The expectation is that using Brazilian technology for refining sugar cane based ethanol, such countries could become exporters to the United States in the short-term. In August 2007, Brazil's President toured Mexico and several countries in Central America and the Caribbean to promote Brazilian ethanol technology.
This alliance between the U.S. and Brazil generated some negative reactions. While Bush was in São Paulo as part of the 2007 Latin American tour, Venezuela's President Hugo Chavez, from Buenos Aires, dismissed the ethanol plan as "a crazy thing" and accused the U.S. of trying "to substitute the production of foodstuffs for animals and human beings with the production of foodstuffs for vehicles, to sustain the American way of life." Chavez's complaints were quickly followed by then Cuban President Fidel Castro, who wrote that "you will see how many people among the hungry masses of our planet will no longer consume corn." "Or even worse," he continued, "by offering financing to poor countries to produce ethanol from corn or any other kind of food, no tree will be left to defend humanity from climate change." Daniel Ortega, Nicaragua's President, and one of the preferential recipients of Brazil's technical aid, said that "we reject the gibberish of those who applaud Bush's totally absurd proposal, which attacks the food security rights of Latin Americans and Africans, who are major corn consumers"; he nonetheless voiced support for sugar cane based ethanol during Lula's visit to Nicaragua.
The 2008 controversy: Global food prices
As a result of the international community's concerns regarding the steep increase in food prices, on April 14, 2008, Jean Ziegler, the United Nations Special Rapporteur on the Right to Food, at the Thirtieth Regional Conference of the Food and Agriculture Organization (FAO) in Brasília, called biofuels a "crime against humanity", a claim he had previously made in October 2007, when he called for a 5-year ban for the conversion of land for the production of biofuels. The previous day, at their Annual International Monetary Fund and World Bank Group meeting at Washington, D.C., the World Bank's President, Robert Zoellick, stated that "While many worry about filling their gas tanks, many others around the world are struggling to fill their stomachs. And it's getting more and more difficult every day."
Luiz Inácio Lula da Silva gave a strong rebuttal, calling both claims "fallacies resulting from commercial interests", and putting the blame instead on U.S. and European agricultural subsidies, and a problem restricted to U.S. ethanol produced from maize. He also said that "biofuels aren't the villain that threatens food security." In the middle of this new wave of criticism, Hugo Chavez reaffirmed his opposition and said that he is concerned that "so much U.S.-produced corn could be used to make biofuel, instead of feeding the world's poor", calling the U.S. initiative to boost ethanol production during a world food crisis a "crime."
German Chancellor Angela Merkel said the rise in food prices is due to poor agricultural policies and changing eating habits in developing nations, not biofuels as some critics claim. On the other hand, British Prime Minister Gordon Brown called for international action and said Britain had to be "selective" in supporting biofuels, and depending on the U.K.'s assessment of biofuels' impact on world food prices, "we will also push for change in EU biofuels targets". Stavros Dimas, European Commissioner for the Environment said through a spokeswoman that "there is no question for now of suspending the target fixed for biofuels", though he acknowledged that the EU had underestimated problems caused by biofuels.
On April 29, 2008, U.S. President George W. Bush declared during a press conference that "85 percent of the world's food prices are caused by weather, increased demand and energy prices", and recognized that "15 percent has been caused by ethanol". He added that "the high price of gasoline is going to spur more investment in ethanol as an alternative to gasoline. And the truth of the matter is it's in our national interests that our farmers grow energy, as opposed to us purchasing energy from parts of the world that are unstable or may not like us." Regarding the effect of agricultural subsidies on rising food prices, Bush said that "Congress is considering a massive, bloated farm bill that would do little to solve the problem. The bill Congress is now considering would fail to eliminate subsidy payments to multi-millionaire farmers", he continued, "this is the right time to reform our nation's farm policies by reducing unnecessary subsidies".
Just a week before this new wave of international controversy began, U.N. Secretary General Ban Ki-moon had commented that several U.N. agencies were conducting a comprehensive review of the policy on biofuels, as the world food price crisis might trigger global instability. He said "We need to be concerned about the possibility of taking land or replacing arable land because of these biofuels", then he added "While I am very much conscious and aware of these problems, at the same time you need to constantly look at having creative sources of energy, including biofuels. Therefore, at this time, just criticising biofuel may not be a good solution. I would urge we need to address these issues in a comprehensive manner." Regarding Jean Ziegler's proposal for a five-year ban, the U.N. Secretary rejected that proposal.
A report released by Oxfam in June 2008 criticized the biofuel policies of high-income countries as neither a solution to the climate crisis nor the oil crisis, while contributing to the food price crisis. The report concluded that, of the biofuels available on the market, Brazilian sugarcane ethanol, while not without problems, is the most favorable in terms of cost and greenhouse gas balance. The report discusses some existing problems and potential risks and asks the Brazilian government for caution to avoid jeopardizing its environmental and social sustainability. The report also says that: "Rich countries spent up to $15 billion last year supporting biofuels while blocking cheaper Brazilian ethanol, which is far less damaging for global food security."
A World Bank research report published in July 2008 found that from June 2002 to June 2008 "biofuels and the related consequences of low grain stocks, large land use shifts, speculative activity and export bans" pushed prices up by 70 percent to 75 percent. The study found that higher oil prices and a weak dollar explain 25-30% of the total price rise. The study said that "...large increases in biofuels production in the United States and Europe are the main reason behind the steep rise in global food prices" and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher". The Renewable Fuels Association (RFA) published a rebuttal based on the version leaked before its formal release. The RFA critique considers the analysis highly subjective, saying that the author "estimates the impact of global food prices from the weak dollar and the direct and indirect effect of high petroleum prices and attributes everything else to biofuels."
An economic assessment by the OECD, also published in July 2008, agrees with the World Bank report regarding the negative effects of subsidies and trade restrictions, but found that the impact of biofuels on food prices is much smaller. The OECD study is also critical of the limited reduction of GHG emissions achieved from biofuels produced in Europe and North America, concluding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8 percent by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80 percent compared to fossil fuels. The assessment calls on governments for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs. The OECD study concluded that "...current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years."
Another World Bank research report, published in July 2010, found that the earlier study may have overestimated the contribution of biofuel production, as the paper concluded that "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called "financialization of commodities") may have been partly responsible for the 2007/08 spike."
See also
- Biofuel advocacy groups
- Methanol economy
- Methanol fuel
- Commodity price shocks
- Corn stoves
- Distillers grains
- Ethanol economy
- Ethanol fuel in Australia
- Ethanol fuel in Brazil
- Ethanol fuel in Sweden
- Ethanol fuel in the Philippines
- Ethanol fuel in the United States
- Food security
- Oil depletion
- Vegetable oil economy
- World Agricultural Supply and Demand Estimates (monthly report)
- 2007–2008 world food price crisis
- Congressional Budget Office (April 2009), The Impact of Ethanol Use on Food Prices and Greenhouse-Gas Emissions, Congress of the United States
- Bruce A. Babcock (June 2011), The Impact of US Biofuel Policies on Agricultural Price Levels and Volatility, International Centre for Trade and Sustainable Development
- Goettemoeller, Jeffrey; Adrian Goettemoeller (2007), Sustainable Ethanol: Biofuels, Biorefineries, Cellulosic Biomass, Flex-Fuel Vehicles, and Sustainable Farming for Energy Independence, Prairie Oak Publishing, Maryville, Missouri, ISBN 978-0-9786293-0-4 . See Chapter 7. Food, Farming, and Land Use.
- Neves, Marcos Fava, Mairun Junqueira Alves Pinto, Marco Antonio Conejero and Vinicius Gustavo Trombin (2011), Food and fuel: The example of Brazil, Wageningen Academic Publishers, ISBN 978-90-8686-166-8 .
- The Worldwatch Institute (2007), Biofuels for Transport: Global Potential and Implications for Energy and Agriculture, Earthscan Publications Ltd., London, U.K., ISBN 978-1-84407-422-8 . Global view, includes country study cases of Brazil, China, India and Tanzania.
- "Biofuels are not to blame for high food prices, study finds". CleanBeta. 2008-10-24. Retrieved 2009-01-20.[dead link]
- Maggie Ayre (2007-10-03). "Will biofuel leave the poor hungry?". BBC News. Retrieved 2008-04-28.
- Mike Wilson (2008-02-08). "The Biofuel Smear Campaign". Farm Futures. Retrieved 2008-04-28.
- Michael Grundwald (2008-03-27). "The Clean Energy Scam". Time Magazine. Retrieved 2008-04-28.
- The Impact of US Biofuel Policies on Agricultural Price Levels and Volatility, By Bruce A. Babcock, Center for Agricultural and Rural Development, Iowa State University, for ICTSD, Issue Paper No. 35. June 2011.
- Kathleen Kingsbury (2007-11-16). "After the Oil Crisis, a Food Crisis?". Time Magazine. Retrieved 2008-04-28.
- Lester R. Brown (2007-06-13). "Biofuels Blunder:Massive Diversion of U.S. Grain to Fuel Cars is Raising World Food Prices, Risking Political Instability". Testimony before U.S. Senate Committee on Environment and Public Works. Archived from the original on 2009-04-04. Retrieved 2008-12-20.
- See for example the Biomass Program under the US Department of Energy: http://www1.eere.energy.gov/biomass/
- Oliver R. Inderwildi, David A. King (2009). "Quo Vadis Biofuels". Energy & Environmental Science 2: 343. doi:10.1039/b822951c.
- Andrew Bounds (2007-09-10). "OECD Warns Against Biofuels Subsidies". Financial Times.
- George Monbiot (2004-11-23). "Feeding Cars, Not People". Monbiot.com. Retrieved 2008-04-28.
- European Environmental Bureau (2006-02-08). "Biofuels no panacea" (PDF). Archived from the original on 2008-04-10. Retrieved 2008-04-28.
- Planet Ark (2005-09-26). "Food Security Worries Could Limit China Biofuels". Retrieved 2008-04-28.
- Greenpeace UK (2007-05-09). "Biofuels: green dream or climate change nightmare". Retrieved 2008-04-28.
- See for example: the US (DOE and USDA) "Billion Ton Report": http://feedstockreview.ornl.gov/pdf/billion_ton_vision.pdf or an EU (Refuel) report http://www.refuel.eu/fileadmin/refuel/user/docs/REFUEL_D19a_flyer_feedstock_potentials.pdf
- "FOOD AND FUEL II - Biofuels will help fight hunger". International Herald Tribune. 2007-08-06. Retrieved 2008-04-15.
- Through biofuels we can reap the fruits of our labours
- Inslee, Jay; Bracken Hendricks (2007), Apollo's Fire, Island Press, Washington, D.C., pp. 153–155, 160–161, ISBN 978-1-59726-175-3 . See Chapter 6. Homegrown Energy.
- Larry Rohter (2006-04-10). "With Big Boost From Sugar Cane, Brazil Is Satisfying Its Fuel Needs". The New York Times. Retrieved 2008-04-28.
- "Biofuels in Brazil: Lean, green and not mean". The Economist. 2008-06-26. Retrieved 2008-07-30.From The Economist print edition
- Julia Duailibi (2008-04-27). "Ele é o falso vilão" (in Portuguese). Veja Magazine. Retrieved 2008-04-28.
- Donald Mitchell (July 2008). "A Note on Rising Food Prices" (PDF). The World Bank. Retrieved 2008-07-29. Policy Research Working Paper No. 4682. Disclaimer: This paper reflects the findings, interpretations, and conclusions of the authors, and does not necessarily represent the views of the World Bank.
- Veja Magazine (2008-07-28). "Etanol não influenciou nos preços dos alimentos" (in Portuguese). Editora Abril. Retrieved 2008-07-29.
- "Biofuels major driver of food price rise-World Bank". Reuters. 2008-07-28. Retrieved 2008-07-29.
- John Baffes and Tassos Haniotis (July 2010). "Placing the 2006/08 Commodity Price Boom into Perspective". World Bank. Retrieved 2010-08-09. Policy Research Working Paper 5371
- Directorate for Trade and Agriculture, OECD (2008-07-16). "Economic Assessment of Biofuel Support Policies" (PDF). OECD. Retrieved 2008-08-01. Disclaimer: This work was published under the responsibility of the Secretary-General of the OECD. The views expressed and conclusions reached do not necessarily correspond to those of the governments of OECD member countries.
- "The Economist – The End Of Cheap Food". 2007-12-06.
- Directorate for Trade and Agriculture, OECD (2008-07-16). "Biofuel policies in OECD countries costly and ineffective, says report". OECD. Retrieved 2008-08-01.
- Corn (C, CBOT): Monthly Price Chart
- The Costs of Rising Tortilla Prices in Mexico — Enrique C. Ochoa, February 3, 2007.
- Financial Times, London, February 25, 2007, quoting Jean-François van Boxmeer, chief executive.
- For an explanation of this ripple effect that pushes up not only the price of corn, but also that of other farming products, see this excerpt from a speech by Paul Roberts at the Commonwealth Club of California (video).
- Wheat (W, CBOT): Monthly Price Chart
- Soybeans (S, CBOT): Monthly Price Chart
- Why ethanol production will drive world food prices even higher in 2008 | Cleantech.com
- Biofuel demand makes fried food expensive in Indonesia - ABC News (Australian Broadcasting Corporation)
- The other oil shock: Vegetable oil prices soar - International Herald Tribune
- Light Crude Oil (CL, NYMEX): Monthly Price Chart
- Biofuel: the burning question
- Understanding the Global Rice Crisis
- "U.S. sees record world food crops easing crisis". Reuters. 2008-05-10.
- Wheat (W, CBOT): Daily Commodity Futures Price Chart: July, 2008
- "Eliminating MTBE in Gasoline in 2006" (PDF). Environmental Information Administration. 2006-02-22. Retrieved 2008-08-10.
- Goettemoeller, Jeffrey; Adrian Goettemoeller (2007), Sustainable Ethanol: Biofuels, Biorefineries, Cellulosic Biomass, Flex-Fuel Vehicles, and Sustainable Farming for Energy Independence, Prairie Oak Publishing, Maryville, Missouri, p. 42, ISBN 978-0-9786293-0-4
- ‘Weak correlation' between food and fuel prices Farm and Ranch Guide: Regional News
- It's not food, it's not fuel, it's China: Grain trends in China 1995-2008
- World sugar supply to expand - 3/6/2008 6:38:00 AM - Purchasing
- Sugar #11 (SB, NYBOT): Monthly Price Chart
- PINR - Soaring Commodity Prices Point Toward Dollar Devaluation
- Forex Trader Top 3: G7, Worldwide Food Prices, and Consumer Sentiment: Financial News - Yahoo! Finance
- "Crop Prospects and Food Situation - Global cereal supply and demand brief". Food and Agriculture Organization of the United Nations. Retrieved 2008-04-21.
- Timmons, Heather (2008-05-14). "Indians Find U.S. at Fault in Food Cost". The New York Times. Retrieved 2010-05-02.
- Foreign Affairs - How Biofuels Could Starve the Poor - C. Ford Runge and Benjamin Senauer
- Alternative Fuels & Advanced Vehicles Data Center
- Biofuels are part of the solution
- Energie: Heizen mit Weizen - Wirtschaft - stern.de
- "Grain gaining steam as home-heating option". CBC News. 2005-10-14.
- Grain Burning Stoves by Prairie Fire Grain Energy. Renewable Fuel - Virtually No Waste - Low Emissions
- Economic analysis: Ethanol policy is driving up food costs 03/16/08 - Grand Island Independent: News
- IBDeditorials.com: Editorials, Political Cartoons, and Polls from Investor's Business Daily - Ethanol Lobby Is Perpetrating A Cruel Hoax
- Ethanol really takes the cake - NJVoices: Paul Mulshine
- Today in Investor's Business Daily stock analysis and business news
- The Ethanol Scam: One of America's Biggest Political Boondoggles : Rolling Stone
- Food prices | Cheap no more | Economist.com
- Baltimore, Chris. "New U.S. Congress looks to boost alternate fuels," The Boston Globe, January 5, 2007. Retrieved on August 23, 2007
- After the Oil Runs Out, washingtonpost.com
- "Energy Independence and Security Act of 2007 (Enrolled as Agreed to or Passed by Both House and Senate)". Retrieved 2008-01-18.
- Evans-Pritchard, Ambrose (2008-04-14). "Global warming rage lets global hunger grow". The Daily Telegraph (London). Retrieved 2010-05-02.
- Green Car Advisor - U.S. Taxpayers Subsidizing Biodiesel Sold in Europe
- Splash And Dash | Greenbang
- European Biodiesel Board warns on Argentine biodiesel; says fuel is subsidized in Argentina and US, then dumped in Europe : Biofuels Digest
- Trade war brewing over US biofuel subsidies « Food Crisis
- Splash dash and a lot of biodiesel pain (The Big Biofuels Blog)
- TheHill.com - Finance panel set to close ‘splash and dash’ loophole
- George Monbiot (2007-03-27). "If we want to save the planet, we need a five-year freeze on biofuels". London: The Guardian. Retrieved 2008-01-15.
- Monbiot.com » An Agricultural Crime Against Humanity
- Monbiot.com » Feeding Cars, Not People
- Julian Borger, (2008-04-05). "UN chief calls for review of biofuels policy". London: The Guardian. Retrieved 2008-05-01.
- Defra figures after exports.
- (i.e. if the UK wanted to replace more than around 5% of its fuel with biofuel).
- Food versus fuel debate escalates
- How Food and Fuel Compete for Land by Lester Brown - The Globalist > > Global Energy
- Through biofuels we can reap the fruits of our labours
- REN21 (2008). Renewables 2007 Global Status Report (PDF) p. 19.
- Biomass Magazine
- The Biofuel Dilemma « The Global Warming Hoax
- Brownfield Network: Lawmakers square-off over Renewable Fuels Standard
- "Bush budget doesn't alter ethanol import tariff". Reuters. 2008-02-04.
- EU rethinks biofuels guidelines By Roger Harrabin bbc.co.uk Monday, 14 January 2008 http://news.bbc.co.uk/2/hi/europe/7186380.stm
- Committee calls for Moratorium on Biofuels http://www.parliament.uk/parliamentary_committees/environmental_audit_committee/eac_210108.cfm
- Subsidy loss threatens German bio-fuel industry | EnerPub - Energy Publisher
- ABA Band of Bakers March on Washington, D.C. Announce Action Plan for Wheat Crisis
- let us farm set aside land Europe farmers say (The Big Biofuels Blog)
- FarmPolicy.com » Blog Archives » EU CAP: Set Aside Requirement Draws Focus
- Davies, Caroline (2008-05-25). "Eco-farming ditched as food prices soar". The Guardian (London). Retrieved 2010-05-02.
- Hydrogen injection could boost biofuel production
- Sustainable biofuels: prospects and challenges p. 2.
- Oxburgh, Ron. Fuelling hope for the future, Courier Mail, 15 August 2007.
- Starving for Fuel: How Ethanol Production Contributes to Global Hunger by Lester Brown - The Globalist > > Global Briefing
- Family food expenditures around the world
- Time (2007-07-19). http://www.time.com/time/photogallery/0,29307,1645016_1408103,00.html. Retrieved 2010-05-02.
- "NGO has biofuel concerns". BBC News. 2007-11-01. Retrieved 2008-01-20.
- Record rise in wheat price prompts UN official to warn that surge in food prices may trigger social unrest in developing countries
- The Associated Press: UN Warns About High Fuel, Food Costs[dead link]
- Interviews with local peasants in southern Ecuador
- Tropical deforestation and greenhouse gas emissions Holly K Gibbs et al. 2007 Environ. Res. Lett. 2 045021 (2pp) doi:10.1088/1748-9326/2/4/045021
- The Farmer
- National Corn Growers Association - NCGA
- Sam Nelson (2008-10-23). "Ethanol no longer seen as big driver of food price". Reuters UK. Retrieved 2008-11-26.
- Sanchez, Marcela (2007-02-23). "Latin America -- the 'Persian Gulf' of Biofuels?". The Washington Post. Retrieved 2008-04-28.
- "Biofuels: The Promise and the Risks, in World Development Report 2008" (PDF). The Worl Bank. 2008. pp. 70–71. Retrieved 2008-05-04.
- "Industry Statistics: Annual World Ethanol Production by Country". Renewable Fuels Association. Archived from the original on 2008-04-08. Retrieved 2008-04-28.
- Edmund L. Andrews and Larry Rohter (2007-03-03). "U.S. and Brazil Seek to Promote Ethanol in West". The New York Times. Retrieved 2008-04-28.
- Diana Renée (2007-08-10). "'Diplomacia de biocombustibles' de Lula no genera entusiasmo" (in Spanish). La Nación. Retrieved 2008-04-28.
- Jim Rutenberg and Larry Rohter (2007-03-10). "Bush and Chávez Spar at Distance Over Latin Visit". The Washington Post. Retrieved 2008-04-28.
- "Americas: Cuba: Castro Criticizes U.S. Biofuel Policies". The New York Times. 2007-03-30. Retrieved 2008-04-28.
- Xinhua News (2007-08-09). "Nicaragua president backs sugar-made biofuel as Lula visits". People's Daily Online. Retrieved 2008-04-28.
- AFP (2007-08-09). "Lula ofrece cooperación y energía eléctrica" (in Spanish). La Nación. Retrieved 2008-04-28.
- "ONU diz que biocombustíveis são crime contra a humanidade" (in Portuguese). Folha de Sao Pãulo Online. 2008-04-14. Retrieved 2008-04-28.
- Emilio San Pedro (2008-04-17). "Brazil president defends biofuels". BBC News. Retrieved 2008-04-28.
- Lederer, Edith (2007-10-27). "Production of biofuels 'is a crime'". London: The Independent. Retrieved 2008-04-22.
- "UN rapporteur calls for biofuel moratorium". Swissinfo. 2007-10-11. Retrieved 2008-05-01.
- Larry Elliott and Heather Stewart (2008-04-11). "Poor go hungry while rich fill their tanks". London: The Guardian. Retrieved 2008-04-30.
- Steven Mufson (2008-04-30). "Siphoning Off Corn to Fuel Our Cars". The Washington Post. Retrieved 2008-04-30.
- "FMI e Bird pedem ação urgente contra alta alimentar" (in Portuguese). Folha de Sao Pãulo Online. 2008-04-13. Retrieved 2008-04-28.
- Raymond Colitt (2008-04-16). "Brazil Lula defends biofuels from growing criticism". Reuters UK. Retrieved 2008-04-28.
- "Chavez calls ethanol production 'crime'". The Washington Post. Associated Press. 2008-04-22. Retrieved 2008-04-28.[dead link]
- Gernot Heller (2008-04-17). "Bad policy, not biofuel, drive food prices: Merkel". Reuters. Retrieved 2008-05-01.
- "Brown's biofuels caution welcomed". BBC News. 2008-04-22. Retrieved 2008-05-01.
- "Europe Defends Biofuels as Debate Rages". Deutsche Welle. 2008-04-14. Retrieved 2008-05-01.
- "Press Conference by the President". The White House. 2008-04-29. Retrieved 2008-05-01.
- Oxfam (2008-06-25). "Another Inconvenient Truth: Biofuels are not the answer to climate or fuel crisis" (PDF). Oxfam. Retrieved 2008-07-30. Report available in pdf
- Oxfam (2008-06-26). "Another Inconvenient Truth: Biofuels are not the answer to climate or fuel crisis". Oxfam web site. Retrieved 2008-07-30.
- "ONG diz que etanol brasileiro é melhor opção entre biocombustíveis" (in Portuguese). BBCBrasil. 2008-06-25. Retrieved 2008-07-30.
- Aditya Chakrabortty (2008-07-04). "Secret report: biofuel caused food crisis". London: The Guardian. Retrieved 2008-07-29.
- John M. Urbanchuk (2008-07-11). "Critique of World Bank Working Paper "A Note on Rising Food Prices"" (PDF). Renewable Fuel Association. Retrieved 2008-07-29.[dead link]
- FAO World Food Situation
- Towards Sustainable Production and Use of Resources: Assessing Biofuels, United Nations Environment Programme, October 2009
- The World Bank: Food Price Crises
- Plenty of Space for Biofuels in Europe
- Oxfam International: Another Inconvenient Truth: Biofuels are not the answer to climate or fuel crisis
- Global Trade and Environmental Impact Study of the EU Biofuels Mandate by the International Food Policy Institute (IFPRI) March 2010
| 3.36648 |
Hydrothermal circulation in its most general sense is the circulation of hot water, from the Greek 'hydros' (water) and 'thermos' (heat). Hydrothermal circulation occurs most often in the vicinity of sources of heat within the Earth's crust. This generally happens near volcanic activity, but it can also occur in the deep crust in association with the intrusion of granite, or as the result of orogeny or metamorphism.
Seafloor hydrothermal circulation
The term includes both the circulation of the well-known, high-temperature vent waters near the ridge crests, and the much lower-temperature, diffuse flow of water through sediments and buried basalts further from the ridge crests. The former circulation type is sometimes termed "active", and the latter "passive". In both cases the principle is the same: cold, dense seawater sinks into the basalt of the seafloor and is heated at depth, whereupon it rises back to the rock-ocean water interface due to its lesser density. The heat source for the active vents is the newly formed basalt and, for the highest-temperature vents, the underlying magma chamber. The heat source for the passive vents is the still-cooling older basalts. Heat flow studies of the seafloor suggest that basalts within the oceanic crust take millions of years to cool completely as they continue to support passive hydrothermal circulation systems.
Hydrothermal vents are locations on the seafloor where hydrothermal fluids mix into the overlying ocean. Perhaps the best known vent forms are the naturally-occurring chimneys referred to as black smokers.
Hydrothermal circulation is not limited to ocean ridge environments. The source water for hydrothermal explosions, geysers and hot springs is heated groundwater convecting below and lateral to the hot water vent. Hydrothermal circulating convection cells exist any place an anomalous source of heat, such as an intruding magma or volcanic vent, comes into contact with the groundwater system.
Deep crust
Hydrothermal also refers to the transport and circulation of water within the deep crust, generally from areas of hot rocks to areas of cooler rocks. The causes for this convection can be:
- Intrusion of magma into the crust
- Radioactive heat generated by cooled masses of granite
- Heat from the mantle
- Hydraulic head from mountain ranges, for example, the Great Artesian Basin
- Dewatering of metamorphic rocks which liberates water
- Dewatering of deeply buried sediments
Hydrothermal ore deposits
During the early 1900s various geologists worked to classify hydrothermal ore deposits, which were assumed to have formed from upward-flowing aqueous solutions. Waldemar Lindgren developed a classification based on the interpreted decreasing temperature and pressure conditions of the depositing fluid. His terms (hypothermal, mesothermal, epithermal and teleothermal) were based on decreasing temperature and increasing distance from a deep source. Only "epithermal" is still used in recent works. John Guilbert's 1985 revision of Lindgren's system for hydrothermal deposits includes the following:
- Ascending hydrothermal fluids, magmatic or meteoric water
- Porphyry copper and other deposits, 200 - 800 °C, moderate pressure
- Igneous metamorphic, 300 - 800 °C, low - moderate pressure
- Cordilleran veins, intermediate to shallow depths
- Epithermal, shallow to intermediate, 50 - 300 °C, low pressure
- Circulating heated meteoric solutions
- Circulating heated seawater
- Oceanic ridge deposits, 25 - 300 °C, low pressure
References
- W. Lindgren, 1933, Mineral Deposits, McGraw Hill, 4th ed.
- Guilbert, John M. and Charles F. Park, Jr., 1986, The Geology of Ore Deposits, Freeman, p. 302 ISBN 0-7167-1456-6
| 3.955395 |
Naval aviation is the application of military air power by navies, including ships that embark fixed-wing aircraft or helicopters. In contrast, maritime aviation is the operation of aircraft in a maritime role under the command of non-naval forces such as the former RAF Coastal Command or a nation's coast guard. An exception to this is the United States Coast Guard, which is considered part of U.S. naval aviation.
Naval aviation is typically projected to a position nearer the target by way of an aircraft carrier. Carrier aircraft must be sturdy enough to withstand demanding carrier operations. They must be able to launch in a short distance and be strong and flexible enough to come to a sudden stop on a pitching deck; they typically have robust folding mechanisms that allow higher numbers of them to be stored in below-decks hangars. These aircraft are designed for many purposes including air-to-air combat, surface attack, submarine attack, search and rescue, materiel transport, weather observation, reconnaissance and wide area command and control duties.
U.S. naval aviation began with pioneer aviator Glenn Curtiss, who contracted with the Navy to demonstrate that airplanes could take off from and land aboard ships at sea. One of his pilots, Eugene Ely, took off from the USS Birmingham anchored off the Virginia coast in November 1910. Two months later Ely landed aboard another cruiser, USS Pennsylvania, in San Francisco Bay, proving the concept of shipboard operations. However, the platforms erected on those vessels were temporary measures. The U.S. Navy and Glenn Curtiss achieved two firsts during January 1911. On January 27, Curtiss flew the first seaplane from the water at San Diego Bay, and the next day U.S. Navy Lt. Theodore G. "Spuds" Ellyson, a student at the nearby Curtiss School, took off in a Curtiss "grass cutter" plane to become the first naval aviator. Meanwhile, Captain Henry C. Mustin developed the concept of the catapult launch, and in 1915 made the first catapult launch from a ship underway. Through most of World War I, the world's navies relied upon floatplanes and flying boats for heavier-than-air craft.
In January 1912, the British battleship HMS Africa took part in aircraft experiments at Sheerness. She was fitted for flying off aircraft with a 100-foot (30 m) downward-sloping runway installed on her foredeck, running over her forward 12-inch (305-mm) turret from her forebridge to her bows and equipped with rails to guide the aircraft. The Gnome-engined Short Improved S.27 "S.38", a pusher seaplane piloted by Lieutenant Charles Samson, became the first British aircraft to take off from a ship, while at anchor in the River Medway, on 10 January 1912. Africa then transferred her flight equipment to her sister ship Hibernia. In May 1912, Commander Samson, again flying "S.38", made the first take-off by an aircraft from a ship which was underway, as Hibernia steamed at 10.5 knots (19 km/h) at the Royal Fleet Review in Weymouth Bay, England. Hibernia then transferred her aviation equipment to the battleship London. Based on these experiments, the Royal Navy concluded that aircraft were useful aboard ship for spotting and other purposes, but that interference with the firing of guns caused by the runway built over the foredeck, and the danger and impracticality of recovering seaplanes that alighted in the water in anything but calm weather, more than offset the desirability of having airplanes aboard. However, shipboard naval aviation had begun in the Royal Navy, and would become a major part of fleet operations by 1917.
Other early operators of seaplanes were France, Germany and Russia. The foundations of Greek naval aviation were laid in June 1912, when Lieutenant Dimitrios Kamberos of the Hellenic Aviation Service flew the "Daedalus", a Farman Aviation Works aircraft that had been converted into a seaplane, at an average speed of 110 km per hour, achieving a new world record. Then, on January 24, 1913, the first wartime naval aviation interservice cooperation mission took place above the Dardanelles. Greek Army First Lieutenant Michael Moutoussis and Greek Navy Ensign Aristeidis Moraitinis, on board a Maurice Farman hydroplane (floatplane/seaplane), drew a diagram of the positions of the Turkish fleet, against which they dropped four bombs. This event was widely commented upon in the press, both Greek and international.
WWI and the first carrier strikes
The first strike from a carrier against a land target as well as a sea target took place in September 1914 when the Imperial Japanese Navy seaplane carrier Wakamiya conducted the world's first ship-launched air raids from Kiaochow Bay during the Battle of Tsingtao in China. The four Maurice Farman seaplanes bombarded German-held land targets (communication centers and command centers) and damaged a German minelayer in the Tsingtao peninsula from September until November 6, 1914, when the Germans surrendered.
On the Western Front, the first naval air raid occurred on December 25, 1914, when twelve seaplanes from HMS Engadine, Riviera and Empress (cross-channel steamers converted into seaplane carriers) attacked the Zeppelin base at Cuxhaven. Fog, low cloud and anti-aircraft fire prevented the raid from being a complete success, but the raid demonstrated the feasibility of attack by ship-borne aircraft and showed the strategic importance of this new weapon.
Development in the Interwar period
Genuine aircraft carriers did not emerge beyond Britain until the early 1920s.
In the United States, Billy Mitchell's 1921 demonstration of the battleship-sinking ability of land-based heavy bombers made many United States Navy admirals angry. However, some men, such as Captain (soon Rear Admiral) William A. Moffett, saw the publicity stunt as a means to increase funding and support for the Navy's aircraft carrier projects. Moffett was sure that he had to move decisively in order to avoid having his fleet air arm fall into the hands of a proposed combined Land/Sea Air Force which took care of all the United States's airpower needs. (That very fate had befallen the two air services of the United Kingdom in 1918: the Royal Flying Corps had been combined with the Royal Naval Air Service to become the Royal Air Force, a condition which would remain until 1937.) Moffett supervised the development of naval air tactics throughout the '20s.
Many British naval vessels carried floatplanes, seaplanes or amphibians for reconnaissance and spotting: two to four on battleships or battlecruisers and one on cruisers. The aircraft, a Fairey Seafox or later a Supermarine Walrus, were catapult-launched, and landed on the sea alongside for recovery by crane. Several submarine aircraft carriers were built by Japan. The French Navy built one large (but ineffective) aircraft-carrying submarine, the Surcouf.
World War II
World War II saw the emergence of naval aviation as a significant, often decisive, element in the war at sea. The principal users were Japan, the United States (both with Pacific interests to protect) and the United Kingdom. Other colonial powers, e.g. France and the Netherlands, showed a lesser interest. Other powers such as Germany and Italy did not develop independent naval aviation, for geographic or political reasons. Soviet Naval Aviation was mostly organized as a land-based coast defense force (apart from some scout floatplanes, it consisted almost exclusively of land-based types also used by the Air Force and air defence units).
During the course of the war, seaborne aircraft were used in fleet actions at sea (Battle of Midway, Bismarck), pre-emptive strikes against naval units in port (Battle of Taranto, Attack on Pearl Harbor), support of ground forces (Battle of Okinawa, Allied invasion of Italy) and anti-submarine warfare (the Battle of the Atlantic). Carrier-based aircraft were specialized as dive bombers, torpedo bombers, and fighters. Surface-based aircraft such as the PBY Catalina helped finding submarines and surface fleets.
In WWII the aircraft carrier replaced the battleship as the most powerful naval offensive weapons system, as battles between fleets were increasingly fought out of gun range by aircraft. The Japanese Yamato, the most powerful battleship ever built, was first turned back by aircraft from light escort carriers and later sunk while lacking air cover of its own.
The US launched normally land-based bombers from carriers in a raid against Tokyo. Smaller carriers were built in large numbers to escort slow cargo convoys or supplement fast carriers. Aircraft for observation or light raids were also carried by battleships and cruisers, while blimps were used to search for attack submarines.
Experience showed that there was a need for widespread use of aircraft which could not be met quickly enough by building new fleet aircraft carriers. This was particularly true in the north Atlantic, where convoys were highly vulnerable to U-boat attack. The British authorities used unorthodox, temporary, but effective means of giving air protection such as CAM ships and merchant aircraft carriers, merchant ships modified to carry a small number of aircraft. The solution to the problem were large numbers of mass-produced merchant hulls converted into escort aircraft carriers (also known as "jeep carriers"). These basic vessels, unsuited to fleet action by their capacity, speed and vulnerability, nevertheless provided air cover where it was needed.
The Royal Navy had observed the impact of naval aviation and, obliged to prioritise their use of resources, abandoned battleships as the mainstay of the fleet. HMS Vanguard was therefore the last British battleship and her sisters were cancelled. The United States had already instigated a large construction programme (which was also cut short) but these large ships were mainly used as anti-aircraft batteries or for shore bombardment.
Other actions involving naval aviation included:
- Battle of the Atlantic, aircraft carried by low-cost escort carriers were used for antisubmarine patrol, defense, and attack.
- At the start of the Pacific War in 1941, Japanese carrier-based aircraft sank many US warships at Pearl Harbor and land-based aircraft sank two large British warships. Engagements between Japanese and American naval fleets were then conducted largely or entirely by aircraft - examples include the battles of Coral Sea, Midway, Bismarck Sea and Philippine Sea.
- Battle of Leyte Gulf, with the first appearance of kamikazes, perhaps the largest naval battle in history. Japan's last carriers and pilots are deliberately sacrificed, a battleship is sunk by aircraft.
- Operation Ten-Go demonstrated U.S. air supremacy in the Pacific theater by this stage in the war and the vulnerability of surface ships without air cover to aerial attack.
Strategic projection
Carrier-based naval aviation provides a country's seagoing forces with air cover over areas that may not be reachable by land-based aircraft, giving them a considerable advantage over navies composed primarily of surface combatants.
In the case of the United States Navy during and after the Cold War, virtual command of the sea in many of the world's waterways allowed it to deploy aircraft carriers and project air power almost anywhere on the globe. By operating from international waters, U.S. carriers can bypass the need for conventional airbases or overflight rights, both of which can be politically difficult to acquire.
During the Cold War, the navies of NATO faced a significant threat from Soviet submarine forces, specifically Soviet Navy SSN and SSGN assets. This resulted in the development and deployment of light aircraft carriers with major anti-submarine warfare (ASW) capabilities by European NATO navies. One of the most effective weapons against submarines is the ASW helicopter, several of which could be based on these light aircraft carriers. These light carriers were typically around 20,000 tons displacement and carried a mix of ASW helicopters and BAe Sea Harrier or Harrier II V/STOL aircraft.
- Argentine Naval Aviation
- Brazilian Naval Aviation
- Fleet Air Arm (RAN)
- Fleet Air Arm (Royal Navy)
- French Naval Aviation
- Indian Naval Air Arm
- Marineflieger (German navy)
- Mexican Naval Aviation
- Pakistan Naval Air Arm
- People's Liberation Army Naval Air Force
- Peruvian Naval Aviation
- Russian Naval Aviation
- United States Naval Aviator
See also
- Military aviation
- Aircraft carrier
- Escort carrier
- Carrier-based aircraft
- Flying boat
- Aerial warfare
- Modern US Navy carrier air operations
- Hellenic Air Force History - The first Steps
- Hellenic Air Force History - Balcan Wars
- Wakamiya is "credited with conducting the first successful carrier air raid in history" (GlobalSecurity.org), and has also been described as conducting "the first air raid in history to result in a success".
- "Sabre et pinceau", Christian Polak, p92
- IJN Wakamiya Aircraft Carrier
- Boyne (2003), pp.227–8
- Clark G. Reynolds, The fast carriers: the forging of an air navy (1968; 1978; 1992)
- William F. Trimble, Hero of the Air: Glenn Curtiss and the Birth of Naval Aviation (2010)
| 3.859615 |
Bread crumbs or breadcrumbs (regional variants: breading, crispies) are small particles of dry bread, used for breading or crumbing foods, topping casseroles, stuffing poultry, thickening stews, adding inexpensive bulk to meatloaves and similar foods, and making a crisp and crunchy coating for fried foods, especially breaded cutlets like tonkatsu and schnitzel. The Japanese variety of bread crumbs is called panko.
Dry breadcrumbs are made from dry breads which have been baked or toasted to remove most remaining moisture, and may have a sandy or even powdery texture. Bread crumbs are most easily produced by pulverizing slices of bread in a food processor, using a steel blade to make coarse crumbs, or a grating blade to make fine crumbs. A grater or similar tool will also do.
The breads used to make soft or fresh bread crumbs are not quite as dry, so the crumbs are larger and produce a softer coating, crust, or stuffing. The word "crumb" also refers to the texture of the soft, inner part of a bread loaf, as distinguished from the crust, or "skin".
Different from croutons
They are not to be confused with croutons, though both are made of dried bread. Croutons are approximately cubic pieces typically 0.5 to 8 cubic centimeters in size while breadcrumbs are irregularly shaped and range in size from roughly 1 to 500 cubic millimeters. Both probably originated as a way to use stale bread and unwanted crust.
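For a like-for-like comparison (a simple unit conversion, not a figure from the source): 1 cubic centimeter is 1,000 cubic millimeters, so croutons at 0.5 to 8 cubic centimeters span roughly 500 to 8,000 cubic millimeters, while breadcrumbs top out at about 500 cubic millimeters; the largest breadcrumbs are therefore about the size of the smallest croutons.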
Panko (パン粉) is a variety of flaky bread crumb used in Japanese cuisine as a crunchy coating for fried foods, such as tonkatsu. Panko is made from bread baked by passing an electric current through the dough, yielding bread without crusts. It has a crisper, more airy texture than most types of breading found in Western cuisine and resists absorbing oil or grease when fried, resulting in a lighter coating. White panko is made from bread which has had the crusts removed, while tan panko is made from the whole loaf of bread. Outside Japan, its use is becoming more popular in both Asian and non-Asian dishes: it is often used on fish and seafood and is available in Asian markets, specialty stores, and, increasingly, in many large supermarkets.
Panko is produced worldwide, particularly in Asian countries, including Japan, Korea, Thailand, China, and Vietnam. In February 2012, the US fast-food chain Wendy's introduced a cod fillet sandwich that it advertised as having a panko breading.
The Japanese first learned to make bread from the Portuguese: The word panko is derived from pão (Portuguese for "bread") and -ko, a Japanese suffix indicating "flour", "crumb", or "powder" (as in komeko, "rice powder", sobako, "buckwheat flour", and komugiko, "wheat flour").
Breading (also known as crumbing) is a dry grain-derived food coating for a piece of food such as meat, vegetable, poultry, fish, shellfish, crustacean, seitan, or textured soy, made from bread crumbs or a breading mixture with seasonings. Breading can also refer to the process of applying a bread-like coating to a food. Breading is well suited for frying as it lends itself to creating a crisp coating around the food. Breading mixtures can be made of breadcrumb, flour, cornmeal, and seasoning that the item to be breaded is dredged in before cooking. If the item to be breaded is too dry for the coating to stick, the item may first be moistened with buttermilk, raw egg, or other liquid.
Breading contrasts with batter, which is a grain-based liquid coating for food that produces a smoother and finer texture, but which can be softer overall.
- "Panko Bread Crumbs: The Secrets Revealed". YouTube. 2010-04-01. Retrieved 2012-11-17.
- Marshall, Jo (2010-10-05). "COOKCABULARY: Panko is a crumby ingredient - Fall River, MA". The Herald News. Retrieved 2012-11-17.
| 3.068373 |
Phosphorescence is a specific type of photoluminescence related to fluorescence. Unlike fluorescence, a phosphorescent material does not immediately re-emit the radiation it absorbs. The slower time scales of the re-emission are associated with "forbidden" energy state transitions in quantum mechanics. As these transitions occur very slowly in certain materials, absorbed radiation may be re-emitted at a lower intensity for up to several hours after the original excitation.
Commonly seen examples of phosphorescent materials are the glow-in-the-dark toys, paint, and clock dials that glow for some time after being charged with a bright light such as in any normal reading or room light. Typically the glowing then slowly fades out within minutes (or up to a few hours) in a dark room.
The study of phosphorescent materials led to the discovery of radioactivity in 1896.
In simple terms, phosphorescence is a process in which energy absorbed by a substance is released relatively slowly in the form of light. This is in some cases the mechanism used for "glow-in-the-dark" materials, which are "charged" by exposure to light. Unlike the relatively swift reactions in a common fluorescent tube, the phosphorescent materials used for these products absorb the energy and "store" it for a longer time, as the processes required to re-emit the light occur less often.
Quantum mechanical
Most photoluminescent events, in which a chemical substrate absorbs and then re-emits a photon of light, are fast, on the order of 10 nanoseconds. Light is absorbed and emitted at these fast time scales in cases where the energy of the photons involved matches the available energy states and allowed transitions of the substrate. In the special case of phosphorescence, the absorbed photon energy undergoes an unusual intersystem crossing into an energy state of higher spin multiplicity (see term symbol), usually a triplet state. As a result, the energy can become trapped in the triplet state with only classically "forbidden" transitions available to return to the lower energy state. These transitions, although "forbidden", will still occur in quantum mechanics but are kinetically unfavored and thus progress at significantly slower time scales. Most phosphorescent compounds are still relatively fast emitters, with triplet lifetimes on the order of milliseconds. However, some compounds have triplet lifetimes up to minutes or even hours, allowing these substances to effectively store light energy in the form of very slowly degrading excited electron states. If the phosphorescent quantum yield is high, these substances will release significant amounts of light over long time scales, creating so-called "glow-in-the-dark" materials.
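In schematic form, the sequence of states involved can be summarized by the standard scheme for phosphorescence (absorption, intersystem crossing, then delayed emission):
S0 + hν → S1 → T1 → S0 + hν′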
where S is a singlet and T a triplet whose subscripts denote states (0 is the ground state, and 1 the excited state). Transitions can also occur to higher energy levels, but the first excited state is denoted for simplicity.
Some examples of "glow-in-the-dark" materials do not glow by phosphorescence. For example, "glow sticks" glow due to a chemiluminescent process which is commonly mistaken for phosphorescence. In chemiluminescence, an excited state is created via a chemical reaction. The light emission tracks the kinetic progress of the underlying chemical reaction. The excited state will then transfer to a "dye" molecule, also known as a sensitizer or fluorophor, and subsequently fluoresce back to the ground state.
Common pigments used in phosphorescent materials include zinc sulfide and strontium aluminate. Use of zinc sulfide for safety related products dates back to the 1930s. However, the development of strontium aluminate, with a luminance approximately 10 times greater than zinc sulfide, has relegated most zinc sulfide based products to the novelty category. Strontium aluminate based pigments are now used in exit signs, pathway marking, and other safety related signage.
References
- Karl A. Franz, Wolfgang G. Kehr, Alfred Siggel, Jürgen Wieczoreck, and Waldemar Adam "Luminescent Materials" in Ullmann's Encyclopedia of Industrial Chemistry 2002, Wiley-VCH, Weinheim. doi:10.1002/14356007.a15_519
- Zitoun, D.; Bernaud, L.; Manteghetti, A. Microwave Synthesis of a Long-Lasting Phosphor. J. Chem. Ed. 2009, 86, 72-75.doi:10.1021/ed086p72
| 3.687243 |
The Pueblo people are a Native American people in the Southwestern United States. Their traditional economy is based on agriculture and trade. When first encountered by the Spanish in the 16th century, they were living in villages that the Spanish called pueblos, meaning "towns". Of the 21 Pueblos that exist today, Taos, Acoma, Zuni, and Hopi are the best-known. The main Pueblos are located primarily in New Mexico and Arizona.
Numerous subdivisions of the Pueblo people have been proposed in the literature. Kirchhoff (1954) divided them into two subareas. The group that includes the Hopi, Zuñi, Keres, and Jemez shares exogamous matrilineal clans, has multiple kivas, believes in the emergence of people from underground, recognizes four or six directions beginning in the north, and uses four and seven as ritual numbers. This group stands in contrast to the Tanoan-speaking Pueblos (other than Jemez), who have nonexogamous patrilineal clans, two kivas or two groups of kivas, a general belief in dualism, a belief in the emergence of people from underwater, five directions beginning in the west, and ritual numbers based on multiples of three.
Eggan (1950), in contrast, posed a dichotomy between Eastern and Western Pueblos based largely on subsistence differences, with the Western or Desert Pueblos of Zuñi and Hopi being dry farmers and the Eastern or River Pueblos being irrigation farmers. They mostly grew maize (corn).
Linguistic differences between the Pueblos point to their diverse origins. The Hopi language is Uto-Aztecan; Zuñi is a language isolate; Keresan is a dialect continuum that includes Acoma, Laguna, Santa Ana, Zia, Cochiti, Santo Domingo, and San Felipe. Tanoan is an areal grouping of three branches consisting of six languages: Towa (Jemez); Tewa (San Juan, San Ildefonso, Santa Clara, Tesuque, Nambe, Pojoaque, and Hano); and the three Tiwa languages, Taos, Picuris, and Southern Tiwa (Sandia, Isleta).
The Pueblos are believed to be descended from the three major cultures that dominated the region before European contact:
- Mogollon, an area near the Gila Wilderness
- Hohokam, archaeological term for a settlement in the Southwest
- Ancient Pueblo Peoples (or the Anasazi, a term coined by the Navajos).
Despite forced conversions to Catholicism by the Spanish (as evidenced by the establishment of a mission at each surviving pueblo), the Pueblo tribes have been able to maintain much of their traditional lifestyle. There are now some 35,000 Pueblo Indians, living mostly in New Mexico and Arizona along the Rio Grande and Colorado River.
These peoples were the first to successfully revolt against the Spanish in the Pueblo Revolt of 1680, which expelled the Spanish for 12 years. The code for the action was a knotted rope sent by a runner to each pueblo; the number of knots signified the number of days to wait before beginning the uprising. It began one day early, on August 10, 1680; by August 21, Santa Fe fell to 2,500 warriors. On September 22, 2005, the statue of Po'pay (Popé), the leader of the Pueblo Revolt, was unveiled in the Capitol Rotunda in Washington, D.C. The statue was the second from the state of New Mexico and the 100th and last to be added to the Statuary Hall collection. It was created by Cliff Fragua, a Puebloan from Jemez Pueblo, and it is the only statue in the collection created by a Native American.
Josiah Gregg describes the Pueblo people in Commerce of the Prairies: or, The journal of a Santa Fé trader, 1831–1839 as follows:
When these regions were first discovered it appears that the inhabitants lived in comfortable houses and cultivated the soil, as they have continued to do up to the present time. Indeed, they are now considered the best horticulturists in the country, furnishing most of the fruits and a large portion of the vegetable supplies that are to be found in the markets. They were until very lately the only people in New Mexico who cultivated the grape. They also maintain at the present time considerable herds of cattle, horses, etc. They are, in short, a remarkably sober and industrious race, conspicuous for morality and honesty, and very little given to quarrelling or dissipation...
Most of the Pueblos have annual ceremonies that are open to the public. One such ceremony is the Pueblo's feast day, held on the day sacred to its Roman Catholic patron saint. (These saints were assigned by the Spanish missionaries so that each Pueblo's feast day would coincide with a traditional ceremony.) Some Pueblos also have ceremonies around the Christmas and at other times of the year. The ceremonies usually feature traditional dances outdoors accompanied by singing and drumming, interspersed with non-public ceremonies in the kivas. They may also include a Roman Catholic Mass and processions.
Formerly, all outside visitors to a public dance would be offered a meal in a Pueblo home, but because of the large number of visitors, such meals are now by personal invitation only.
Pueblo prayer included substances as well as words; one common prayer material was ground-up maize—white cornmeal. Thus a man might bless his son, or some land, or the town by sprinkling a handful of meal as he uttered a blessing. Once, after the 1692 re-conquest, the Spanish were prevented from entering a town when they were met by a handful of men who uttered imprecations and cast a single pinch of a sacred substance.
The Puebloans employed prayer sticks, which were colorfully decorated with beads, fur, and feathers; these prayer sticks (or talking sticks) were also used by other nations.
By the 13th century, Puebloans used turkey feather blankets for warmth. Cloth and weaving were known to the Puebloans before the conquest, but it is not known whether they knew of weaving before or after the Aztecs. But since clothing was expensive, they did not always dress completely until after the conquest, and breechcloths were not uncommon.
Corn was a staple food for the Pueblo people, who were "dry farmers": because water was scarce in New Mexico, they farmed using as little water as possible, which restricted what they could grow, mainly many types of corn, beans and squash. They used pottery to hold their food and water. (See also: Agriculture in the prehistoric Southwest)
The most highly developed Native communities of the Southwest were large villages or pueblos at the top of the mesas, rocky tablelands typical of the region. The archetypal deities appear as visionary beings who bring blessings and receive love. A vast collection of myths defines the relationships between man, nature, plants and animals. Man depended on the blessings of children, who in turn depended on prayers and the goddess of Himura. Children led the religious ceremonies to create a more pure and holy ritual.
List of Pueblos
New Mexico
- Acoma Pueblo — Keres language speakers. One of the oldest continuously inhabited villages in the US. Access to mesa-top pueblo by guided tour only (available from visitors' center), except on Sept 2nd (feast day). Photography by $10 permit per camera only. Photographing of Acoma people allowed only with individual permission. No photography permitted in Mission San Esteban del Rey or of cemetery. Sketching prohibited. Video recording strictly prohibited. Video devices will be publicly destroyed if used.
- Cochiti Pueblo — Keres speakers.
- Isleta Pueblo — Tiwa language speakers. Established in the 14th century. Both Isleta and Ysleta were of Shoshonean stock. Isleta was originally called Shiewhibak.
- Jemez Pueblo — Towa language speakers. Photography and sketching prohibited at pueblo, but welcomed at Red Rocks.
- Kewa Pueblo (formerly Santo Domingo Pueblo) — Keres speakers. Known for turquoise work and the Corn Dance.
- Laguna Pueblo — Keres speakers. Ancestors 3000 BC, established before the 14th century. Church July 4, 1699. Photography and sketching prohibited on the land, but welcomed at San Jose Mission Church.
- Nambe Pueblo — Tewa language speakers. Established in the 14th century. Was an important trading center for the Northern Pueblos. Nambe is the original Tewa name, and means "People of the Round Earth". Feast Day of St. Francis October 4.
- Ohkay Owingeh Pueblo — Tewa speakers. Originally named O'ke Oweenge in Tewa. Headquarters of the Eight Northern Indian Pueblos Council. Home of the Popé, one of the leaders of the August 1680 Pueblo Revolt. Known as San Juan Pueblo until November 2005.
- Picuris Pueblo, Peñasco, New Mexico — Tiwa speakers.
- Pojoaque Pueblo, Santa Fe, New Mexico — Tewa speakers. Re-established in the 1930s.
- Sandia Pueblo, Bernalillo, New Mexico — Tiwa speakers. Originally named Nafiat. Established in the 14th century. On the northern outskirts of Albuquerque.
- San Felipe Pueblo — Keres speakers. 1706. Photography and sketching prohibited at pueblo.
- San Ildefonso Pueblo, between Pojoaque and Los Alamos — Tewa speakers. Originally at Mesa Verde and Bandelier. The valuable black-on-black pottery was made famous here by Maria and Julian Martinez. Photography by $10 permit only. Sketching prohibited at pueblo. Heavily visited destination.
- Santa Ana Pueblo — Keres speakers. Photography and sketching prohibited at pueblo.
- Santa Clara Pueblo, Española, New Mexico — Tewa speakers. 1550. Originally inhabited Puyé Cliff Dwellings on Santa Clara Canyon. The valuable black-on-black pottery was developed here.
- Taos Pueblo — Tiwa speakers. World Heritage Site. National Historic Landmark.
- Tesuque Pueblo, Santa Fe — Tewa speakers. Originally named Te Tesugeh Oweengeh, 1200. National Register of Historic Places. Pueblo closed to public. Camel Rock Casino and Camel Rock Suites, as well as the actual Camel Rock, are open.
- Zia Pueblo — Keres speakers. New Mexico's state flag uses the Zia symbol.
- Zuni Pueblo — Zuni language speakers. First visited 1540 by Spanish. Mission 1629
- Hopi Tribe, Kykotsmovi, Arizona — Hopi language speakers. Area of present villages settled around 700 AD.
- Ysleta del Sur Pueblo, El Paso, Texas —originally Tigua (Tiwa) speakers. Also spelled 'Isleta del Sur Pueblo'. This Pueblo was established in 1680 as a result of the Pueblo Revolt. Some 400 members of Isleta, Socorro and neighboring Pueblos were forced or accompanied the Spaniards to El Paso as they fled Northern New Mexico. Three missions (Ysleta, Socorro, and San Elizario) were established on the Camino Real to Santa Fe. The San Elizario mission was administrative (that is, non Puebloan).
- Some of the Piru Puebloans settled in Senecu, and then in Socorro, Texas, adjacent to Ysleta, Texas (which is now within El Paso city limits). When the Rio Grande would flood the valley or change course, these missions would lie variously on the north or south sides of the river. Although Socorro and San Elizario are still separate communities, Ysleta has been annexed into El Paso.
Feast days
- San Felipe Pueblo Feast Day: May 1
- Ohkay Owingeh Pueblo Feast Day: June 24
- Sandia Pueblo Feast Day: June 13.
- Ysleta / Isleta del Sur Pueblo Feast Day: June 13.
- Cochiti Pueblo Feast Day: July 14
- San Felipe Pueblo Feast Day: July 25
- Santa Ana Pueblo Feast Day: July 26
- Picuris Pueblo Feast Day: August 10
- Jemez Pueblo Feast Day: August 2
- Santo Domingo Pueblo Feast Day: August 4
- Santa Clara Pueblo Feast Day : August 12
- Zia Pueblo Feast Day: August 15
- Acoma Pueblo Feast Day of San Esteban del Rey: September 2
- Laguna Pueblo Feast Day: September 19
- Taos Pueblo Feast Day: September 30
- Nambe Pueblo Feast Day of St. Francis: October 4
- Pojoaque Pueblo Feast Day: December 12, January 6
- Isleta Pueblo Feast Days
There is a long history of pottery-making among the various Pueblo communities. Mera, in his discussion of the "Rain Bird" motif, a common and popular design element in Pueblo pottery, states that, "In tracing the ancestry of the "Rain Bird" design it will be necessary to go back to the very beginnings of decorated pottery in the Southwest to a ceramic type which as reckoned by present day archaeologists came into existence some time during the early centuries of the Christian era."
Bird effigy, pottery, Cochiti Pueblo. Field Museum
Pottery Bowl, Jemez Pueblo, Field Museum, Chicago
Ancestral Hopi bowl, ca. 1300 AD
See also
- Arizona Tewa
- Carol Jean Vigil
- Tanoan languages
- Navajo people
- Pueblo Revolt
- Tewa people
- Keresan languages
- Zuni people
- On June 2, 1924 these peoples were granted US citizenship. In 1948, they were granted the right to vote in New Mexico.
- Paul Kirchhoff, "Gatherers and Farmers in the Greater Southwest: A Problem in Classification", American Anthropologist, New Series, Vol. 56, No. 4, Southwest Issue (Aug., 1954), pp. 529-550
- Fred Russell Eggan, Social Organization of the Western Pueblos, University of Chicago Press, 1950
- Cordell, Linda S. Ancient Pueblo Peoples. St. Remy Press and Smithsonian Institution, 1994. ISBN 0-89599-038-5.
- Paul Horgan (1954), Great River vol. 1 p. 286. Library of Congress card number 54-9867
- Gregg, J. 1844. Commerce of the Prairies. New York: Henry G. Langley, Chpt.14, The Pueblos, p.55
- Paul Horgan, Great River p. 158
- Turkeys domesticated not once, but twice
- "Elk-Foot of the Taos Tribe by Eanger Irving Couse". Smithsonian American Art Museum and the Renwick Gallery. Retrieved 2012-08-10.
- "Isleta Pueblo". Catholic Encyclopedia (1910) VIII
- Mera, H.P., Pueblo Designs: 176 Illustrations of the "Rain Bird, Dover Publications, Inc, 1970, first published by the Laboratory of Anthropology, Santa Fe, New Mexico, 1937 p. 1
- Fletcher, Richard A. (1984). Saint James' Catapult: The Life and Times of Diego Gelmírez of Santiago de Compostela. Oxford University Press. (on-line text, ch. 1)
- Florence Hawley Ellis An Outline of Laguna Pueblo History and Social Organization Southwestern Journal of Anthropology, Vol. 15, No. 4 (Winter, 1959), pp. 325–347
- Indian Pueblo Cultural Center in Albuquerque, NM offers information from the Pueblo people about their history, culture, and visitor etiquette.
- Paul Horgan, Great River: The Rio Grande in North American History. Vol. 1, Indians and Spain. Vol. 2, Mexico and the United States. 2 Vols. in 1, 1038 pages - Wesleyan University Press 1991, 4th Reprint, ISBN 0-8195-6251-3
- Pueblo People, Ancient Traditions Modern Lives, Marica Keegan, Clear Light Publishers, Santa Fe, New Mexico, 1998, profusely illustrated hardback, ISBN 1-57416-000-1
- Elsie Clews Parsons, Pueblo Indian Religion (2 vols., Chicago, 1939).
- A. L. Kroeber, "Elsie Clews Parsons", American Anthropologist, New Series, Vol. 45, No. 2, Centenary of the American Ethnological Society (Apr.–Jun., 1943), pp. 244–255
- Ortiz, Alfonso, ed. Handbook of North American Indians, Vol. 9, Southwest. Washington: Smithsonian Institution, 1976.
- Julia M. Keleher and Elsie Ruth Chant (2009). THE PADRE OF ISLETA The Story of Father Anton Docher. Sunstone press Publishing.
- Kukadze'eta Towncrier, Pueblo of Laguna
- Pueblo of Isleta
- Pueblo of Laguna
- Pueblo of Sandia
- Pueblo of Santa Ana
- The SMU-in-Taos Research Publications digital collection contains nine anthropological and archaeological monographs and edited volumes representing the past several decades of research at the SMU-in-Taos (Fort Burgwin) campus near Taos, New Mexico, including Papers on Taos archaeology and Taos archeology
Rotten and pocket boroughs
A rotten, decayed, or pocket borough was a parliamentary borough or constituency in the United Kingdom that had a very small electorate and could be used by a patron to gain undue and unrepresentative influence within the Unreformed House of Commons.
A rotten borough was an election borough with a very small population, often small enough that voters could be personally bribed. These boroughs had often been assigned representation when they were large cities, but the borough boundaries were never updated as the town's population declined. For example, in the 12th century Old Sarum had been a busy cathedral city, but it was abandoned when Salisbury was founded nearby; despite this, Old Sarum retained its two members. Many such rotten boroughs were controlled by peers who gave the seats to their sons, other relations or friends; they had additional influence in Parliament because they held seats themselves in the House of Lords.
Pocket boroughs were boroughs that could effectively be controlled by a single person who owned most of the land in the borough. As there was no secret ballot at the time, the landowner could evict residents who did not vote for the person he wanted.
By the 19th century there were moves toward reform, and this political movement was eventually successful, culminating in the Reform Act 1832, which disfranchised 57 rotten boroughs and redistributed representation in Parliament to new major population centres. The Ballot Act of 1872 introduced the secret ballot, making vote bribery impractical, since there was no longer any way of knowing for certain how an individual had voted.
Historical background
A "borough" was a town that possessed a Royal charter giving it the right to elect two members (known as burgesses) to the House of Commons. It was unusual for such a borough to change its boundaries as the town or city it was based on expanded, so that in time the borough and the town were no longer identical in area. The true rotten borough was a borough with a very small electorate.
Typically, rotten boroughs had gained representation in parliament when they were flourishing centres with a substantial population, but had become depopulated or even deserted over the centuries. Some had once been important places or had played a major role in England's history, but had fallen into insignificance.
For centuries, constituencies electing members to the House of Commons did not change to reflect population shifts, and in some places the number of electors became so few that they could be bribed. A member of Parliament for one borough might represent only a few people, whereas some large population centres were poorly represented. Manchester, for example, was part of the larger constituency of Lancashire and did not elect members separately until 1832. Examples of rotten boroughs include the following:
- Old Sarum in Wiltshire had 3 houses and 7 voters
- East Looe in Cornwall had 167 houses and 38 voters
- Dunwich in Suffolk had 44 houses and 32 voters (most of this formerly prosperous town having fallen into the sea)
- Plympton Erle in Devon had 182 houses and 40 voters
- Gatton in Surrey had 23 houses and 7 voters
- Newtown on the Isle of Wight had 14 houses and 23 voters
- Bramber in West Sussex had 35 houses and 20 voters
- Callington in Cornwall had 225 houses and 42 voters
Each of these boroughs could elect two members of the Commons. By the time of the 1831 general election, out of 406 elected members, 152 were chosen by fewer than 100 voters, and 88 by fewer than 50 voters each.
Many such rotten boroughs were controlled by peers who gave the seats to their sons, other relations, or friends, thus having influence in the House of Commons while also holding seats themselves in the House of Lords. Prior to being awarded a peerage, Arthur Wellesley, the Duke of Wellington, served in the Irish House of Commons as a Member for the rotten borough of Trim in County Meath. A common expression referring to such a situation was that "Mr. So-and-so had been elected on Lord This-and-that's interest".
There were also boroughs that were dependent not on a particular patron but on the Treasury or the Admiralty, and thus returned the candidates nominated by the ministers in charge of those departments.
Such boroughs existed for centuries. The term rotten borough only came into usage in the 18th century, the qualification "rotten" suggesting both "corrupt" and "in decline for a very long time".
In the 19th century, there were moves toward reform, which broadly meant ending the over-representation of boroughs with few electors. This political movement had a major success in the Reform Act 1832, which disfranchised 57 rotten boroughs and redistributed representation in Parliament to new major population centres and to places with significant industries.
The Ballot Act of 1872 introduced the secret ballot, which greatly hindered patrons from controlling elections by preventing them from knowing how an elector had voted. At the same time, the practice of paying or entertaining voters ("treating") was outlawed, and election expenses fell dramatically.
Pocket boroughs
A closely related term for an undemocratic constituency is pocket borough – a constituency with a small enough electorate to be under the effective control (or in the pocket) of one major landowner.
In some boroughs, while not "rotten", parliamentary representation was in the control of one or more "patrons" who, by owning burgage tenements, had the power to decide elections, as their tenants had to vote publicly and dared not defy their landlords. Such patronage flourished before the mid-19th century, chiefly because there was no secret ballot. Some rich individuals controlled several boroughs; the Duke of Newcastle is said to have had seven boroughs "in his pocket". The representative of a pocket borough was often the landowner himself, and for this reason such seats were also referred to as proprietorial boroughs.
Pocket boroughs were seen by their 19th century owners as a valuable method of ensuring the representation of the landed interest in the House of Commons.
Pocket boroughs were finally abolished by the Reform Act of 1867. This considerably extended the borough franchise and established the principle that each parliamentary constituency should hold roughly the same number of electors. A Boundary Commission was set up by subsequent Acts of Parliament to maintain this principle as population movements continued.
Contemporary defences
Rotten boroughs were defended by the successive Tory governments of 1807-1830 – a substantial number of Tory constituencies lay in rotten and pocket boroughs. During this period they came under criticism from prominent figures such as Tom Paine and William Cobbett.
It was argued at the time that rotten boroughs provided stability and offered a route into Parliament for promising young politicians, with William Pitt the Elder cited as a key example. Members of Parliament (MPs), who were generally in favour of the boroughs, claimed they should be kept because Britain had undergone periods of prosperity under the system.
Because British colonists in the West Indies and on the Indian subcontinent had no official representation at Westminster, these groups often claimed that rotten boroughs provided opportunities for virtual representation of colonial interests in Parliament.
Modern usage
The magazine Private Eye has a column entitled 'Rotten Boroughs', which lists stories of municipal wrongdoing; borough is used here in its usual sense of a local district rather than a parliamentary constituency.
In his book The Age of Consent, George Monbiot compared small island states with one vote in the U.N. General Assembly to "rotten boroughs".
In the satirical novel Melincourt, or Sir Oran Haut-Ton (1817) by Thomas Love Peacock, an orang-utan named Sir Oran Haut-ton is elected to parliament by the "ancient and honourable borough of Onevote". The election of Sir Oran forms part of the hero's plan to persuade civilisation to share his belief that orang-utans are a race of human beings who merely lack the power of speech. "The borough of Onevote stood in the middle of a heath, and consisted of a solitary farm, of which the land was so poor and intractable, that it would not have been worth the while of any human being to cultivate it, had not the Duke of Rottenburgh found it very well worth his while to pay his tenant for living there, to keep the honourable borough in existence." The single voter of the borough is Mr Christopher Corporate, who elects two MPs, each of whom "can only be considered as the representative of half of him".
In Chapter 7 of the novel Vanity Fair, author William Makepeace Thackeray introduces the fictitious borough of "Queen's Crawley," so named in honor of a stopover in the small Hampshire town of Crawley by Queen Elizabeth I, who, being delighted by the quality of the local beer, instantly raised the small town of Crawley into a borough, giving it two members in Parliament. At the time of the story, in the early 19th century, the place had lost population, so that it was "come down to that condition of borough which used to be denominated rotten."
In Diana Wynne Jones' 2003 book "The Merlin Conspiracy" Old Sarum features as a character, with one line being "I'm a rotten borough, I am."
In the Aubrey–Maturin series of seafaring tales, the pocket borough of Milport (also known as Milford) is initially held by General Aubrey, the father of protagonist Jack Aubrey. In the twelfth novel in the series, The Letter of Marque, Jack's father dies and the seat is offered to Jack himself by his cousin Edward Norton, the "owner" of the borough. The borough has just seventeen electors, all of whom are tenants of Mr Norton.
In the first novel of George MacDonald Fraser's Flashman series, the eponymous antihero, Harry Flashman, mentions that his father, Sir Buckley Flashman, had been in Parliament, but "they did for him at Reform," implying that the elder Flashman's seat was in a rotten or pocket borough.
In the episode Dish and Dishonesty of the BBC television comedy Blackadder the Third, Edmund Blackadder attempts to bolster the support of the Prince Regent in Parliament by getting the incompetent Baldrick elected to the rotten borough of Dunny-on-the-Wold. This was easily accomplished with a result of 16,472 to nil, even though the constituency had only one voter (Blackadder himself).
In the video game Assassin's Creed III, pocket and rotten boroughs are briefly mentioned in a database entry entitled "Pocket Boroughs", and Old Sarum is cited as one of the worst examples of a pocket borough. In the game, shortly before the Boston Massacre, an NPC can be heard speaking to a group of people about the colonies' lack of representation in Parliament, listing several rotten boroughs including Old Sarum.
- "[Borough representation is] the rotten part of the constitution." — William Pitt the Elder
- "The county of Yorkshire, which contains near a million souls, sends two county members; and so does the county of Rutland which contains not a hundredth part of that number. The town of Old Sarum, which contains not three houses, sends two members; and the town of Manchester, which contains upwards of sixty thousand souls, is not admitted to send any. Is there any principle in these things?" Tom Paine, from Rights of Man, 1791
- From H.M.S. Pinafore by Gilbert and Sullivan:
- Sir Joseph Porter: I grew so rich that I was sent
- By a pocket borough into Parliament.
- I always voted at my party's call,
- And I never thought of thinking for myself at all.
- Chorus: And he never thought of thinking for himself at all.
- Sir Joseph: I thought so little, they rewarded me
- By making me the Ruler of the Queen's Navee!
- From Iolanthe by Gilbert and Sullivan:
- Fairy Queen: Let me see. I've a borough or two at my disposal. Would you like to go into Parliament?
- From The Letter of Marque by Patrick O'Brian
- 'Could you not spend an afternoon at Milport, to meet the electors? There are not many of them, and those few are all my tenants, so it is no more than a formality; but there is a certain decency to be kept up. The writ will be issued very soon.'
- The Borough of Queen's Crawley in Thackeray's Vanity Fair is a rotten borough eliminated by the Reform Act of 1832:
- When Colonel Dobbin quitted the service, which he did immediately after his marriage, he rented a pretty country place in Hampshire, not far from Queen's Crawley, where, after the passing of the Reform Bill, Sir Pitt and his family constantly resided now. All idea of a peerage was out of the question, the baronet's two seats in Parliament being lost. He was both out of pocket and out of spirits by that catastrophe, failed in his health, and prophesied the speedy ruin of the Empire.
See also
- Apportionment (politics)
- Reynolds v. Sims, a US Supreme Court case that ended a similar practice in the United States
- The people's book; comprising their chartered rights and practical wrongs [by W. Carpenter] at Google Books
- See Lewis Namier, The Structure of Politics at the Accession of George III
- Pearce, Robert and Stearn, Roger (2000). Access to History, Government and Reform: Britain 1815-1918 (Second Edition), page 14. Hodder & Stoughton.
- Pearce, Robert and Stearn, Roger (2000). Access to History, Government and Reform: Britain 1815-1918 (Second Edition). Hodder & Stoughton.
- Pearce, Robert and Stearn, Roger (2000). Access to History, Government and Reform: Britain 1815-1918 (Second Edition), page 22. Hodder & Stoughton.
- Taylor, M (2003). "Empire and Parliamentary Reform: The 1832 Reform Act Revisited." In Rethinking the Age of Reform: Britain 1780-1850, edited by A. Burns and J. Innes, 295-312. Cambridge University Press.
- Evans, Eric J. (1990). Liberal Democracies, page 104. Joint Matriculation Board.
- "Black Adder - Episode Guide: Dish and Dishonesty". BBC. Retrieved 2010-05-02.
Further reading
- Spielvogel, Western Civilization — Volume II: Since 1500 (2003) p. 493
- Lewis Namier, The Structure of Politics at the Accession of George III, 1929.
Semi-acoustic guitar
A semi-acoustic guitar or hollow-body electric is a type of electric guitar that originates from the 1930s. It has both a sound box and one or more electric pickups. This is not the same as an acoustic-electric guitar, which is an acoustic guitar with the addition of pickups or other means of amplification, either added by the manufacturer or the player.
In the 1930s, guitar players and manufacturers were attempting to increase the overall volume of the guitar, which had a hard time competing with other instruments, particularly in large orchestras and jazz bands. This prompted a series of experiments focused on creating a guitar that could be amplified through electric currents and played out through a speaker. In 1936, Gibson made its first production line of electric guitars. These guitars, known as ES-150s (Electric Spanish series), were the first manufactured semi-acoustic guitars.
They were based on a standard production archtop and had f-holes on the face of the guitar, which functioned as a sound box. The model was designed to resemble the traditional jazz-box guitars that were popular at the time. The sound box allowed a limited amount of sound to be emitted from the hollow body of the guitar, as was the case for all fully acoustic models before it. The purpose of these guitars, however, was to be amplified electrically. This was made possible by the Charlie Christian pickup, a magnetic single-coil pickup, which allowed the sound of the guitar to be amplified through electric currents. The clear sound produced by the pickups made the ES series immediately popular with jazz musicians. The first semi-acoustic guitars are often thought of as an evolutionary step in the progression from acoustic guitars to fully electric models.
However, the ES-150 was made several years after the first solid-body electric guitar, which was made by Rickenbacker. The ES series was an experiment by the Gibson company to test the potential success of electric guitars. The experiment proved to be a successful financial venture, and the ES-150 is often referred to as the first successful electric guitar. It was followed by the ES-250 a year later, in what became a long line of semi-acoustics for the Gibson company.
In 1949, Gibson released two new models (the ES-175 and ES-5). These guitars had built-in electric pickups as a standard part of their design and can largely be considered the first fully electric semi-acoustic guitars. Prior models were not built with pickups; rather, the pickups came as attachments. As the production and popularity of solid-body electric guitars increased, there was still a market of guitar players who wanted the traditional look associated with the semi-acoustic guitars of the 1930s but also wanted the versatility and comfort of the new solid-body guitars. Several models, including Gibson's ES-350T, were made in the 1950s to accommodate this growing demand by offering a more comfortable version of the archtop design.
These variations were followed by an entirely new type of guitar that featured a block of solid wood between the front and back of the guitar's body. This guitar was still acoustic but had a smaller open cavity inside, so that fewer sound waves were emitted from the f-hole sound boxes. The variant was first manufactured in 1958 by Gibson and is commonly referred to as a semi-hollow body guitar because of its smaller hollow body. Rickenbacker also chose to pursue semi-acoustic guitars in 1958. When the company changed ownership in 1954, it had hired the German guitar crafter Roger Rossmiesl. He developed the 330 series for Rickenbacker, a wide semi-acoustic that did not use a traditional f-hole; instead it used a sleeker dash-shaped sound hole on one side of the guitar, with a large pickguard on the other. The model boasted a modern design with a unique Fireglo finish. It quickly became one of Rickenbacker's most popular series and a strong competitor to Gibson's models.
In addition to the main model variants, Gibson made several smaller changes, including a laminated top for the ES-175 and mounted top pickups for general use on all their models, as opposed to the Charlie Christian pickups of the 1930s. While Gibson provided many of the innovations in semi-acoustic guitars from the 1930s to the 1950s, other companies also produced various models, including a hollow archtop by Gretsch. The Gretsch 6120 became very popular as a rockabilly model despite having almost no technical differences from Gibson models. Rickenbacker was also a prominent maker of the semi-hollow body guitar. Gibson, Gretsch, Rickenbacker, and other companies still make semi-acoustic and semi-hollow body guitars, making slight variations on their yearly designs.
The semi-acoustic and semi-hollow body guitars were generally praised for their clean and warm tones. This led to widespread use throughout the jazz community in the 1930s. As new models came out with sleeker designs, the guitars began to make their way into popular circles and were used in pop, folk, and blues. The guitars produced a considerable amount of feedback when played through an amplifier at high volume, which made them unpopular with bands playing on large stages, who had to play loud enough to fill their venues. As rock became more experimental towards the late 1960s and 1970s, the guitars grew in popularity again precisely because their feedback lent itself to "wilder" sounds.
Today, semi-acoustic and semi-hollow body guitars are still popular in jazz, indie rock, and various other genres. Famous guitarists who have used semi-acoustic guitars include John Lennon of The Beatles and B.B. King. Semi-acoustic guitars have also been valued as good practice guitars because, when played "unplugged", they are quieter than full acoustic guitars but more audible than solid-body electric guitars because of their open cavity. This makes the guitar particularly useful when volume is an issue.
Some semi-acoustic models have a fully hollow body (for instance the Gibson ES-175 and Epiphone Casino), while others have a solid center block running the length and depth of the body and are called semi-hollow body guitars (for instance the Gibson ES-335).
Other guitars are borderline between semi-acoustic and solid body. For example, some Telecaster guitars have chambers built in to an otherwise solid body to enrich the sound. This type of instrument can be referred to as a semi-hollow or a chambered body guitar. Exactly where the line is to be drawn between a constructed sound box and a solid wooden body, whose construction also affects the sound according to many players, is not generally agreed. Any of the following can be called semi-acoustic:
- Instruments starting from a solid body "blank" which has been routed out to make a chambered body guitar.
- Instruments with semi-hollow bodies constructed from plates of wood around a solid core, having no soundholes, such as the Gibson Lucille or Brian May Red Special.
- Instruments with a solid core but hollow bouts and soundholes (usually f-holes), such as the Gibson ES-335. In these, the bridge is fixed to a solid block of wood rather than to a sounding board, and the belly vibration is minimised much as in a solid body instrument.
- Thin-bodied archtop guitars, such as the Epiphone Casino. These possess both a sounding board and sound box, but the function of these is purely to modify the sound transmitted to the pickups. Such guitars are still intended purely as electric instruments, and while they do make some sound when the pickups are not used, the tone is weak and not normally considered musically useful.
- Full hollowbody semi-acoustic instruments, often called Jazz guitars, such as the Gibson ES-175; these have a full-size sound box, but are still intended to be played through an amplifier.
The Rickenbacker 330 JG
Some companies that have produced famous semi-acoustic guitars include: Gibson, Gretsch and Rickenbacker. A variety of manufacturers now produce semi-acoustic model guitars: D'Angelico, Epiphone, Ibanez, etc.
- Fully hollow body
- Thinline hollow body (thin body)
- Semi hollow body (with center block)
- Other semi hollow (solid-body with cavities)
- various types
- Archtop guitars. Guitars with a fully hollow or semi-hollow body, with or without pickups.
- Electro-acoustic guitars. Fully acoustic guitars with piezo pickups.
- Hybrid guitars. Guitars with both magnetic and piezo pickups. Can be solid, semi-hollow or hollow bodied.
- Silent guitars. Solid body guitars with piezo pickups.
- Ingram, Adrian, A Concise History of the Electric Guitar, Melbay, 2001.
- Hunter, Dave, The Rough Guide to Guitar, Penguin Books, 2011.
- Miller, A.J., The Electric Guitar: A History of an American Icon, Baltimore, MD, Smithsonian Institute, 2004.
- Martin A. Darryl, Innovation and the Development of the Modern Six-String, The Galpin Society Journal (Vol. 51), 1998.
- Rogers, Dave, 1958 Rickenbacker 330, http://www.premierguitar.com/Magazine/Issue/2009/Aug/1958_Rickenbacker_330.aspx, accessed 11 December 2011.
- Carter, William, The Gibson Guitar Book: Seventy Years of Classic Guitar, New York, NY, Backbeatbooks, 2007.
London and Zurich Agreements
The London and Zurich Agreements for the constitution of Cyprus started with an agreement on 19 February 1959 at Lancaster House in London, between Turkey, Greece, the United Kingdom and Cypriot community leaders (Archbishop Makarios III for the Greek Cypriots and Dr. Fazıl Küçük for the Turkish Cypriots). On that basis, a constitution was drafted and agreed together with two further Treaties of Alliance and Guarantee in Zurich on 11 February 1959.
Cyprus was accordingly proclaimed an independent state on 16 August 1960.
Following the failure of the Agreement in 1963 and the subsequent de facto military partition of Cyprus into Greek-Cypriot and Turkish-Cypriot regions, the larger Greek-Cypriot region, controlled by the Cyprus Government, maintains that the 1960 Constitution basically remains in force, whereas the Turkish-Cypriot region claims to have seceded through the Declaration of Independence of the Turkish Republic of Northern Cyprus in 1983.
Constitutional provisions
The Constitution provided for under the Agreements divided the Cypriot people into two communities on the basis of ethnic origin. The President had to be a Greek-Cypriot elected by the Greek-Cypriots, and the Vice-President a Turkish-Cypriot elected by the Turkish-Cypriots. The Vice-President was granted the right of a final veto on laws passed by the House of Representatives and on decisions of the Council of Ministers which was composed of ten ministers, three of whom had to be Turkish-Cypriots nominated by the Vice-President.
In the House of Representatives, the Turkish Cypriots were elected separately by their own community. The House had no power to modify the basic articles of the Constitution in any respect and any other modification required separate majorities of two thirds of both the Greek Cypriot and the Turkish Cypriot members. Any modification of the Electoral Law and the adoption of any law relating to municipalities or any fiscal laws required separate simple majorities of the Greek Cypriot and Turkish Cypriot members of the House. It was thus impossible for representatives of one community alone to pass a bill.
The highest judicial organs, the Supreme Constitutional Court and the High Court of Justice, were presided over by neutral presidents - neither Greek-Cypriot nor Turkish-Cypriot - who by virtue of their casting votes were supposed to maintain the balance between the Greek and Turkish members of the Courts. Whereas under the previous regime Greek-Cypriot and Turkish-Cypriot judges tried all cases irrespective of the origin of the litigants, the constitution provided that disputes among Turkish Cypriots be tried only by Turkish Cypriot judges, disputes among Greek Cypriots by Greek Cypriot judges only, and disputes between Greek Cypriots and Turkish Cypriots by mixed courts composed of both Greek Cypriot and Turkish Cypriot judges. Thus, to try the case of a petty offence which involved both Greek and Turkish Cypriots, two judges had to sit.
In addition, separate Greek and Turkish Communal Chambers were created with legislative and administrative powers in regard to educational, religious, cultural, sporting and charitable matters, cooperative and credit societies, and questions of personal status. Separate municipalities were envisaged for Greek Cypriots and Turkish Cypriots in the five largest towns of the island. As the population and properties were intermixed, the provisions were difficult and expensive for the small towns of Cyprus.
The United Nations Mediator on Cyprus, Dr. Galo Plaza, described the 1960 Constitution created by the Zürich and London Agreements as "a constitutional oddity", and that difficulties in implementing the treaties signed on the basis of those Agreements had begun almost immediately after independence.
Within three years the functioning of the legislature started to fail, and in 1963, when the fiscal laws under Article 78 of the Constitution expired, the House of Representatives split along straight communal lines and failed to renew the income tax upon which the public finances depended.
In November 1963, the (Greek-Cypriot) President of the Republic, Archbishop Makarios III, suggested amendments to the Constitution "to resolve constitutional deadlocks". The Turkish-Cypriot leadership, following the position of the Turkish government, rejected them as unacceptable. The Vice-President publicly declared that the Republic of Cyprus had ceased to exist, and the Turkish-Cypriot members of the House withdrew, along with the three Turkish-Cypriot Ministers, as did Turkish-Cypriot civil servants. President Makarios refused all suggestions that would have resulted in the partition of Cyprus, and negotiations over the problem have not yet succeeded.
De facto, Cyprus has remained partitioned for over forty years.
Treaty of Guarantee
Together with the Zürich and London Agreements, two other treaties were also agreed upon in Zurich.
The Treaty of Guarantee was designed to preserve the bi-communal consociational structure and the independence of the Republic of Cyprus. Cyprus and the guarantor powers (the United Kingdom, Turkey and Greece) promised to prohibit the promotion of "either the union of the Republic of Cyprus with any other State, or the partition of the Island".
Article Three of the Treaty of Guarantee provides: "In so far as common or concerted action may prove impossible, each of the three guaranteeing Powers reserves the right to take action with the sole aim of re-establishing the state of affairs [i.e. the bi-communal consociational state] established by the present Treaty."
In July 1974, there was briefly a Greek-backed coup d'état in Cyprus. Turkey invoked the Treaty of Guarantee as justification for military intervention. The legality of the invasion depends on whether common or concerted action between the United Kingdom, Greece and Turkey had proved impossible, and whether the outcome of the invasion safeguarded the bi-communal consociational structure, independence, sovereignty and territorial integrity of the Republic of Cyprus. In 1983, Turkish Cypriots issued the Declaration of Independence of the Turkish Republic of Northern Cyprus, which has been recognized by Turkey only. The United Nations declared the Turkish Republic of Northern Cyprus legally invalid and called for the declaration's withdrawal, and the UN Security Council has issued multiple resolutions calling on all states to refrain from recognizing it.
- para. 163 of Report to the U.N. Secretary-General in March 1965
- paragraph 129, ibid.
- Treaty of Guarantee of Republic of Cyprus
Vacuum state
In quantum field theory, the vacuum state (also called the vacuum) is the quantum state with the lowest possible energy. Generally, it contains no physical particles. Zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field.
According to present-day understanding of what is called the vacuum state or the quantum vacuum, it is "by no means a simple empty space", and again: "it is a mistake to think of any physical vacuum as some absolutely empty void." According to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence.
The QED vacuum of quantum electrodynamics (or QED) was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga and Schwinger, who jointly received the Nobel prize for this work in 1965. Today the electromagnetic interactions and the weak interactions are unified in the theory of the electroweak interaction.
The Standard Model is a generalization of the QED work to include all the known elementary particles and their interactions (except gravity). Quantum chromodynamics is the portion of the Standard Model that deals with strong interactions, and QCD vacuum is the vacuum of quantum chromodynamics. It is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions.
Non-zero expectation value
If the quantum field theory can be accurately described through perturbation theory, then the properties of the vacuum are analogous to the properties of the ground state of a quantum mechanical harmonic oscillator (or, more accurately, the ground state of the corresponding quantum-mechanical problem). In this case the vacuum expectation value (VEV) of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies (for example, quantum chromodynamics or the BCS theory of superconductivity), field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass.
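As a minimal illustration of these statements (standard textbook relations quoted here for orientation, not results drawn from the sources cited in this article), the zero-point energy of a single quantum harmonic oscillator of angular frequency ω, the vanishing VEV of a field operator φ in a perturbative theory, and the non-vanishing VEV of the Higgs field in the Standard Model (in one common convention, with v ≈ 246 GeV) can be written as:
E₀ = ½ ħω,  ⟨0|φ(x)|0⟩ = 0,  ⟨0|H|0⟩ = v/√2 ≈ 174 GeV.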
In many situations, the vacuum state can be defined to have zero energy, although the actual situation is considerably more subtle. The vacuum state is associated with a zero-point energy, and this zero-point energy has measurable effects. In the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant. In fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg. An outstanding requirement imposed on a potential Theory of Everything is that the energy of the quantum vacuum state must explain the physically observed cosmological constant.
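A concrete, well-known example of such a measurable effect (a standard result given here for scale, not a figure taken from the cited sources) is the attractive Casimir pressure between two ideal, parallel, perfectly conducting plates separated by a distance d:
F/A = π²ħc / (240 d⁴),
which for d = 1 μm amounts to roughly 1.3 × 10⁻³ Pa.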
For a relativistic field theory, the vacuum is Poincaré invariant, which follows from Wightman axioms. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEV's. The VEV may break some of the internal symmetries of the Lagrangian of the field theory. In this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. See Higgs mechanism, standard model.
Electrical permittivity
In principle, quantum corrections to Maxwell's equations can cause the experimental electrical permittivity ε of the vacuum state to deviate from the defined scalar value ε0 of the electric constant. These theoretical developments are described, for example, in Dittrich and Gies. In particular, the theory of quantum electrodynamics predicts that the QED vacuum should exhibit nonlinear effects that will make it behave like a birefringent material with ε slightly greater than ε0 for extremely strong electric fields. Explanations for dichroism from particle physics, outside quantum electrodynamics, also have been proposed. Active attempts to measure such effects have been unsuccessful so far.
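For scale (again a standard estimate rather than a figure from the cited experiments), the characteristic field strength at which such nonlinear QED effects are expected to become pronounced is the Schwinger critical field, E_c = m_e²c³/(eħ) ≈ 1.3 × 10¹⁸ V/m, far stronger than any static field yet produced in the laboratory, which is one reason these measurements are so difficult.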
The vacuum state is written as |0⟩ or |⟩. The VEV of a field φ, which should be written as ⟨0|φ|0⟩, is usually condensed to ⟨φ⟩.
Virtual particles
The presence of virtual particles can be rigorously based upon the non-commutation of the quantized electromagnetic fields. Non-commutation means that although the average values of the fields vanish in a quantum vacuum, their variances do not. The term "vacuum fluctuations" refers to the variance of the field strength in the minimal energy state, and is described picturesquely as evidence of "virtual particles".
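In symbols (a standard restatement of the point just made, not a quotation from the cited sources): for a single mode of the quantized field in the vacuum state, ⟨0|Ê|0⟩ = 0 while ⟨0|Ê²|0⟩ > 0, so the variance ⟨Ê²⟩ − ⟨Ê⟩² of the field strength is non-zero even though its mean vanishes.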
It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle:
ΔE Δt ≥ ħ/2
(with ΔE and Δt being the energy and time variations respectively; ΔE is the accuracy in the measurement of energy and Δt is the time taken in the measurement, and ħ is the Planck constant divided by 2π), arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.
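The following minimal sketch illustrates this heuristic numerically (it assumes only standard physical constants; the variable names are illustrative, and the "energy borrowing" reading it encodes is discussed critically below):

```python
# Heuristic estimate of the lifetime scale of a "virtual" electron-positron pair
# from Delta_E * Delta_t ~ hbar / 2. This encodes the popular "energy borrowing"
# picture, whose interpretation is debated (see the discussion that follows).

HBAR_MEV_S = 6.582e-22       # reduced Planck constant, in MeV*s
ELECTRON_REST_MEV = 0.511    # electron rest energy, in MeV

delta_E = 2 * ELECTRON_REST_MEV        # energy of an e+ e- pair at rest (~1.022 MeV)
delta_t = HBAR_MEV_S / (2 * delta_E)   # corresponding time scale

print(f"Delta E ~ {delta_E:.3f} MeV")  # 1.022 MeV
print(f"Delta t ~ {delta_t:.1e} s")    # about 3e-22 seconds
```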
Although the phenomenon of virtual particles is accepted, this interpretation of the energy-time uncertainty relation is not universal. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position q and momentum p, for example) do not satisfy a canonical commutation relation (such as [q, p] = i ħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The very many approaches to the energy-time uncertainty principle are a long and continuing subject.
Physical nature of the quantum vacuum
According to Astrid Lambrecht (2002): "When one empties out a space of all matter and lowers the temperature to absolute zero, one produces in a Gedankenexperiment the quantum vacuum state."
Photon-photon interaction can occur only through interaction with the vacuum state of some other field, for example through the Dirac electron-positron vacuum field; this is associated with the concept of vacuum polarization.
According to Milonni (1994): "... all quantum fields have zero-point energies and vacuum fluctuations." This means that there is a component of the quantum vacuum respectively for each component field (considered in the conceptual absence of the other fields), such as the electromagnetic field, the Dirac electron-positron field, and so on.
According to Milonni (1994), some of the effects attributed to the vacuum electromagnetic field can have several physical interpretations, some more conventional than others. The Casimir attraction between uncharged conductive plates is often proposed as an example of an effect of the vacuum electromagnetic field. Schwinger, DeRaad, and Milton (1978) are cited by Milonni (1994) as validly, though unconventionally, explaining the Casimir effect with a model in which "the vacuum is regarded as truly a state with all physical properties equal to zero." In this model, the observed phenomena are explained as the effects of the electron motions on the electromagnetic field, called the source field effect. Milonni writes: "The basic idea here will be that the Casimir force may be derived from the source fields alone even in completely conventional QED, ..." Milonni provides detailed argument that the measurable physical effects usually attributed to the vacuum electromagnetic field cannot be explained by that field alone, but require in addition a contribution from the self-energy of the electrons, or their radiation reaction. He writes: "The radiation reaction and the vacuum fields are two aspects of the same thing when it comes to physical interpretations of various QED processes including the Lamb shift, van der Waals forces, and Casimir effects." This point of view is also stated by Jaffe (2005): "The Casimir force can be calculated without reference to vacuum fluctuations, and like all other observable effects in QED, it vanishes as the fine structure constant, α, goes to zero."
See also
References and notes
- Astrid Lambrecht (Hartmut Figger, Dieter Meschede, Claus Zimmermann Eds.) (2002). Observing mechanical dissipation in the quantum vacuum: an experimental challenge; in Laser physics at the limits. Berlin/New York: Springer. p. 197. ISBN 3-540-42418-0.
- Christopher Ray (1991). Time, space and philosophy. London/New York: Routledge. Chapter 10, p. 205. ISBN 0-415-03221-0.
- AIP Physics News Update,1996
- Physical Review Focus Dec. 1998
- Walter Dittrich & Gies H (2000). Probing the quantum vacuum: perturbative effective action approach. Berlin: Springer. ISBN 3-540-67428-4.
- For an historical discussion, see for example Ari Ben-Menaḥem, ed. (2009). "Quantum electrodynamics (QED)". Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1 (5th ed.). Springer. pp. 4892 ff. ISBN 3-540-68831-5. For the Nobel prize details and the Nobel lectures by these authors see "The Nobel Prize in Physics 1965". Nobelprize.org. Retrieved 2012-02-06.
- Jean Letessier, Johann Rafelski (2002). Hadrons and Quark-Gluon Plasma. Cambridge University Press. p. 37 ff. ISBN 0-521-38536-9.
- Sean Carroll, Sr Research Associate - Physics, California Institute of Technology, June 22, 2006 C-SPAN broadcast of Cosmology at Yearly Kos Science Panel, Part 1
- David Delphenich (2006). "Nonlinear Electrodynamics and QED". arXiv:hep-th/0610088 [hep-th].
- Klein, James J. and B. P. Nigam, Birefringence of the vacuum, Physical Review vol. 135, p. B1279-B1280 (1964).
- Mourou, G. A., T. Tajima, and S. V. Bulanov, Optics in the relativistic regime; § XI Nonlinear QED, Reviews of Modern Physics vol. 78 (no. 2), 309-371 (2006) pdf file.
- Holger Gies; Joerg Jaeckel; Andreas Ringwald (2006). "Polarized Light Propagating in a Magnetic Field as a Probe of Millicharged Fermions". Physical Review Letters 97 (14). arXiv:hep-ph/0607118. Bibcode:2006PhRvL..97n0402G. doi:10.1103/PhysRevLett.97.140402.
- Davis; Joseph Harris; Gammon; Smolyaninov; Kyuman Cho (2007). "Experimental Challenges Involved in Searches for Axion-Like Particles and Nonlinear Quantum Electrodynamic Effects by Sensitive Optical Techniques". arXiv:0704.0748 [hep-th].
- Myron Wyn Evans, Stanisław Kielich (1994). Modern nonlinear optics, Volume 85, Part 3. John Wiley & Sons. p. 462. ISBN 0-471-57548-8. "For all field states that have classical analog the field quadrature variances are also greater than or equal to this commutator."
- David Nikolaevich Klyshko (1988). Photons and nonlinear optics. Taylor & Francis. p. 126. ISBN 2-88124-669-9.
- Milton K. Munitz (1990). Cosmic Understanding: Philosophy and Science of the Universe. Princeton University Press. p. 132. ISBN 0-691-02059-0. "The spontaneous, temporary emergence of particles from vacuum is called a "vacuum fluctuation"."
- For an example, see P. C. W. Davies (1982). The accidental universe. Cambridge University Press. p. 106. ISBN 0-521-28692-1.
- A vaguer description is provided by Jonathan Allday (2002). Quarks, leptons and the big bang (2nd ed.). CRC Press. pp. 224 ff. ISBN 0-7503-0806-0. "The interaction will last for a certain duration Δt. This implies that the amplitude for the total energy involved in the interaction is spread over a range of energies ΔE."
- This "borrowing" idea has led to proposals for using the zero-point energy of vacuum as an infinite reservoir and a variety of "camps" about this interpretation. See, for example, Moray B. King (2001). Quest for zero point energy: engineering principles for 'free energy' inventions. Adventures Unlimited Press. pp. 124 ff. ISBN 0-932813-94-1.
- Quantities satisfying a canonical commutation rule are said to be noncompatible observables, by which is meant that they can both be measured simultaneously only with limited precision. See Kiyosi Itô (1993). "§ 351 (XX.23) C: Canonical commutation relations". Encyclopedic dictionary of mathematics (2nd ed.). MIT Press. p. 1303. ISBN 0-262-59020-4.
- Paul Busch, Marian Grabowski, Pekka J. Lahti (1995). "§III.4: Energy and time". Operational quantum physics. Springer. pp. 77 ff. ISBN 3-540-59358-6.
- For a review, see Paul Busch (2008). "Chapter 3: The Time–Energy Uncertainty Relation". In J.G. Muga, R. Sala Mayato and Í.L. Egusquiza, editors. Time in Quantum Mechanics (2nd ed.). Springer. pp. 73 ff. ISBN 3-540-73472-4.
- Fowler, R., Guggenheim, E.A. (1965). Statistical Thermodynamics. A Version of Statistical Mechanics for Students of Physics and Chemistry, reprinted with corrections, Cambridge University Press, London, page 224.
- Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London, page 220.
- Wilks, J. (1971). The Third Law of Thermodynamics, Chapter 6 in Thermodynamics, volume 1, ed. W. Jost, of H. Eyring, D. Henderson, W. Jost, Physical Chemistry. An Advanced Treatise, Academic Press, New York, page 477.
- Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0–88318–797–3, page 342.
- Jauch, J.M., Rohrlich, F. (1955/1980). The Theory of Photons and Electrons. The Relativistic Quantum Field Theory of Charged Particles with Spin One-half, second expanded edition, Springer-Verlag, New York, ISBN 0–387–07295–0, pages 287–288.
- Milonni, P.W. (1994). The Quantum Vacuum. An Introduction to Quantum Electrodynamics, Academic Press, Inc., Boston, ISBN 0–12–498080–5, page xv.
- Milonni, P.W. (1994). The Quantum Vacuum. An Introduction to Quantum Electrodynamics, Academic Press, Inc., Boston, ISBN 0–12–498080–5, page 239.
- Schwinger, J., DeRaad, L.L., Milton, K.A. (1978). Casimir effect in dielectrics, Annals of Physics, 115: 1–23.
- Milonni, P.W. (1994). The Quantum Vacuum. An Introduction to Quantum Electrodynamics, Academic Press, Inc., Boston, ISBN 0–12–498080–5, page 418.
- Jaffe, R.L. (2005). Casimir effect and the quantum vacuum, Phys. Rev. D 72: 021301(R), http://1–5.cua.mit.edu/8.422_s07/jaffe2005_casimir.pdf
Further reading
- Free pdf copy of The Structured Vacuum - thinking about nothing by Johann Rafelski and Berndt Muller (1985) ISBN 3-87144-889-3.
- M.E. Peskin and D.V. Schroeder, An introduction to Quantum Field Theory.
- H. Genz, Nothingness: The Science of Empty Space
- Engineering the Zero-Point Field and Polarizable Vacuum for Interstellar Flight
- E. W. Davis, V. L. Teofilo, B. Haisch, H. E. Puthoff, L. J. Nickisch, A. Rueda and D. C. Cole(2006)"Review of Experimental Concepts for Studying the Quantum Vacuum Field"
Natural Resource Management Ministerial Council
Department of Environment and Heritage, 2001
ISBN 0 642 254775 0
3. Desired Native Vegetation Outcomes
This section of the National Framework for the Management and Monitoring of Australia's Native Vegetation describes the native vegetation outcomes expected from the implementation of the management and monitoring mechanisms described in Section 4. Such outcomes will serve biodiversity conservation, land protection, greenhouse gas reduction and other objectives specified in the complementary national policy and strategy documents (see Appendix A). The outcomes will be consistent with and enhance the social and economic outcomes being sought within the framework of Ecologically Sustainable Development (ESD).
Australia's native vegetation cover is diverse, rich in species and complexity, and has a very high degree of endemism. It is a priceless element of our natural heritage. It plays a crucial role in sustaining ecosystem function and processes, and consequently the productive capacity of Australia's relatively old and infertile soils and scarce freshwater resources. Native vegetation buffers the impact of harsh and extremely variable climates, binds and nourishes soils, and filters streams and wetlands. Native birds, invertebrates and other animals depend upon the condition and extent of native vegetation communities.
The vision sketched here assumes that native vegetation has, and is seen to have, intrinsic values in addition to ecological values and utilitarian values. It envisages Australian landscapes in which native vegetation is conserved for its ecological values, celebrated for its intrinsic values and enhanced for sustainable production.
This vision also recognises the inextricable link between the conservation of biodiversity and sustainable agriculture. Conservation of vegetation is neither an alternative land use nor an opportunity cost - it is an investment in natural capital, which underwrites material wealth. Conservation of biodiversity means much more than just protecting wildlife and its habitat in nature reserves. Conservation of native species and ecosystems, and the processes they support — the flows and quality of rivers, wetlands and groundwater, and soil structure and landscapes — are all crucial to the sustainability of primary industries.
This vision does not assume a return to some pre-European Arcadia and/or the replacement of all the native vegetation that has been cleared or modified since European settlement. However, it implies that restoring some hydrological balance, enhancing habitat for wildlife, protecting freshwater resources and rehabilitating degraded lands requires land use systems which are responsive to Australian conditions.
The shift towards more sustainable land use systems is likely to include greater use of native Australian species than occurs in conventional agriculture today.
Farming systems may in the future have portions of the landscape occupied by native perennials, some forming the basis of grazing systems, and others generating a range of products including carbon sequestration, timber, fuelwood, craftwood and pulp, cut flowers, essential oils, herbs, solvents and pharmaceuticals. Community revegetation and regeneration activities could be underpinned and complemented by a thriving native vegetation industry and associated infrastructure for native vegetation management.
The sort of infrastructure required would include:
- regional facilities and services to support ecological inventory, mapping and monitoring activities;
- local and regional seedbanks and nurseries stocking the full range of locally indigenous flora, by provenance;
- equipment such as seed harvesters, e.g. for native grasses, direct seeding machines, mechanical planters, sprayers, pruners and weeders - all adapted to local/regional needs and conditions; and
- the knowledge base, training capacities and human capital required to apply and refine best practice techniques at the appropriate scale.
The 'wider public benefit' would be understood in reference to robust, regionally specific articulations of the 'duty of care' of land users not to degrade natural resources. 'Duty of care' would be widely accepted and understood as setting out the responsibilities which are inseparable from the privilege of managing land, regardless of its tenure. 'Duty of care' would be defined in regulation where appropriate, but would be more commonly used in industry codes of practice, industry-based environmental management systems, and voluntary incentives programs. Land uses generating insufficient returns to enable land users to fulfil their duty of care would by definition be unsustainable, and hence unsuitable, uses of land.
Markets would be informed and constrained by the understanding that the human economy is a subset of society, which in turn is a subset of, and utterly dependent upon, the biophysical environment. Market forces would work to use natural resources more efficiently, discriminating against products, production systems and processes which deplete or degrade natural resources unsustainably. Linkages between well-informed consumers and all stages of production cycles would be fostered and direct feedback encouraged. Environmental externalities (positive and negative) would be internalised in market prices wherever possible. National accounts would account for natural capital stocks, as well as flows, offering a more true reflection of the relative sustainability of apparent economic performance. The role and limitations of market forces in questions of long-term sustainability would be well understood, and the conditions under which intervention in markets is justified would be well accepted.
Comprehensive incentive regimes would complement markets in encouraging and delivering more sustainable approaches. Management actions seen to be in the public interest, for example through positive externalities, and which are clearly over and above what would be expected under duty of care, would be supported by a wide range of direct and indirect incentives and disincentives. Such incentives would be derived and delivered at a range of scales. For example, nationally through the taxation system and major targeted grants for national priorities; sub-nationally through revolving funds, industry codes of practice, accreditation systems and regulatory approaches; and regionally through regional grants, stewardship payments, planning, zoning and rating systems.
The incentives regime would complement public-sector funding, and would be designed to attract private-sector funding into nature conservation at property and landscape scales through:
- tax measures encouraging philanthropy;
- rewards at industry level for best practice and corporate citizenship; and
- tax and other incentives for the individual or firm to go above and beyond their duty of care in managing for long-term conservation in the public interest.
The general principles informing the design and delivery of incentives would include the principle that natural resource management and resource allocation decisions should be made at the lowest practicable level; that systems should connect people as directly as possible with the consequences of their actions; and that local ownership of problems and solutions is most likely to be genuine when revenue raising and resource allocation operates at the same level.
The first step towards delivery of Australia's native vegetation objectives is to improve our knowledge base, in both theoretical and practical terms, about how to conserve, manage, enhance or re-establish native vegetation for various combinations of objectives at various scales. Basic toolkits for native vegetation management are needed - whether to assist a community group to plan wildlife habitat, or to assist a landholder to work out a burning regime in a remnant patch of bush. This vision assumes that such toolkits will be developed and readily utilised.
Underpinning this Framework is a basic set of principles that should encourage actions to achieve sustainable native vegetation management. These include:
- recognition that all vegetation management should be based on the overall goal of Ecologically Sustainable Development which recognises environmental, economic and social values;
- recognition of the important role of native vegetation in the functioning of ecosystems in maintaining productivity capacity of agricultural lands;
- recognition that the biological diversity of vegetation should be maintained through appropriate land management practices. These include a suite of measures from environmental protection through to sustainable use and production using best practice management techniques;
- recognition that vegetation management requires the continuing partnership of government, land managers, industry and the wider community;
- recognition that where there are threats of serious or irreversible environmental damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation. In the application of the precautionary principle, public and private decisions should be guided by:
- careful evaluation to avoid, wherever practicable, serious or irreversible damage to the environment; and
- an assessment of the risk-weighted consequences of various options;
- recognition that protecting existing remnant vegetation is the most efficient way of conserving biodiversity.
Native vegetation is usually managed within the broader natural resource management context that takes account of economic and social objectives additional to environmental objectives. However, sustainable native vegetation management does not only serve environmental objectives. Outcomes from sustainable native vegetation management also contribute substantially to important economic and social objectives.
The native vegetation outcomes being sought in this Framework are:
- a reversal in the long-term decline in the extent and quality of Australia's native vegetation cover by:
- conserving native vegetation, and substantially reducing land clearing;
- conserving Australia's biodiversity; and
- restoring, by means of substantially increased revegetation, the environmental values and productive capacity of Australia's degraded land and water;
- conservation and, where appropriate, restoration of native vegetation to maintain and enhance biodiversity, protect water quality and conserve soil resources, including on private land managed for agriculture, forestry and urban development;
- retention and enhancement of biodiversity and native vegetation at both regional and national levels; and
- an improvement in the condition of existing native vegetation.
Specific vegetation outcomes being sought, within the context of integrated natural resource management, are described below.
Biodiversity outcomes sought:
- protection of biological diversity and maintenance of essential ecological processes and life-support systems;
- maintenance of viable examples of native vegetation communities, species and dependent fauna throughout their natural ranges;
- maintenance of the genetic diversity of native vegetation species;
- enabling Australia's native vegetation species and communities threatened with extinction to survive and thrive in their natural habitats, and to retain their genetic diversity and potential for evolutionary development, and preventing additional species and communities from becoming threatened;
- return of threatened native vegetation species and communities to a secure status in the wild;
- reduction in the numbers of listed threatened native vegetation species and downgrading of the conservation threat category of listed threatened species;
- limitation of broad-scale clearance of native vegetation to those instances in which the proponent can clearly demonstrate that regional biodiversity objectives are not compromised;
- no clearing of endangered or vulnerable vegetation communities, critical habitat for threatened species, or other threatened species or communities listed under State or Commonwealth legislation, or identified through the NRMMC or other government processes;
- no activities that adversely affect the conservation status of vegetation communities or the species dependent on them.
Soil and water resource outcomes sought:
- maintenance and enhancement of the ecological integrity and physical stability of ground and surface water systems, including associated riparian zones and wetlands;
- revegetation of the upslope recharge areas in order to reduce the volume of groundwater movement to lowland areas;
- revegetation, where appropriate, of the highest priority degraded riparian areas;
- protection and rehabilitation of lowland wetlands and saltmarshes;
- protection of vegetation in erosion prone areas;
- protection of native vegetation on areas of potential acid sulphate soils.
Hydrology outcomes sought:
- protection of vegetation in areas at risk from dryland salinity;
- revegetation of recharge areas to slow or reverse rising groundwater tables and ameliorate dryland salinity;
- maintenance of native vegetation in water catchments to protect water quality and water yield.
Land productivity outcomes sought:
- protection and management of native vegetation in the landscape such that biomass production is sustained, providing the capacity for continued productivity;
- reduction and minimisation of the detrimental economic, environmental and social impact of weeds on the sustainability of Australia's productive capacity and natural ecosystems;
- prevention of the development of new weed and pest problems.
Sustainable land use outcomes sought:
- protection, management and establishment of native vegetation to provide for the social and economic value derived from the ecologically sustainable use and harvesting of native vegetation products such as wood, oils, flowers, seed and honey.
Natural and cultural heritage outcomes sought:
- protection and management of native vegetation to retain the natural and cultural significance of a place or landscape.
Indigenous peoples outcomes sought:
- maintenance of biological diversity on lands and waters over which Aboriginal and Torres Strait Islander people have title or in which they have an interest, to ensure the wellbeing, identity, cultural heritage and economy of Aboriginal and Torres Strait Islander communities.
Climate change outcomes sought:
- conservation and enhancement, as appropriate, of sinks and reservoirs for all greenhouse gases not controlled by the Montreal Protocol, including biomass and forests.
| 3.145649 |
Restoration Potential: Strong fidelity to nest and roost sites inhibits colonization of formerly occupied habitat (Meyer and Collopy 1996). Limited attempts to reintroduce this species to presently unoccupied former range have failed (Meyer 1990). Given the species' biology (e.g., strongly social, delayed breeding, mobile), reintroduction could be difficult, at best (Meyer 1995).
Preserve Selection and Design Considerations: Suitable nesting habitat requires appropriate nest and roost sites within a landscape that provides sufficient prey for successful reproduction. Habitat mosaics with various plant communities, such as forests, prairies, and wetlands of various sizes, are essential. Minimum area requirements are difficult to define; where breeding habitat quality is good and prey is abundant and concentrated, 30 square kilometers may be sufficient, but where habitat quality is less suitable and prey is more diffuse, 100-300 square kilometers may be necessary (Meyer and Collopy 1995).
Management Requirements: Tall trees that emerge from the surrounding canopy are essential for nesting. Such trees should be managed for in landscapes dominated by short-rotation, even-aged pine plantations. Nests built in Australian pine (Casuarina equisetifolia), an exotic species, fail at a significantly higher rate than those in native pine (Pinus spp.) or cypress (Taxodium spp.). Where kites nest in large numbers, it may be prudent to reduce the availability of Australian pine as nest sites (Meyer 1990).
Management Programs: Collaborative efforts with Brazilian conservationists are ongoing to protect native habitats at the critical wintering and breeding sites, which are all privately owned agricultural lands (K. Meyer, pers. comm.).
Monitoring Programs: This species is monitored on North American Breeding Bird Survey (BBS) routes (Sauer et al. 1997) and irregularly by state wildlife agencies (Millsap and Runde 1988). In Florida, systematic state-wide roost observations would form a good basis for long-term monitoring (K. Meyer, pers. comm.).
Management Research Needs: An accurate means of assessing population changes needs to be developed. Also, nesting and foraging habitat requirements need to be defined, winter habitat requirements need to be determined, prey densities essential for reproductive success need to be examined, and a study of marked individuals is needed to determine age at first breeding, sex ratio, survival, and social behavior (Meyer 1990, Meyer and Collopy 1995).
Biological Research Needs: Better information is needed on demography, migration routes, winter biology, and habitat needs. The validity of the subspecies designation needs to be examined since this may influence listing status (Meyer 1995).
| 3.524854 |
The East African land snail, or giant African land snail, scientific name Achatina fulica, is a species of large, air-breathing land snail, a terrestrial pulmonate gastropod mollusk in the family Achatinidae.
Because it develops rapidly and produces large numbers of offspring, this mollusc is now listed as one of the top 100 invasive species in the world. It is a voracious feeder, and is recognized as a serious pest organism affecting agriculture, natural ecosystems, commerce, and also human health. Because of these threats, this snail species has been given top national quarantine significance in the United States. In the past, quarantine officials have been able to successfully intercept and eradicate incipient invasions on the mainland USA.
In the wild, this species often harbors the parasitic nematode Angiostrongylus cantonensis, which can cause a very serious meningitis in humans. Human cases of this meningitis usually result from a person having eaten the raw or undercooked snail, but even handling live wild snails of this species can infect a person with the nematode and cause a life-threatening infection.
- Achatina fulica hamillei Petit, 1859
- Achatina fulica rodatzi Dunker, 1852
- Achatina fulica sinistrosa Grateloup, 1840
- Achatina fulica umbilicata Nevill, 1879
This species has been found in China since 1931, and its initial point of distribution in China was Xiamen. The snail has also become established in the Pratas Islands of Taiwan, throughout India, the Pacific, Indian Ocean islands, and the West Indies. In the United States, it has become established in Hawaii, and eradication is underway in Florida.
The species has recently been observed in Bhutan (Gyelposhing, Mongar), where it is an invasive species. It has begun to attack agricultural fields and flower gardens. It is believed there that dogs which have consumed the snail died as a result.
A small population has gained a foothold in Bangalore, within the Indian Institute of Science campus. The snails were imported as part of crystallographic and NMR studies on Conotoxins. In an act of misguided compassion, the snails were released post-experimentation, and have colonised vast swathes of forested campus grounds. Their escape into the city-proper would be catastrophic for the local ecosystem.
In Paraguay, the first sighting of the African snail was reported in Concepción in 2011. Later cases were reported in different parts of the country in 2012. It is believed that the snail may have been introduced into the country either as fishing bait in Ayolas, or as pets in Ciudad del Este. Other sightings have been reported in urban areas around the capital Asuncion.
The adult snails have a height of around 7 centimetres (2.8 in), and their length can reach 20 centimetres (7.9 in) or more.
The shell has a conical shape, being about twice as high as it is broad. Either right-handed (dextral) or left-handed (sinistral) coiling can be observed in the shell, although the dextral cone is the more common. Shell colouration is highly variable, and dependent on diet. Typically, brown is the predominant colour and the shell is banded.
The East African land snail is native to East Africa, and can be traced back to Kenya and Tanzania. It is a highly invasive species, and colonies can be formed from a single gravid individual. In many places, release into the wild is illegal. Nonetheless, the species has established itself in some temperate climates and its habitat now includes most regions of the humid tropics, including many Pacific islands, southern and eastern Asia, and the Caribbean. The giant snail can now be found in agricultural areas, coastland, natural forest, planted forests, riparian zones, scrub/shrublands, urban areas, and wetlands.
Feeding habits
The giant East African snail is a macrophytophagous herbivore; it eats a wide range of plant material, fruit, and vegetables. It will sometimes eat sand, very small stones, bones from carcasses and even concrete as calcium sources for its shell. In rare instances the snails will consume each other.
In captivity, this species can be fed on grain products such as bread, digestive biscuits, and chicken feed. Fruits and vegetables must be washed diligently as the snail is very sensitive to any lingering pesticides. In captivity, snails need cuttlebone or other calcium supplements to aid in the growth and development of their shells. They also enjoy the yeast in beer, which provides protein for growth stimulus.
Life cycle
The Giant East African Snail is a simultaneous hermaphrodite; each individual has both testes and ovaries and is capable of producing both sperm and ova. Instances of self fertilization are rare, occurring only in small populations. Although both snails in a mating pair can simultaneously transfer gametes to each other (bilateral mating), this is dependent on the size difference between the partners. Snails of similar size will reproduce in this way. Two snails of differing sizes will mate unilaterally (one way), with the larger individual acting as a female. This is due to the comparative resource investment associated with the different genders.
Like other land snails, these have intriguing mating behaviour, including rubbing their heads and front parts against each other. Courtship can last up to half an hour, and the actual transfer of gametes can last for two hours. Transferred sperm can be stored within the body for up to two years. The number of eggs per clutch averages around 200. A snail may lay 5-6 clutches per year with a hatching viability of about 90%.
Adult size is reached in about six months, after which growth slows but never entirely ceases. Life expectancy is commonly five or six years in captivity, but the snails may live for up to ten years. They are active at night and spend the day buried underground.
The East African Land Snail is capable of aestivating for up to three years in times of extreme drought, sealing itself into its shell by secretion of a calcareous compound that dries on contact with the air. This seal is impermeable; the snail will not lose any water during this period.
Parasites of Achatina fulica include:
- Aelurostrongylus abstrusus
- Angiostrongylus cantonensis - causes eosinophilic meningoencephalitis
- Angiostrongylus costaricensis - causes abdominal angiostrongyliasis
- Schistosoma mansoni - causes schistosomiasis, detected in faeces
- Trichuris spp. - detected in faeces
- Hymenolepis spp. - detected in faeces
- Strongyloides spp. - detected in faeces and in mucous secretion
Pest control
In many places the snail is seen as a pest. Suggested preventative measures include strict quarantine to prevent introduction and further spread. Many methods, including hand collecting and use of molluscicides and flame-throwers, have been tried to eradicate the giant snail. Generally, none of them has been effective except where implemented at the first sign of infestation. In Bhutan, the Plant Protection Center used salt to contain the snails, while to reduce snails' food availability, the surrounding weeds were killed using glyphosate.
In some regions, an effort has been made to promote use of the Giant East African Snail as a food resource, the collecting of the snails for food being seen as a method of controlling them. However, promoting a pest in this way is a controversial measure, as it may encourage the further deliberate spread of the snails.
One particularly catastrophic attempt to biologically control this species occurred on South Pacific islands. Colonies of A. fulica were introduced as a food reserve for the American military during the Second World War, and they escaped. A carnivorous species (the Florida rosy wolfsnail, Euglandina rosea) was later introduced by the American government, but it instead preyed heavily on the native Partula snails, causing the loss of most Partula species within a decade.
Human use
Achatina fulica are used by some practitioners of Candomblé for religious purposes in Brazil as an offering to the deity Oxalá. The snails substitute for a closely related species, the African Giant Snail (Archachatina marginata) normally offered in Nigeria. The two species share a common name (Ìgbín, also known as Ibi or Boi-de-Oxalá in Brazil), and are similar enough in appearance to satisfy religious authorities. They are also edible if cooked properly.
| 3.610754 |
The Union War
By Gary W. Gallagher
Harvard University Press. $27.95
New York Times Book Review, May 1, 2011
Among the enduring mysteries of the American Civil War is why millions of Northerners were willing to fight to preserve the nation's unity. It is not difficult to understand why the Southern states seceded in 1860 and 1861. As the Confederacy's founders explained ad infinitum, they feared that Abraham Lincoln's election as president placed the future of slavery in jeopardy. But why did so few Northerners echo the refrain of Horace Greeley, the editor of The New York Tribune: "Erring sisters, go in peace"?
The latest effort to explain this deep commitment to the nation's survival comes from Gary W. Gallagher, the author of several highly regarded works on Civil War military history. In "The Union War," Gallagher offers not so much a history of wartime patriotism as a series of meditations on the meaning of the Union to Northerners, the role of slavery in the conflict and how historians have interpreted (and in his view misinterpreted) these matters.
The Civil War, Gallagher announces at the outset, was "a war for Union that also killed slavery." Emancipation was an outcome (an "astounding" outcome, Lincoln remarked in his second Inaugural Address) but, Gallagher insists, it always "took a back seat" to the paramount goal of saving the Union. Most Northerners, he says, remained indifferent to the plight of the slaves. They embraced emancipation only when they concluded it had become necessary to win the war. They fought because they regarded the United States as a unique experiment in democracy that guaranteed political liberty and economic opportunity in a world overrun by tyranny. Saving the Union, in the words of Secretary of State William H. Seward, meant "the saving of popular government for the world."
At a time when only half the population bothers to vote and many Americans hold their elected representatives in contempt, Gallagher offers a salutary reminder of the power of democratic ideals not simply to Northerners in the era of the Civil War, but also to people in other nations, who celebrated the Union victory as a harbinger of greater rights for themselves. Imaginatively invoking sources neglected by other scholars — wartime songs, patriotic images on mailing envelopes and in illustrated publications, and regimental histories written during and immediately after the conflict — Gallagher gives a dramatic portrait of the power of wartime nationalism.
His emphasis on the preservation of democratic government and the opportunities of free labor as central to the patriotic outlook is hardly new — one need only read Lincoln's wartime speeches to find eloquent expression of these themes. But instead of celebrating the greatness of American democracy, Gallagher claims, too many historians dwell on its limitations, notably the exclusion from participation of nonwhites and women. Moreover, perhaps because of recent abuses of American power in the name of freedom, scholars seem uncomfortable with robust expressions of patriotic sentiment, especially when wedded to military might. According to Gallagher, they denigrate nationalism and suggest that the war had no real justification other than the abolition of slavery. (Gallagher ignores a different interpretation of the Union war effort, emanating from neo-Confederates and the libertarian right, which portrays Lincoln as a tyrant who presided over the destruction of American freedom through creation of the leviathan national state, not to mention the dreaded income tax.)
Gallagher devotes many pages — too many in a book of modest length — to critiques of recent Civil War scholars, whom he accuses of exaggerating the importance of slavery in the conflict and the contribution of black soldiers to Union victory. Often, his complaint seems to be that another historian did not write the book he would have written.
Thus, Gallagher criticizes Melinda Lawson, the author of "Patriot Fires," one of the most influential recent studies of wartime nationalism, for slighting the experiences of the soldiers. But Lawson was examining nation-building on the Northern home front. Her investigation of subjects as diverse as the marketing of war bonds, the dissemination of pro-Union propaganda and the organization of Sanitary Fairs, where goods were sold to raise money for soldiers' aid, illuminates how the nation state for the first time reached into the homes and daily lives of ordinary Americans.
Gallagher also criticizes recent studies of soldiers' letters and diaries, which find that an antislavery purpose emerged early in the war. These works, he argues, remain highly "impressionistic," allowing the historian "to marshal support for virtually any argument." Whereupon Gallagher embarks on his own equally impressionistic survey of these letters, finding that they emphasize devotion to the Union.
Ultimately, Gallagher's sharp dichotomy between the goals of Union and emancipation seems excessively schematic. It begs the question of what kind of Union the war was being fought to preserve. The evolution of Lincoln's own outlook illustrates the problem. On the one hand, as Gallagher notes, Lincoln always insisted that he devised his policies regarding slavery in order to win the war and preserve national unity. Yet years before the Civil War, Lincoln had argued that slavery fatally undermined the nation's ability to exemplify the superiority of free institutions. The Union to be saved, he said, must be "worthy of the saving." During the secession crisis, Lincoln could have preserved the Union by yielding to Southern demands. He adamantly refused to compromise on the crucial political issue — whether slavery should be allowed to expand into Western territories.
Gallagher maintains that only failure on the battlefield, notably Gen. George B. McClellan's inability to capture Richmond, the Confederate capital, in the spring of 1862, forced the administration to act against slavery. Yet the previous fall, before significant military encounters had taken place, Lincoln had already announced a plan for gradual emancipation. This hardly suggests that military necessity alone placed the slavery question on the national agenda. Early in the conflict, many Northerners, Lincoln included, realized that there was little point in fighting to restore a status quo that had produced war in the first place.
Many scholars have argued that the war brought into being a new conception of American nationhood. Gallagher argues, by contrast, that it solidified pre-existing patriotic values. Continuity, not change, marked Northern attitudes. Gallagher acknowledges that as the war progressed, "a struggle for a different kind of Union emerged." Yet his theme of continuity seems inadequate to encompass the vast changes Americans experienced during the Civil War. Surely, he is correct that racism survived the war. Yet he fails to account for the surge of egalitarian sentiment that inspired the rewriting of the laws and Constitution to create, for the first time, a national citizenship enjoying equal rights not limited by race.
Before the war, slavery powerfully affected the concept of self-government. Large numbers of Americans identified democratic citizenship as a privilege of whites alone — a position embraced by the Supreme Court in the Dred Scott decision of 1857. Which is why the transformation wrought by the Civil War was so remarkable. As George William Curtis, the editor of Harper's Weekly, observed in 1865, the war transformed a government "for white men" into one "for mankind." That was something worth fighting for.
| 3.447453 |
Monday, April 2, 2012 - 15:31 in Earth & Climate
Corals may be better placed to cope with the gradual acidification of the world's oceans than previously thought – giving rise to hopes that coral reefs might escape climatic devastation.
| 3.034424 |
A great Czech composer, Bedrich Smetana had a significant influence on the creation of artistic and musical nationalism in his native country. Smetana was one of the founders of the Czech national opera. In his operas and symphonic poems Smetana used the national legends, history and ideas. Smetana’s style was very original and often very dramatic. Smetana became deaf, but continued to compose until the end of his life.
| 3.636245 |
Written for Political Science 101 last term at the University of Waterloo. Node your homework they said...
Democracy is more than an election every few years, a familiar process removed from the daily grind until it comes time to tick a small box on a larger piece of paper. It is instead more about people than protocol, more magical and less mechanical. We should see it in terms of an ideal towards which our institutions and practices strive, rather than the view that these infrastructures come about as a result of this intangible juggernaut of democracy. Democracy is not a construct of man; it is instead a set of ideals and values we seek.
The typical citizen of a liberal democratic society does not have much to say about democracy except when confronted by “man on the street” interviews or whenever your particular national holiday rolls around. This apathy is not a result of genuine malice, but more a testament to the fact that our particular implementation of the idea of democracy works so well it is almost transparent. No mobs run loose through his streets at night, no men dressed in black come to “talk” to him in the early hours of the morning. His roads, sewer, electricity and television hum day and night without losing a beat. In a more direct sense, his government functions properly and does not become a burden to him. The pleasant life he leads is a direct result of a democratic society functioning properly, and it is his very right as a citizen of this society to ignore it on a daily basis.
This individualistic view of democracy cannot hold in all situations. It works for general day-to-day circumstances; however, even the most right-wing of individualist thinkers holds a belief that under certain circumstances, citizens have a duty to perform certain tasks for the state. These duties may be mundane, such as paying taxes or voting, or extreme, such as defending one’s nation. All have a common thread: that citizens, as members of a state, have certain natural duties. Democracy cannot exist without its members participating in it; this is a fundamental requirement. These natural duties may vary from time to time but the constant is that they always exist in some capacity or another. Democracy is based upon many citizens performing small duties, instead of a small group of citizens controlling many responsibilities.
The concept of working together is one that democracy builds itself upon. Democracy is the rule of the people, not a person. It fulfills the innate human need to guide one’s destiny, through even such a small part as filling out a ballot. The fact that democracy is based on such emotionally appealing ideas should give you some conception as to the reasons for its success. Hobbes may have argued that we need someone to control us, but in the end, what we all really want is to control ourselves. The fact that democracy is able to take a selfish desire, such as the want to control the state, and turn it into a government which acts for the good of all is further evidence as to the robustness of the democratic ideal.
Democracy is an enduring dream, contrary to the doomed wunderkinds of communism and other governments based on theory not practice. While superior in their vision of a utopia on paper, they come against one fundamental flaw, namely people tend to run toward the jerk side of the personality scale. Communism without greed would indeed be utopia but the real world runs up against tangible problems with this. You cannot remove greed from a man by political posturing no more than you can paint stripes on a horse and call it a zebra. It may pass on first inspection, but when it comes down to the most basic of things, you tend to run into a few problems. The reason democracy works in the physical realm is it engages in political judo, in that it takes men’s selfishness and desires, parries them into another direction unpredicted by the man, all with the full momentum of his swing still behind him. It has survived from the ancient Greeks to this present day for this very reason.
The initial view of democracy as we know it was conceived by the Greeks; however, the practical application of democracy we have today is drastically different from their view. Initially it was the concept that every citizen (citizens being of course aristocratic males) would have a say in the management of the state. Today however we have a different conception of this democratic ideal. Pure practicality dictates that we cannot have the entire community attempt to come to a conclusion on issues addressed by the state. This was practical in the Greek age where a manageable number would discuss the issues of the day, but this is not feasible in this day and age where our world population is measured in billions. The fundamental thing to remember however is that the ideal of democracy survives between this gulf of years and culture.
This romanticism of democracy is the root of its power. The society we live in values the ideals held by the democratic system, and as such we accept it as a ruling influence in our lives. An example of this is the Prime Minister being a “public servant”. Only in the strictest most idealistic sense is he a genuine servant of the people; however we call him such without a hint of irony as we value the democratic ideal so highly. All politicians are crooks we tell each other, yet we keep on voting. Why, when we so enthusiastically hate the dictators and Marcos of the world who embezzle funds? The answer lies in that we see democracy as striving toward an ideal. No man is perfect, but they’re working on it. This contradiction between reality and the psyche is at the heart of any power, and in Western countries it is what tells us that democracy is the cure for all that ills a state.
Contradiction is fundamental to democracy. Democracy brings us together we are told, it is the great equalizer. All men are born equal, none shall be held in higher esteem than another. One citizen shall have one vote. All say that the members of a democratic state are inherently equal. On the other hand we have Canada, a liberal democratic society, in which multiculturalism is not only encouraged but has an official policy to address it. Differences are encouraged, and any attempt to insinuate that we should all become equal is dismissed as right-wing xenophobia. Where then is the balance? Democracy gives us equality, but it also gives us the right to be different. It is the fine line between the two, a tightrope act of titanic proportions. The balance must not swing too far one way or the other, lest the acrobat be unset and come crashing down. The democratic ideal allows us to weigh multiculturalism and its variants against solidarity and never find a clear winner. It allows us to value them equally, as this is the ultimate measure of equality.
Equality can lead to problems however, if democracy becomes the rule of the “most equal”. A tyranny of the majority is completely democratic in the most literal sense of the word in that the majority chooses for it to be so, however it is unpalatable to many in our society. This is due to the fact that we see democracy in more than just the literal sense, we see it as a shining ideal. This ideal would not allow trampling of minority rights, and as discussed before, the ideal of democracy is the fine balance between differences and solidarity. As such we cannot allow this tyranny, permitted as it is in a literal interpretation of a democratic society. The democratic ideal implies compassion and empathy, more than just cold cruel statistics of fifty percent plus one.
The democratic ideal hinges on this idea of not allowing technicalities and numbers to become the ruling force instead of a vision of participation by all. Common occurrences such as majority governments being elected by a minority are seen as undemocratic, even though in the strictest sense they follow literal democracy. If your system is built upon the philosophy that a leader is elected indirectly through grouping voters into regions, this is particularly apparent. The recent Florida fiasco in the American elections is a particularly apt example of this. Counting non-participating voters and the popular vote, a leader was elected who received far less support from his citizens than a majority. While seen as undemocratic and a travesty, at the same time it is completely by the book.
Unfortunately, there is no book of democracy. We instead view democracy as an ideal not a construct. It is not a point by point leaflet we can airdrop over dictatorships, but instead an attitude that results from culture and history. It is a result of directing people’s desires toward solidarity, and at the same time respecting differences. While at time contradictory and awkward, it endures. It endures due to the fact that democracy is a dream not a document, and dreams are not easily lost.
| 3.017852 |
Speaking to the National Academy of Sciences on Education and 21st Century American Agriculture, U.S. Agriculture Secretary Mike Johanns addressed the future of farming in the U.S.
One of the agriculture community's greatest challenges, he says, is to inspire young people to pursue agricultural careers. First, Johanns says, we must increase awareness of agricultural opportunities, research, and technological advances.
"Fewer and fewer young people have a sense of where food comes from, and most kids have no idea how sophisticated this industry has become and how much lies before us in the future," Johanns says.
In the 2002 Census of Agriculture, the average age of U.S. principal farm operators was 55.3. With the average farm operator just below retirement age and the nature of agriculture shifting towards the sciences, Johanns points out the need for a new generation of science-oriented agricultural workers.
"Many of the young people who will replace these retirees are already here in our educational system, and many are not studying science and they are certainly not studying the agricultural sciences," Johanns says.
We also need to promote agricultural literacy programs in urban areas, Johanns says. "Teachers, parents, and students need to understand that 21st century agriculture is a global enterprise based in science, which needs constant growth in discovery and in application."
| 3.268744 |
Synopsis of Philippine Mammals
The mammalian fauna of the Philippine Islands is remarkably diverse and species-rich, comprising what may be the greatest concentration of endemic mammals of any country on earth. Since 1988, the Field Museum of Natural History has been the primary base of operations for the Philippine Mammal Project, a multi-institutional, international collaborative effort to document the number of species that are present, the distributions of those species, their relationships within the tree of life, their ecology, and their conservation status. This website, the Synopsis of Philippine Mammals, is a summary of the information that is currently available.
The website was first implemented in 2002, using information from the 1998 publication by Heaney et al., entitled “A Synopsis of the Mammalian Fauna of the Philippine Islands”. The Synopsis website was extensively revised and expanded in 2010, in order to incorporate extensive new data, include more photographs, and provide detailed maps of the known distribution of each species. To keep the website up to date, as additional species are discovered and formally described, or species of marine mammals are documented in Philippine waters for the first time, we add them to the Supplement, and will eventually merge these into the main site once site maintenance has been completed.
As documented on this website, the terrestrial fauna is now known to include at least 214 native species (plus seven introduced species), in an area of only a bit over 300,000 square kilometers, one of the highest densities of native mammals in the world. Moreover, most of the species are found nowhere else: of the 214 native terrestrial species, 125 (58%) are endemic, and among the 111 non-flying native mammals, 101 (91%) are unique to the Philippines. They constitute an astounding example of adaptive radiation by mammals in an oceanic archipelago, and may justifiably serve as a source of great pride to the Philippine nation.
This website has been developed as a collaborative project with the Protected Areas and Wildlife Bureau of the Philippine Department of Environment and Natural Resources. Primary funding for its development has come from the Negaunee Foundation.
Explore our Philippine mammal project further:
Videos from the Abbott Hall of Conservation-Restoring Earth:
Mammal Discoveries in the Philippines
Why Mossy Forests in the Philippines are important
Island Evolution: Why islands have so many endemic species
Science at FMNH: Mammal Conservation in Island Ecosystems
The Field Revealed: Cloud Rat
(Above photo by LR Heaney. Musseromys gulantang from Quezon Province, Luzon Island, Philippines.)
| 3.400568 |
Battles - The Siege of Kut-al-Amara, 1916
Following the signal (and, to the British at least, unexpected) failure of the Anglo-Indian attack upon Ctesiphon in November 1915 Sir Charles Townshend led his infantry force, the 6th (Poona) Division, on a wearisome retreat back to Kut-al-Amara, arriving in early December.
Aware too that his force was exhausted and unable to retreat further Townshend resolved to stay and hold Kut, a town of key importance to the British presence in the region. In this he was supported by regional Commander-in-Chief Sir John Nixon. The War Office in London however favoured a retreat still further south; however by the time this news reached Townshend he was already under siege.
Consequently the defence of Kut - sited in a loop of the River Tigris - was set in train ahead of the arrival of the besieging Turk force of 10,500 men on 7 December. However, Kut's geographical formation meant that Townshend and his men were effectively bottled up.
Nevertheless the division's cavalry were despatched back to Basra the day before the arrival of the Turkish force (6 December 1915), since they were likely to prove of little use and yet a drain upon scarce resources during siege operations.
Leading the Turks were Nur-Ud-Din and the German commander Baron von der Goltz. Their instructions were straightforward if steep: to force the British entirely from Mesopotamia.
Consequently Nur-Ud-Din and von der Goltz attempted to pierce Kut's defences on three separate occasions in December; all however failed. Thus the Turks set about blockading the town while despatching forces to prevent British relief operations from succeeding in reaching Kut.
In Britain, as in India, the news of Townshend's setback had stunned the government which resolved to immediately send additional forces to the region, diverted from the Western Front. Consideration was given to regard both Palestine and Mesopotamia as a single front.
Townshend was led to expect rapid relief. He himself calculated that there were enough supplies to maintain the garrison for a month (subsequently revised to two months and then to almost five), although this assumed full daily rations.
Informed that a relief operation might take two months to assemble Townshend proposed instead breaking out and retiring further south: Nixon however insisted that he remain at Kut and therefore tie up as many Turkish forces as possible.
In due course the first British expedition to raise the blockade was set underway from Basra in January 1916, led by Sir Fenton Aylmer. Their efforts were repeatedly repulsed however with heavy loss, at Sheikh Sa'ad, the Wadi and Hanna in January 1916 and again two months later in March at Dujaila.
April brought a further relief operation, this time led by the sceptical Sir George Gorringe. Despite meeting von der Goltz and his Turkish Sixth Army, piercing their line some 30km south of Kut, the expedition ran out of steam and was abandoned on 22 April.
With no further hope of relief - a final attempt by the paddle steamer Julnar to reach the town with supplies having failed - Townshend requested and received an armistice pending surrender talks on 26 April.
The Turks agreed to send 10 days of food into the garrison while the six-day armistice was in effect. While the talks were in progress the British took the opportunity of destroying anything of value in the town, aware of its imminent surrender.
An additional 23,000 British casualties were suffered during the relief efforts; the Turks lost approximately 10,000 men.
Although Khalil Pasha, Baghdad's military governor, proved sympathetic to Townshend's offer of £1 million plus a guarantee that none of his men would be used again in fighting against the Ottoman Empire - effectively buying parole - he was instructed by Minister of War Enver Pasha to require Townshend's unconditional surrender.
This was duly delivered on 29 April 1916, the British having run out of food supplies and wracked with disease of epidemic proportions (and with entirely inadequate medical provisioning to meet it).
It was the greatest humiliation to have befallen the British army in its history. For the Turks - and for Germany - it proved a significant morale booster, and undoubtedly weakened British influence in the Middle East.
Approximately 8,000 Anglo-Indian troops were taken prisoner (many weak through sickness), as was Townshend himself. However whereas he was treated as something of an honoured guest (and ultimately was released to assist with the Ottoman armistice negotiations in October 1918), his men were treated with cruelty and routine brutality, with a significant percentage dying while in captivity.
Baron von der Goltz meanwhile did not live to witness the conclusion of siege operations; he died ten days earlier of Typhus, although rumours persisted (unproven) that he was actually poisoned by a group of Young Turk officers.
Battles and Engagements of the Relief Operation
Battle of Sheikh Sa'ad - opened 6 January 1916
Battle of the Wadi - opened 13 January 1916
Battle of Hanna - opened 21 January 1916
Battle of Dujaila - opened 8 March 1916
First Battle of Kut - opened 5 April 1916
Photographs courtesy of Photos of the Great War website
| 3.325404 |
I. Freedom through Faith and Knowledge
The Black church and other faith assemblies have been at the core of the freedom movement since the mid-1800s. Church provided opportunities to gather, and congregations throughout Orange County, black and white, have inspired and perpetuated the freedom movement. Schools for African-American children, often connected with churches, also were part of the freedom movement, beginning in the late 1800s, in the midst of a segregated society. In 1968, Orange County schools became fully integrated.
| 3.112262 |
The characteristic features of the climate of Malaysia are uniform temperature, high humidity and copious rainfall, and they arise from the maritime exposure of the country. Winds are generally light. Situated at the equatorial doldrum area, it is extremely rare to have a full day with completely clear sky even in periods of severe drought. On the other hand, it is also rare to have a stretch of a few days with completely no sunshine except during the northeast monsoon seasons.
Wind flow in Malaysia
Though the wind over the country is generally light and variable, there are, however, some uniform periodic changes in the wind flow patterns. Based on these changes, four seasons can be distinguished, namely, the southwest monsoon, the northeast monsoon and two shorter inter-monsoon seasons.
The southwest monsoon is usually established in the latter half of May or early June and ends in September. The prevailing wind flow is generally south-westerly and light, below 15 knots.
The northeast monsoon usually commences in early November and ends in March. During this season, steady easterly or north-easterly winds of 10 to 20 knots prevail. The more severely affected areas are the east coast states of Peninsular Malaysia, where the wind may reach 30 knots or more during periods of intense surges of cold air from the north (cold surges).
The winds during the two inter-monsoon seasons are generally light and variable. During these seasons, the equatorial trough lies over Malaysia.
It is worth mentioning that during the months of April to November, when typhoons frequently develop over the west Pacific and move westwards across the Philippines, south-westerly winds over the northwest coast of Sabah and Sarawak region may strengthen, reaching 20 knots or more.
As Malaysia is mainly a maritime country, the effect of land and sea breezes on the general wind flow pattern is very marked, especially over days with clear skies. On bright sunny afternoons, sea breezes of 10 to 15 knots very often develop and reach up to several tens of kilometres inland. On clear nights, the reverse process takes place and land breezes of weaker strength can also develop over the coastal areas.
The seasonal wind flow patterns coupled with the local topographic features determine the rainfall distribution patterns over the country. During the northeast monsoon season, the exposed areas like the east coast of Peninsular Malaysia, Western Sarawak and the northeast coast of Sabah experience heavy rain spells. On the other hand, inland areas or areas which are sheltered by mountain ranges are relatively free from its influence. It is best to describe the rainfall distribution of the country according to seasons.
Seasonal Rainfall Variation in Peninsular Malaysia
The seasonal variation of rainfall in Peninsular Malaysia is of three main types:
(a) Over the east coast districts, November, December and January are the months with maximum rainfall, while June and July are the driest months in most districts.
(b) Over the rest of the Peninsula, with the exception of the southwest coastal area, the monthly rainfall pattern shows two periods of maximum rainfall separated by two periods of minimum rainfall. The primary maximum generally occurs in October - November while the secondary maximum generally occurs in April - May. Over the north-western region, the primary minimum occurs in January - February with the secondary minimum in June - July, while elsewhere the primary minimum occurs in June - July with the secondary minimum in February.
(c) The rainfall pattern over the southwest coastal area is much affected by early morning "Sumatras" from May to August, with the result that the double maxima and minima pattern is no longer discernible. October and November are the months with maximum rainfall and February the month with minimum rainfall. The March - April - May maximum and the June - July minimum are absent or insignificant.
Seasonal Rainfall Variation in Sabah and Sarawak
The seasonal variation of rainfall in Sabah and Sarawak can be divided into five main types:
(a) The coastal areas of Sarawak and northeast Sabah experience a rainfall regime of one maximum and one minimum. While the maximum occurs during January in both areas, the occurrence of the minimum differs. In the coastal areas of Sarawak, the minimum occurs in June or July, while in the northeast coastal areas of Sabah, it occurs in April. Under this regime, much of the rainfall is received during the northeast monsoon months of December to March. In fact, it accounts for more than half of the annual rainfall received on the western part of Sarawak.
(b) Inland areas of Sarawak generally experience quite evenly distributed annual rainfall. Nevertheless, slightly less rainfall is received during the period June to August, which corresponds to the occurrence of prevailing south-westerly winds. It must be pointed out that the highest annual rainfall area in Malaysia may well be found in the hill slopes of inland Sarawak areas. Long Akah, by virtue of its location, receives a mean annual rainfall of more than 5000 mm.
(c) The northwest coast of Sabah experiences a rainfall regime in which two maxima and two minima can be distinctly identified. The primary maximum occurs in October and the secondary one in June. The primary minimum occurs in February and the secondary one in August. While the difference in the rainfall amounts received during the two months corresponding to the two maxima is small, the amount received during the month of the primary minimum is substantially less than that received during the month of the secondary minimum. In some areas, the difference is as much as four times.
(d) In the central parts of Sabah, where the land is hilly and sheltered by mountain ranges, the rainfall received is relatively lower than other regions and is evenly distributed. However, two maxima and two minima can be noticed, though somewhat less distinct. In general, the two minima occur in February and August while the two maxima occur in May and October.
(e) Southern Sabah has evenly distributed rainfall. The annual rainfall total received is comparable to the central part of Sabah. The period February to April is, however, slightly drier than the rest of the year.
Being an equatorial country, Malaysia has uniform temperature throughout the year. The annual variation is less than 2°C except for the east coast areas of Peninsular Malaysia, which are often affected by cold surges originating from Siberia during the northeast monsoon. Even there, the annual variation is below 3°C.
The daily range of temperature is large, being from 5°C to 10°C at the coastal stations and from 8°C to 12°C at the inland stations, but the excessive day temperatures which are found in continental tropical areas are never experienced. It may be noted that an air temperature of 38°C has very rarely been recorded in Malaysia. Although the days are frequently hot, the nights are reasonably cool.
Although the seasonal and spatial temperature variations are relatively small, they are nevertheless fairly definite in some respects and are worthy of mention. Over the whole Peninsula, there is a definite variation of temperature with the monsoons, and this is accentuated in the east coast districts. April and May are the months with the highest average monthly temperature in most places, and December and January are the months with the lowest average monthly temperature. The average daily temperature in most districts to the east of the Main Range is lower than that of the corresponding districts west of the Main Range. The differences in the average values in the east and the west are due almost entirely to the low day temperatures experienced in the eastern districts during the northeast monsoon as a result of rain and greater cloud cover. At Kuala Terengganu, for example, the day temperature rarely reaches 32°C during the northeast monsoon and often fails to reach 27°C. A number of occasions have been recorded on which the temperature did not rise above 24°C, which is quite frequently the lowest temperature reached during the night in most districts. Night temperatures do not vary to the same extent, the average usually being between 21°C and 24°C. Individual values can fall much below this at nearly all stations; the coolest nights commonly follow some of the hottest days.
As mentioned earlier, Malaysia has high humidity. The mean monthly relative humidity falls within 70 to 90%, varying from place to place and from month to month. For any specific area, the range of the mean monthly relative humidity varies from a minimum of 3% to a maximum of about 15%. In Peninsular Malaysia, the minimum range of mean relative humidity varies from a low of 84% in February to a high of only 88% in November. The maximum range is found in the northwest area of the Peninsula (Alor Setar), where the mean relative humidity varies from a low of 72% in February to a high of 87%. It is observed that in Peninsular Malaysia, the minimum relative humidity is normally found in the months of January and February, except for the east coast states of Kelantan and Terengganu, which have the minimum in March. The maximum is, however, generally found in the month of November.
As in the case of temperature, the diurnal variation of relative humidity is much greater compared to the annual variation. The mean daily minimum can be as low as 42% during the dry months and reaches as high as 70% during the wet months. The mean daily maximum, however, does not vary much from place to place and at no place falls below 94%. It may reach as high as nearly 100%. Again, the northwest states of Kedah and Perlis have the largest diurnal variation of relative humidity.
Sunshine and Solar Radiation
Being a maritime country close to the equator, Malaysia naturally
has abundant sunshine and thus solar radiation. However, it is
extremely rare to have a full day with completely clear sky even in
periods of severe drought. The cloud cover cuts off a substantial
amount of sunshine and thus solar radiation. On the average,
Malaysia receives about 6 hours of sunshine per day. There are,
however, seasonal and spatial variations in the amount of sunshine
received. Alor Setar and Kota Bharu receive about 7 hours per day of
sunshine while Kuching receives only 5 hours on the average. On the
extreme, Kuching receives only an average of 3.7 hours per day in
the month of January. On the other end of the scale, Alor Setar
receives a maximum of 8.7 hours per day on the average in the same month.
Solar radiation is closely related to the sunshine duration. Its
seasonal and spatial variations are thus very much the same as in
the case of sunshine.
Source - Malaysia Meteorological Service
| 3.626318 |
Happy Easter, Happy Passover, and Happy Cherry Blossom Time! I hope you are having a joyous spring!
Native to Japan, the Yoshino cherry (Prunus x yedoensis) is cultivated extensively and is also found growing wild on plains and mountains countrywide. For more than ten centuries, and continuing with no less enthusiasm today, cherry blossom time has been cause for joyful celebration that is deeply integrated in the Japanese culture.
Cherry blossoms, cherry blossoms
As far as you can see.
Across yayoi skies
Is it mist? Is it clouds?
Ah, the fragrance!
Let us go, Let us go and see!
To see a cherry blossom snowstorm:
In the Japanese language the cherry is called “sakura,” which is generally believed to be a corruption of the word “sukuya” (blooming). Poets and artists strive to express the loveliness of its flowers in words and artistry. Called the flower of flowers, when the Japanese use the word “hana” (flower) it has come to mean sakura, and no other flower. Since the Heian period “hanami” has referred to cherry blossom viewing; the term was used to describe cherry blossom parties in the Tale of Genji. Aristocrats wrote poetry and sang songs under the flowering trees for celebratory flower viewing parties. The custom soon spread to the samurai society and by the Edo period, hanami was celebrated by all people.
From ancient times, during early spring planting rituals, falling blossoms symbolized a bounteous crop of rice. Beginning with the Heian period (794–1185), when the imperial courtiers of Kyoto held power, the preference for graceful beauty and the appreciation of cherry blossoms for beauty’s sake began to evolve. The way in which cherry petals fall at the height of their beauty, before they have withered and become unsightly, and the transience of their brief period of blooming, assumed symbolism in Buddhism and the samurai warrior code.
The delicacy and transience of the cherry blossom have poignant and poetic appeal, providing themes for songs and poems since the earliest times. The motif of the five petal cherry blossoms is used extensively for decorative arts designs, including kimonos, works in enamel, pottery, and lacquer ware. Cherry tree wood is valued for its tight grain and is a lustrous reddish brown when polished. The wood is used to make furniture, trays, seals, checkerboards, and woodblocks for producing color wood block prints.
In modern times the advent of the cherry blossom season not only heralds the coming of spring, but is also the beginning of the new school year and the new fiscal year for businesses. Today families and friends gather under the blooms and celebrate with picnicking, drinking, and singing. The fleeting beauty of the blossoms, scattering just a few days after flowering, is a reminder to take time to appreciate life. In the evening when the sun goes down, viewing the pale-colored cherry blossoms silhouetted against the night sky is considered an added pleasure of the season.
The tradition of celebrating cherry blossom season began in the United States when, on Valentine’s Day in 1912, Tokyo mayor Yukio Ozaki gave the city of Washington, D.C., 3,000 cherry trees of twelve different varieties as an act of friendship. First Lady Helen Taft and the wife of the Japanese ambassador, Viscountess Chinda, planted the first two of these cherry trees in Potomac Park. Today cherry blossom festivals are celebrated annually not only in Washington, D.C., but in Brooklyn, San Francisco, Seattle, and Macon, Georgia.
It is said that the true lover of cherry blossoms considers the season to be at its height when the buds are little more than half open—for when the blossoms are fully opened there is already the intimation of their decline.
| 3.129716 |
A few months ago we wrote about Kristianstad, Sweden, an area that now uses biomass to generate all of its heat and some of its electricity. That city pioneered use of this renewable technology, and gradually biomass evolved from a niche component of its fuel mix to the backbone of its fuel supply.
A number of rural areas in Germany and the Netherlands have undertaken similar projects. As the article noted, while biomass could be deployed in similar agricultural regions in the United States, adoption has been slow in this country.
That looks as if it might be changing.
This week the federal Department of Agriculture announced a host of renewable energy and energy efficiency projects in rural America, and Agriculture Secretary Tom Vilsack is touring the Midwest, seeding biomass projects as he goes.
On Friday, the departments of Agriculture and Energy announced that up to $30 million would go toward supporting research and development in advanced biofuels, bioenergy and “high-value biobased products” over the next three to four years.
The money is to be dispensed through the Biomass Research and Development Initiative, which started accepting proposals last year.
If properly produced, biomass heat and power produce fewer emissions than fossil fuels like coal or oil because much of the material used as fuel would otherwise sit in landfill releasing methane, a potent greenhouse gas, as it rots. The use of biomass could also reduce the need to import oil. President Obama has called for a one-third reduction in the nation’s oil imports by 2025.
Biomass can include old tree cuttings, rice husks, corn stalks, manure – almost any kind of biological farm waste. In the past these leftovers were typically left to rot. So a growing number of agricultural regions are burning them or degrading them through chemical digestion to produce biogas.
But new forms of biomass, like the algae biomass produced at the plant that Secretary Vilsack is visiting Friday afternoon, do not use agricultural leftovers; they rely on farmers or factories that grow plants specifically for use as fuel. That involves a different kind of trade-off, since those fields and farms could instead be growing food.
| 3.328921 |
By Alan Mozes
THURSDAY, Jan. 3 (HealthDay News) -- It's possible that a serious mosquito-borne virus -- with no known vaccine or treatment -- could migrate from Central Africa and Southeast Asia to the United States within a year, new research suggests.
The chances of a U.S. outbreak of the Chikungunya virus (CHIKV) vary by season and geography, with those regions typified by longer stretches of warm weather facing longer periods of high risk, according to the researchers' new computer model.
"The only way for this disease to be transmitted is if a mosquito bites an infected human and a few days after that it bites a healthy individual, transmitting the virus," said study lead author Diego Ruiz-Moreno, a postdoctoral associate in the department of ecology and evolutionary biology at Cornell University in Ithaca, N.Y. "The repetition of this sequence of events can lead to a disease outbreak."
And that, Ruiz-Moreno said, is where weather comes into the picture, with computer simulations revealing that the risk of an outbreak rises when temperatures, and therefore mosquito populations, rise.
The study analyzed possible outbreak scenarios in three U.S. locales.
In 2013, the New York region is set to face its highest risk for a CHIKV outbreak during the warm months of August and September, the analysis suggests. By contrast, Atlanta's highest-risk period was identified as longer, beginning in June and running through September. Miami's consistent warm weather means the region faces a higher risk all year.
"Warmer weather increases the length of the period of high risk," Ruiz-Moreno said. "This is particularly worrisome if we think of the effects of climate change over [average] temperatures in the near future."
Ruiz-Moreno discussed his team's research -- funded in part by the U.S. National Institute for Food and Agriculture -- in a recent issue of the journal PLoS Neglected Tropical Diseases.
CHIKV was first identified in Tanzania in 1953, the authors noted, and the severe joint and muscle pain, fever, fatigue, headaches, rashes and nausea that can result are sometimes confused with symptoms of dengue fever.
Few patients die of the illness, and about one-quarter show no symptoms whatsoever. Many patients, however, experience prolonged joint pain, and there is no effective treatment for the disease, leaving physicians to focus on symptom relief.
Disease spread is of paramount concern in the week following infection, during which the patient serves as a viral host for biting mosquitoes. Infected mosquitoes can then transmit the virus and cause a full-blown outbreak.
The U.S. Centers for Disease Control and Prevention became aware of the growing threat of a global outbreak in 2005 and 2006, following the onset of epidemics in India, Southeast Asia, Reunion Island and other islands in the Indian Ocean. In 2007, public health concerns mounted following an outbreak in Italy.
To assess the risk of a U.S. epidemic, the authors collected data concerning regional mosquito population patterns, daily regional weather and human population statistics.
They ran the information through a computer simulation designed to conservatively crunch the numbers based on the likelihood that an outbreak would occur in the coming year after just one CHIKV-infected individual entered any of the three test regions.
The results suggested that because environmental factors affect mosquito growth cycles, the regional risk for a CHIKV outbreak is, to a large degree, a function of weather. The authors said that public health organizations need to be "vigilant," while advocating for region-specific planning to address varying levels of risk across the country.
However, Dr. Erin Staples, a CDC medical epidemiologist based in Fort Collins, Colo., said that although the study was "carefully and nicely done," the investigation's focus on the role of temperature in CHIKV outbreak risk should not negate the importance of other key factors such as human behavior.
"We're aware of the potential introduction and spread of this virus, as well as several other mosquito-borne diseases," she said. "We've been working to create and prepare a response to the risk that this virus could expand into the U.S."
| 3.450047 |
|Click to see the full size image|
About the Image
Since 2003 I've been gathering texts from the web written in indigenous and minority languages. The image above is a "family tree" of the 1000 languages I've found to date, where proximity in the tree is measured by a straightforward statistical comparison of writing systems (details below).
- When you load the full image it will be too big to fit in a browser window and you may not see anything at first – you'll need to use the horizontal and vertical scrollbars to explore different parts of the tree (most browsers will let you zoom in and out also). And because it's an SVG image, you can use your browser's search functionality (probably Ctrl+F or ⌘-F) to find different language codes, although the search behavior can be a bit weird/unpredictable.
- Each language is colored according to its linguistic family (details here). For example, all Indo-European languages are greenish colors, with different subfamilies (Celtic, Germanic, etc.) being slightly different shades of green. I also tried to use similar colors for languages from the same geographical region even when there is no known genetic relationship among them, and so Arawakan, Quechuan, Tucanoan languages (all from South America) are shades of purple, while Central and North American languages are shades of blue.
- Clicking on a language opens a new tab or window with the documentation page for the ISO 639-3 language identifier where you'll find a name for the language in English and a link to its Ethnologue page for additional information.
- What I'm calling "languages" are really "writing systems"; you'll see, for example, separate nodes for bo (Tibetan) and bo-Latn (Tibetan written in Latin script). In a small number of cases I track macrolanguages, regional variants (e.g. en, en-IE, en-ZA), and some dialects. In total, there are 919 distinct ISO 639-3 codes among the 1000 writing systems represented.
The Gory Details
Everything is based on an analysis of three character sequences ("3-grams") in the different languages. It turns out that computing the statistics of 3-grams in a given language provides a "fingerprint" that can be used for language identification and a number of other applications. Specifically, imagine the huge-dimensional vector space V whose axes are labelled with all possible 3-grams of Unicode characters (dim V > 10^15). Given a collection of texts in a language, you can compute the frequencies of all 3-grams that appear in the collection, defining a (sparse) vector in V "representing" the language. We then define the distance between two languages to be the angle between their representative vectors in V. This can be computed by scaling the vectors to unit length and computing their dot product (which is the cosine of the angle we want).
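To make the procedure concrete, here is a minimal Python sketch (not the author's crawler code) that builds a 3-gram frequency profile for a text sample and computes the angular distance between two profiles. The sample strings and function names are illustrative only; real profiles would be computed from the full crawled corpus for each writing system.

```python
import math
from collections import Counter

def trigram_profile(text):
    """Return a sparse frequency vector of character 3-grams for the text."""
    counts = Counter(text[i:i + 3] for i in range(len(text) - 2))
    total = sum(counts.values())
    return {gram: n / total for gram, n in counts.items()}

def angular_distance(p, q):
    """Angle between two sparse vectors; smaller means more similar."""
    dot = sum(weight * q.get(gram, 0.0) for gram, weight in p.items())
    norm_p = math.sqrt(sum(w * w for w in p.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    cosine = dot / (norm_p * norm_q)
    return math.acos(max(-1.0, min(1.0, cosine)))

# Toy example with two tiny "corpora"; real profiles come from crawled texts.
english = trigram_profile("the cat sat on the mat and the dog ran")
irish = trigram_profile("tá an cat ina shuí ar an mata agus rith an madra")
print(angular_distance(english, irish))
```

Dividing each count by the total keeps profiles comparable across corpora of different sizes; the final scaling to unit length happens inside the distance function, exactly as described above.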
Once we know the distance between each pair of languages, we can reconstruct a phylogenetic tree using any of a number of well-known algorithms. The image above was created using the so-called "neighbor-joining" algorithm (which basically builds the tree in a greedy, bottom-up way). A side-effect of the algorithm is that each edge in the tree is assigned a length, but note that the edge lengths in the rendered image have nothing to do with the computed edge lengths (indeed, it's unlikely that the tree can be rendered in a distance-preserving way in two dimensions). Another side-effect of the algorithm is that the tree is connected – by definition, all languages are within a bounded distance of each other – and so near the root of the tree you'll see various languages which use completely different scripts joined in a more-or-less random fashion (Khmer, Georgian, Tamil, Cherokee, etc.). It would be easy enough to tweak the distance function or the algorithm to render languages with different scripts as separate connected components.
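For the tree-building step, any standard neighbor-joining implementation can be applied to the matrix of pairwise distances. The author does not say which implementation he used; the sketch below assumes Biopython is installed, and the four language codes and distance values are made up purely to show the shape of the computation.

```python
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Hypothetical pairwise angular distances between four writing systems,
# given in lower-triangular form with zeros on the diagonal. These stand
# in for the real 1000 x 1000 matrix produced by the 3-gram comparison.
names = ["en", "ga", "gd", "cy"]
matrix = [
    [0.0],
    [0.9, 0.0],
    [0.8, 0.3, 0.0],
    [0.7, 0.6, 0.6, 0.0],
]

dm = DistanceMatrix(names, matrix)
tree = DistanceTreeConstructor().nj(dm)  # neighbor-joining reconstruction
Phylo.draw_ascii(tree)
```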
How many languages are out there?
Ethnologue lists 6909 living languages in the world, but how many have some presence on the web? The answer depends greatly on what kinds of documents you include. If one takes linguistic studies into account, the number might be as high as 4000 – the Open Language Archives Community (OLAC) brings together data from linguistic archives all over the world into a single, searchable interface. The OLAC coverage page shows, at present, the existence of online resources for 3930 of the 6909 Ethnologue languages, with more material coming online every day. The amazing ODIN project harvests examples of interlinear glossed text from linguistic papers, and has over 1250 languages in its database.
The 1000 languages found by my web crawler are, for the most part, what you might call "primary texts": newspapers, blog posts, Wikipedia articles, Bible translations, etc. My best guess at present is that around 1500 languages have primary texts of this kind on the web. If you know of online resources written in a language that's not listed on our status page, please let me know in the comments.
Here are a couple of closely-related (but ill-defined) questions: first, "How many of the 6909 languages have a writing system?" and second, since a great number of the texts we've found are Bible translations or other evangelical works, one might ask "How many languages have a writing system that's used regularly by members of the speaker community?" I've looked around a bit for answers to these questions but I haven't found any careful studies in the literature.
Mash it up!
First, I'd like to thank the hundreds of people who have contributed to the project over the years by providing training texts in many of the languages, correcting errors in the language identification, editing word lists, and helping separate different dialects/orthographies. You'll find many of their names on the project status page. Thanks also to Michael Cysouw who first suggested generating an image of this kind (you can find his image, created in 2005, on the main project page). Finally, thanks to my colleagues at Twitter for several helpful conversations and for their interest in the Indigenous Tweets project.
| 3.380089 |
Development of the Eye (2nd of 3) Beginning of 4th week (4 mm, lens placode appears)
Early in the 4th week optic vesicles extend from the 3rd ventricle and wall of the forebrain (diencephalon). As the vesicle reaches the surface ectoderm it flattens (a) and progressively invaginates (b) to form the optic cup, which remains attached to the forebrain by the optic stalk (precursor of the optic nerve). The asymmetric invagination leaves a groove, the choroid fissure, in the stalk. The adjacent ectoderm thickens to form the lens placode (a & b) which invaginates and (eventually) separates (c) from the ectoderm to form the lens vesicle.
The primary optic vesicle (a) becomes a double-walled optic cup (c). With continued invagination the original lumen of the optic vesicle is reduced to a slit between the 1) inner neural layer and 2) outer pigment layer of the optic cup (c).
Mesenchyme around the optic vesicle will contribute to the fibrous coats of the eye (sclera/cornea) externally and the choroid layer adjacent to the pigment layer. It also forms the hyaloid vessels (c) which pass in the choroid fissure, across the vitreous chamber, to supply the lens.
| 3.321851 |
The bone marrow is the principal source of the many different types of cells that circulate in your blood stream. The term “myelodysplasia” describes certain abnormalities in the production of these blood cells. “Myelodysplastic syndrome” (MDS) refers to at least five different entities, all of which interfere with the growth of blood cells in the bone marrow. The differences among them are found in the appearance of the cells under the microscope and are helpful primarily in determining prognosis.
MDS frequently progresses to a form of acute leukemia. Leukemia is a cancer of the blood cells. But in the case of leukemia, there is an overproduction of immature cells (blasts) circulating in the blood and an underproduction of healthy cells. In MDS, there is usually, but not always, only an underproduction of healthy cells. The progression to acute leukemia is so common that MDS used to be known as “preleukemia.”
The bone marrow contains stem cells, which have the capacity to become any of the cell types that circulate in the blood stream. These stem cells normally undergo a maturation process that results in mature cells with fixed functions:
- Red blood cells carry oxygen.
- Three types of granulocytes (a type of white blood cell) carry out immune functions.
- Two types of lymphocytes (a type of white blood cell) carry out immune functions.
- Macrophages and monocytes help us fight infection.
- Platelets provide a defense against bleeding and bruising.
Once cells have matured in the bone marrow, they are released into the blood circulation. MDS interrupts the normal maturation process of blood cells.
Who Is Affected?
Roughly 3,000 new cases of MDS occur yearly in the United States. A similar rate of 1 to 10 per 100,000 people occurs in the rest of the developed nations. Cases without a known cause are most frequently found in older males, usually between 70 and 80 years of age; among this group, the rate rises to 25 per 100,000 people.
The incidence of MDS among younger populations is rising. This is partially due to the success of chemotherapy and radiation in treating and eradicating other forms of cancer since these therapies increase the risk of developing MDS.
Causes and Complications
The exact cause of MDS is unclear, but certain factors are believed to increase risk. These include radiation for the treatment of cancer, certain drugs and chemicals, genetic factors, and some birth defects.
MDS may lead to a number of complications related to blood cells:
Bleeding —If blood clotting elements (like platelets) become depleted, bleeding may become uncontrollable.
Infection —If immune cells (white blood cells) are depleted, even small infections can be serious.
Anemia —When the number of red blood cells decreases, anemia may develop. A lack of red blood cells reduces the oxygen-carrying capacity of the blood and may cause fatigue, shortness of breath, palpitations.
Leukemia —MDS commonly leads to acute leukemia .
- Reviewer: Mohei Abouzied, MD
- Review Date: 03/2013 -
- Update Date: 00/31/2013 -
| 3.7504 |
Toxic shock syndrome can happen to anyone — men, women, and children. Although it can be serious, it's a very rare illness. If you're concerned about toxic shock syndrome, the smartest thing you can do is to read and learn about it, then take some precautions.
What Is Toxic Shock Syndrome?
If you're a girl who's had her period, you may have heard frightening stories about toxic shock syndrome (TSS), a serious illness originally linked to the use of tampons. But TSS isn't strictly related to tampons. The contraceptive sponge and the diaphragm, two types of birth control methods, have been linked to TSS. And, sometimes, the infection has occurred as a result of wounds or surgery, where the skin has been broken, allowing bacteria to enter.
TSS is a systemic illness, which means that it affects the whole body. It can be caused by one of two different types of bacteria, Staphylococcus aureus and Streptococcus pyogenes — although toxic shock that is caused by the Streptococcus bacteria is rarer. These bacteria can produce toxins. In some people whose bodies can't fight these toxins, the immune system reacts. This reaction causes the symptoms associated with TSS.
When people think of TSS, they often think of tampon use. That's because the earliest cases of the illness, back in the late 1970s, were related to superabsorbent tampons. Research led to better tampons and better habits for using them — such as changing tampons more often. The number of TSS cases dropped dramatically. Today about half of all TSS cases are linked to menstruation.
Aside from tampon use, TSS has been linked to skin infections that are typically minor and can be associated with the chickenpox rash. TSS has also been reported following surgical procedures, giving birth, and prolonged use of nasal packing for nosebleeds — although all of these are rare.
What Are the Signs and Symptoms?
Symptoms of TSS occur suddenly. Because it's an illness that is caused by a toxin, many of the body's organ systems are affected.
The signs and symptoms of TSS include:
high fever (greater than 102° F [38.8° C])
rapid drop in blood pressure (with lightheadedness or fainting)
sunburn-like rash that can be anywhere on the body, including the palms of the hands and soles of the feet
vomiting or diarrhea
severe muscle aches or weakness
bright red coloring of the eyes, mouth, throat, and vagina
headache, confusion, disorientation, or seizures
kidney and other organ failure
The average time before symptoms appear for TSS is 2 to 3 days after an infection with Staphylococcus or Streptococcus, although this can vary depending on the infection.
Your risk of getting TSS is already low. But you can reduce it still further by simply following some common-sense precautions:
Clean and bandage any skin wounds.
Change bandages regularly, rather than keeping them on for several days.
Check wounds for signs of infection. If a wound gets red, swollen, painful, or tender, or if you develop a fever, call your doctor right away.
If you're a girl whose period has started, the best way to avoid TSS is to use pads instead of tampons.
For girls who prefer to use tampons, select the ones with the lowest absorbency that can handle your menstrual flow and change them frequently. You can also alternate the use of tampons with sanitary napkins. If your flow is light, use a pad instead of a tampon.
If you've already had an episode of TSS or have been infected with S. aureus, don't use tampons or contraceptive devices that have been associated with TSS (such as diaphragms and contraceptive sponges).
What Do Doctors Do?
TSS is a medical emergency. If you think you or someone you know may have TSS, call a doctor right away. Depending on the symptoms, a doctor may see you in the office or refer you to a hospital emergency department for immediate evaluation and testing.
If doctors suspect TSS, they will probably start intravenous (IV) fluids and antibiotics as soon as possible. They may take a sample from the suspected site of the infection, such as the skin, nose, or vagina, to check it for TSS. They may also take a blood sample. Other blood tests can help monitor how various organs like the kidneys are working and check for other diseases that may be causing the symptoms.
Medical staff will remove tampons, contraceptive devices, or wound packing; clean any wounds; and, if there is a pocket of infection (called an abscess), a doctor may need to drain pus from the infected area.
People with TSS typically need to stay in the hospital, often in the intensive care unit (ICU), for several days to closely monitor blood pressure, respiratory status, and to look for signs of other problems, such as organ damage.
TSS is a very rare illness. Although it can be fatal, if recognized and treated promptly it is usually curable.
| 3.444587 |
Operating on a baby before birth may seem like science fiction, but prenatal surgery is becoming more and more common in special pediatric programs throughout the United States.
Since prenatal surgery was first pioneered in the 1980s, it's become an important way to correct certain birth defects that could be severe (and in some cases fatal) if babies were born with them unrepaired.
Prenatal surgery (also called fetal surgery or fetal intervention) most often is done to correct serious problems that can't wait to be fixed, like certain heart defects, urinary blockages, bowel obstructions, and airway malformations.
Some of the greatest successes have come from correcting spina bifida (an often disabling spinal abnormality in which the two sides of the spine fail to join together, leaving an open area). A recent landmark study reports that kids with spina bifida who received fetal surgery typically are more likely to walk, less likely to have serious neurological problems, and less likely to need a shunt to drain brain fluid.
So how does prenatal surgery work? The most common types are:
Open fetal surgery: In this type of procedure, the mother is given anesthesia, then the surgeon makes an incision in the lower abdomen to access the uterus (as would be done during a Cesarean section). The uterus is opened with a special stapling device that prevents bleeding, the fetus is either partially or completely taken out of the womb, surgery is done, then the baby is returned to the uterus, and the incision is closed. Open fetal surgery is performed for problems like spina bifida and certain other serious conditions. The mother will be in the hospital for 3-7 days and will need a C-section to give birth to the baby (and any future children).
Fetoscopic surgery: This minimally invasive type of procedure, often called Fetendo fetal surgery, is more common than open fetal surgery. Small incisions are made and the fetus is not removed from the uterus. The surgeon uses a very small telescope (called an endoscope) designed just for this kind of surgery and other special instruments to enter the uterus and correct the problem. Fetendo is most useful for problems with the placenta, such as twin-twin transfusion syndrome in which one identical twin grows at the expense of the other because of abnormal blood vessel connections in the placenta they share.
Fetal image-guided surgery: Some fetal surgery is done without an incision to the uterus or use of an endoscope. Doctors use ultrasound to guide them as they perform "fetal manipulations," such as placing a catheter in the bladder, abdomen, or chest. The least-invasive form of fetal surgery, it's not used for serious conditions that require open surgery.
The benefits of prenatal surgery don't come without risks, though. Chief among them are premature births and problems with the incision site. Moms who have fetal intervention are closely monitored for preterm labor and receive medications to control it. Still, for many parents and their babies, fetal surgery is a true medical miracle.
| 3.755874 |
If your child is diagnosed with cancer, it may feel as though you went to bed one night and woke up in an alternate universe. Suddenly there are all these new words — oncology, chemotherapy, radiation — not to mention a slew of new fears and emotions. Now the doctor is saying your child's immune system isn't strong enough for him or her to go to school or even visit family.
If that's the case, chances are it's because your child has developed a condition called neutropenia. Neutropenia is when the body has abnormally low levels of certain white blood cells (called neutrophils), the body's main defense against infection.
Other problems with the immune system caused by the cancer and its treatment vary among patients, but they also can be important reasons to avoid crowds of people that may expose your child to viruses.
A Weakened Immune System
When a germ enters the body, a healthy immune system springs into action, sending an army of neutrophils to the area to attack. The next time those same germs enter the body, the immune system will "remember" them and try to head them off before they can cause any serious trouble.
Someone with cancer, though, commonly has fewer neutrophils patrolling the body. In some cases, that's because the cancer itself damages the bone marrow, the spongy material inside the bones where all new blood cells — including neutrophils — are made. (This is especially common with cancers like leukemia and lymphoma.)
Other times it may be the cancer treatments themselves that are doing the damage. Both chemotherapy (powerful cancer-fighting drugs) and radiation (high-energy X-rays) work by killing the fastest-growing cells in the body — both bad and good. That means that along with cancer cells, healthy blood cells, like neutrophils, often get destroyed too.
With fewer neutrophils, a person is more prone to infection. Even things the body would normally be able to fight off without much trouble, like skin infections or ear infections, become much more serious and long-lasting when a person is in a neutropenic state. That's why it's important to call the doctor right away if your child has a fever, shaking or chills, or any mouth or skin sores, which may be signs of infection.
Fortunately, doctors can use a blood test called an absolute neutrophil count (ANC) to judge how cautious your child needs to be about avoiding germs. When the neutrophil count falls below 1,000 cells per microliter of blood, the risk of infection increases somewhat; when it falls below 500 cells per microliter the risk increases quite a bit more. If it stays below 100 for many days, the risk of serious infection becomes very high.
Sometimes medications called growth factors can be given to encourage the body to produce more neutrophils. But often it's safest for your child to remain home for a length of time determined by the doctor. Places like schools, locker rooms, malls, and even churches — where people are close together and germs spread easily — are just too risky. To your child's weakened immune system, it would feel like standing at the edge of a forest fire with only a water gun for defense.
Being stuck at home can be tough on anyone. When things feel out of control, most people — and especially kids — count on the routines of daily life to help maintain some sense of normalcy. It's only natural that losing that, even temporarily, can leave your child feeling angry, frustrated, left out, depressed, punished, and even jealous of siblings and friends.
So what can you do to help your child make the best of the time at home?
Plenty — though it may depend on how your child feels. Some days the cancer treatments will wipe your child out, and all he or she will want to do is sleep. Other days your child will have more energy. Follow your child's lead, and when he or she seems up for it, here are some ideas for beating the boredom:
Help Your Child Stay Connected
Even if you lowered the boom on screen time before your child got sick, now's a good time to consider easing up. Allowing access to the Internet, texting, IM, photo sharing, Skype, and online games with friends is more than just a perk; it's a valuable way for your child to stay within his or her social network.
Ask the doctor or nurse if a friend can come over. In some cases, if the doctor says it's OK, your child may be able to have a friend over for a brief visit or a movie night. If so, a little prep work on both sides can make the evening go smoothly.
First, make sure the friend knows that your child's cancer, and related neutropenia, isn't contagious — otherwise he or she may be reluctant to come. More important for your child's safety, reschedule get-togethers if there's any question about whether the visitor is sick, even if it's just a cold. And finally, always have everyone who comes in contact with your child wash their hands.
Even though it may hurt to talk about this, let your child know that some friends may deal with his or her illness better than others. Remind your child to try not to take it personally if some friends don't know what to say, or if they talk about things that your child missed out on. The good news is that there will usually be a few true friends who will know how to treat your child like the same person he or she has always been.
What are some things your child never gets a chance to do? Maybe your daughter is an athlete who's always wondered if she has an artistic side; or your son is a computer whiz who's always enjoyed creative writing.
Now's the time to explore those other sides of your child's personality. Painting, drawing, building models, designing clothes or jewelry, learning an instrument, or making a scrapbook or collage of favorite photos are all great ways to get those creative juices flowing. Writing poetry or keeping a journal or blog can also help your child deal with difficult emotions. Even better, reading them back later on will be a reminder of how far your child has come.
OK a Room Makeover
With a little help from you, your child's bedroom can become the coolest and comfiest space ever. Maybe you can turn a corner into a lounge, or the bed into a funky sofa, with fluffy pillows and a bolster. Choose colors that make your child feel good and be sure to keep favorite music, books, and photos nearby to really make it special.
Even when public places are off limits, fresh air usually isn't. So encourage your child to sit on the porch or in the yard and read, talk on the phone, or listen to music.
Help Your Child Feel Empowered
One of the best ways for anyone to feel stronger is to do something good — maybe your child can coordinate a fundraiser for a favorite charity, whether it has to do with cancer or another special cause, like animals or the environment. Maybe he or she could start a website about dealing with cancer that can help other kids in the same position.
Or maybe your child can make a list of things to look forward to when this experience is over. Getting your child to think beyond the here and now can make the time go faster and help everyone stay positive.
Feelings and worries can become overwhelming when they're held in, so find a way to help your child let them out. A good place to start is with your hospital's social worker, who can put your family in touch with others who've been where you are now.
Or check out some of the many cancer support websites, most with chat areas or message boards, that make it easy to share what your family is going through with others who understand.
Try to Keep Up With Schoolwork
And last but not least, encourage your child to stay on top of schoolwork as much as possible. Keep in touch with teachers to find ways to stay involved in classroom life and modify assignments, when necessary.
Staying home may be hard on kids at first, especially if a child was always on the go. The good news for many kids with cancer is that having to stay home is only a temporary setback. Once the immune system recovers, your child should be able to get back in the swing of things.
In the meantime, keep your child's spirits up, look toward the future, and have confidence that, even though things seem difficult now, your child will get through it with help from loved ones.
| 3.335968 |
Pub. date: 2007 | Online Pub. Date: September 25, 2007 | DOI: 10.4135/9781412952637 | Print ISBN: 9780761923879 | Online ISBN: 9781412952637 | Publisher: Sage Publications, Inc.
Saint-Simon, Henri (1760–1825)
Francis M. Williams
Claude-Henri de Rouvroy, Comte de Saint-Simon, was a French social theorist who is considered the founding father of social science and socialism. His writings contained original ideas on the application of scientific methods to the study of humans and society, championed a new “scientific-industrial” age, and influenced social theory and modern thought. His call for a “science of society,” situating it on a par with the natural sciences, influenced his disciple, Auguste Comte (1798–1857), as well as later sociologists. In his major work, Nouveau Christianisme (1825), Saint-Simon advocated a New Christianity—a secular humanist religion to replace the defunct traditional religions—that would have Saint-Simonism ...
| 3.266927 |
Helping you get the most out of our U.S. Census Collection has been a top priority. To date, we’ve painstakingly gone through hundreds of millions of records spanning hundreds of years to make these 14 censuses easier to search and sharper to see. And 1930 is in the works. Take a look — you might just discover someone who was here the whole time.
Deciphering a name on some U.S. Census records can be difficult. So we’ve enhanced the 1850, 1860, 1870, 1900, 1910 and 1920 censuses by adding an alternate name where the information was unclear — over 20% new names in total. This improved indexing lets you find a record by searching either name.
Only heads of households were mentioned by name in early censuses. And many of them shared that name (John Smith, for example) with others in their community. We’ve expanded the indexes for the 1790-1840 censuses with at least four new fields — like the total number of people in the household — so you can use your family knowledge to narrow down your search results.
Now's the perfect time to dig in or renew the search for the missing pieces in your family story.
| 3.185461 |
The pleura are two thin, moist membranes around the lungs. The inner layer is attached to the lungs. The outer layer is attached to the ribs. Pleural effusion is the buildup of excess fluid in the space between the pleura. The fluid can prevent the lungs from fully opening. This can make it difficult to catch your breath.
Pleural effusion may be transudative or exudative based on the cause. Treatment of pleural effusion depends on the condition causing the effusion.
Effusion is usually caused by disease or injury.
Transudative effusion may be caused by:
Exudative effusion may be caused by:
Factors that increase your chance of getting pleural effusion include:
- Having conditions or diseases listed above
- Certain medications such as:
- Nitrofurantoin (Macrodantin, Furadantin, Macrobid)
- Methysergide (Sansert)
- Bromocriptine (Parlodel)
- Procarbazine (Matulane)
- Amiodarone (Cordarone)
- Chest injury or trauma
- Radiation therapy
Surgery, especially involving:
- Organ transplantation
Some types of pleural effusion do not cause symptoms. Others cause a variety of symptoms, including:
- Shortness of breath
- Chest pain
- Stomach discomfort
- Coughing up blood
- Shallow breathing
- Rapid pulse or breathing rate
- Weight loss
- Fever, chills, or sweating
These symptoms may be caused by many other conditions. Let your doctor know if you have any of these symptoms.
The doctor will ask about your symptoms and medical history. A physical exam will be done. This may include listening to or tapping on your chest. Lung function tests will test your ability to move air in and out of your lungs.
Images of your lungs may be taken with:
Your doctor may take samples of the fluid or pleura tissue for testing. This may be done with:
Treatment is usually aimed at treating the underlying cause. This may include medications or surgery.
Your doctor may take a "watchful waiting" approach if your symptoms are minor. You will be monitored until the effusion is gone.
To Support Breathing
If you are having trouble breathing, your doctor may recommend:
- Breathing treatments—inhaling medication directly to lungs
- Oxygen therapy
Drain the Pleural Effusion
The pleural effusion may be drained by:
- Therapeutic thoracentesis —a needle is inserted into the area to withdraw excess fluid.
- Tube thoracostomy—a tube is placed in the side of your chest to allow fluid to drain. It will be left in place for several days.
Seal the Pleural Layers
The doctor may recommend chemical pleurodesis. During this procedure, talc powder or an irritating chemical is injected into the pleural space. This will permanently seal the two layers of the pleura together. The seal may help prevent further fluid buildup.
Radiation therapy may also be used to seal the pleura.
In severe cases, surgery may be needed. Some of the pleura will be removed during surgery. Surgery options may include:
- Thoracotomy—traditional, open chest procedure
- Video-assisted thorascopic surgery (VATS)—minimally-invasive surgery that only requires small keyhole size incisions
Prompt treatment for any condition that may lead to effusion is the best way to prevent pleural effusion.
- Reviewer: Brian Randall, MD
- Review Date: 02/2013 -
- Update Date: 03/05/2013 -
| 3.527226 |
Sugar comes in many forms. One type of sugar, lactose, occurs primarily in milk. Nature gives young children the ability to digest lactose, because they need to do so when they nurse. However, as people grow up, they often lose the lactose-digesting enzyme, known as lactase. The result is a condition called lactose intolerance. Symptoms include intestinal cramps, gas, and diarrhea following consumption of lactose-containing foods.
Principal Proposed Natural Treatments
Lactose intolerance is most prevalent in people of Hispanic, African, Asian, Middle Eastern, or Native American descent, although Caucasians can develop it as well. Treatment consists primarily of avoiding foods containing lactose, such as milk and ice cream. Use of lactase supplements may help people who are lactose intolerant handle more lactose than otherwise. Also, special milk products are available from which the lactose has been removed (often through the use of lactase).
Other Proposed Natural Treatments
Many people confuse milk allergy with lactose intolerance. The two conditions are not related. Milk allergy involves an allergic reaction to the protein component of milk, and lactase supplements will not help. For more information on natural approaches to food allergies, see the food allergy article.
- Reviewer: EBSCO CAM Review Board
- Review Date: 07/2012 -
- Update Date: 07/25/2012 -
| 3.651453 |
Ask Dr. Math
High School Archive
Browse High School History/Biography
Stars indicate particularly interesting answers or
good places to begin browsing.
- The Welsh Vigesimal Number System [02/17/2003]
What can you tell me about the Welsh version of the Vigesimal Number System?
- What If There Was No Zero? [10/21/2003]
Why did it take so long to discover zero? Why did early civilizations
not need zero? How would math as we know it be different if there was
- What is a Sign? [04/21/2003]
Can you do anything in math without signs?
- What is Menelaus' Theorem? [11/15/1998]
Proof of Menelaus' Theorem, and discussion of its converse and Desargues' Theorem.
- What was Fermat's Last Theorem? [07/23/1997]
I wonder if you might take the time to explain Fermat's Last Theorem. I
am an undergraduate in mathematics, so an easy answer would be perfect.
- When Do We Need to Know Roman Numerals? [08/26/2003]
I have a student who does not see how learning Roman numerals will
benefit her. What advice can you give her to make this learning
experience more relevant to her life and needs?
- Where did Fahrenheit and Celsius Come From? [07/26/1997]
How did scientists figure out the relation between two numbers that mean
the same thing, e.g. 0 deg C and 32 deg F?
- Where did Pi come from? [12/2/1994]
We are an adult high school and we were wondering if you could answer our
question. Where did the word pi come from, and how did someone determine
it was equal to 3.14?
- Where does pi come from? [1/13/1995]
I was wondering how exactly the math notation pi was derived, and whoever
derived it, why did he make up that symbol for pi?
- Where does sine come from? [1/6/1995]
Who was the inventor of Sine? And when did he/she discover it? Also, how
did he/she do it?
- Who Invented Algebra? [06/07/2004]
Who invented Algebra?
- Who Invented Binary? [05/07/2000]
Who invented the binary system of numbers and when was it developed?
- Who was Hero (or Heron)? [11/12/1997]
I have been trying to find information on the Greek mathematician Hero.
- Why Are the Numbers on a Dartboard Where They Are? [03/09/2004]
The way that the numbers 1 to 20 are arranged on a standard dart board
at first seems to be random, but are they placed in such a way as to
encourage accuracy, i.e., so that missing a high number results in
hitting a low number?
- Why b for Intercept? [10/16/2003]
In the slope-intercept formula, y = mx + b, why is 'b' used to
represent the y-intercept?
- Why Does 1+1=2? [2/28/1995]
Nobody that I ask has been able to answer that question with an
explanation. Notice the word WHY, not HOW.
- Why Do We Need to Study Rational Numbers? [04/22/2008]
My students want to know why they need to know what rational numbers
are and what use they have in the real world.
- Why Is a Circle 360 Degrees? [07/01/1998]
Why is a circle defined as 360 degrees?
- Why m for slope? [11/9/1994]
My class wants me to ask you why the letter m was selected to represent slope.
- Why Straightedge and Compass Only? [10/02/2002]
My geometry students want to know why constructions can only be done
using a straightedge and a compass.
- Why x and y? [11/30/1994]
Can Dr. Math tell me where we get x and y?
- Women in Mathematics [3/6/1996]
Would someone answer questions from eleventh grade girls about being a mathematician?
- You Can't Trisect an Angle [7/16/1996]
Who proved you can't trisect an angle?
| 3.420724 |
The Growing Child: 7 to 9 Months
While all babies may grow at a different rate, the following indicates the average for boys and girls 7 to 9 months of age:
- Weight: average gain of 1 pound each month; boys usually weigh about 1/2 pound more than girls; two-and-a-half times the birthweight by 8 months
- Height: average growth of about 1/2 inch each month
- Head size: average growth of about 1/4 inch each month
Babies are rapidly developing their physical abilities at this age. They become mobile for the first time and safety in the home becomes an important issue. While babies may progress at different rates, the following are some of the common milestones your baby may reach in this age group:
- Rolls over easily from front to back and back to front
- Sits leaning forward on hands at first, then unsupported
- Bounces when supported to stand
- Gets on hands and feet and rocks back and forth
- May creep, scoot, crawl--backwards first, then forward
- Begins to pull up to stand
- Reaches for and grasps objects using whole hand
- Bangs toy on table
- Can hold an object in each hand
- May hold a bottle
- Plays peek-a-boo
- Grasps object with thumb and finger by 8 to 9 months
- Begins teething, usually starting with the two center front teeth in the lower jaw, then the two center front teeth in the upper jaw
- Learns to drink from cup
- Puts everything into mouth
- Naps are usually twice, sometimes three times a day, for one to two hours each (on average)
- May begin to awaken during the night and cry
It is very exciting for parents to watch their babies become social beings that can interact with others. While every baby develops speech at his or her own rate, the following are some of the common milestones in this age group:
- Makes two syllable sounds (ma-ma, da-da)
- Makes several different vowel sounds, especially "o" and "u"
- Repeats tones or sounds made by others
A baby's awareness of people and surroundings increases during this time. While babies may progress at different rates, the following are some of the common milestones in this age group:
- Responds to own name and "no"
- Pays attention to conversation
- Appears to understand some words (i.e., eat)
- Prefers mother over others
- Enjoys seeing self in mirror
- Responds to changes in emotions of others
- Is afraid of strangers
- Shows interest in and dislike of foods
- Makes attention-getting sounds such as a cough or snort
- Begins to understand object permanence and can uncover a toy after seeing it covered
- May follow one-step commands with a sign to demonstrate (i.e., "get the ball" while parent points to ball)
Consider the following as ways to foster the emotional security of your baby:
- Give your baby safe toys that make noises when shaken or hit.
- Play in front of a mirror, calling your baby by name and pointing to your baby's reflection in the mirror.
- When talking to your baby, pause and wait for him or her to respond just as when talking with an adult.
- Play pat-a-cake and peek-a-boo.
- Name common objects when shown to your baby.
- Make a variety of sounds with your mouth and tone of voice.
- Repeat and expand the sounds your baby makes, such as "ma-ma" when he or she says "ma."
- Show picture books and read stories to your baby every day.
- Give your baby toys with objects or knobs to push, poke, or turn.
- Give your baby toys that stack or nest and show him or her how they work.
- Build a tower with your baby and show him or her how to knock it down.
- Establish a routine for bath and bedtime.
- Offer a cup.
| 3.516818 |
Osteoarthritis is usually diagnosed after your doctor has taken a careful history of your symptoms. A physical exam will be done. There are no definitive lab blood tests to make an absolute diagnosis of osteoarthritis. Certain tests, specifically x-rays of the joint, may confirm your doctor’s impression that you have developed osteoarthritis.
X-ray examination of an affected joint —A joint with osteoarthritis will have lost some of the normal space that exists between the bones. This space is called the joint space. This joint space is made up of articular cartilage, which becomes thin. There may be tiny new bits of bone (bone spurs) visible at the end of the bones. Other signs of joint and bone deterioration may also be present. X-rays , however, may not show very much in the earlier stages of osteoarthritis, even when you are clearly experiencing symptoms.
Arthrocentesis —Using a thin needle, your doctor may remove a small amount of joint fluid from an affected joint. The fluid can be examined in a lab to make sure that no other disorder is causing your symptoms (such as rheumatoid arthritis , gout , infection).
Blood tests —Blood tests may be done to make sure that no other disorder is responsible for your symptoms (such as rheumatoid arthritis or other autoimmune diseases that include forms of arthritis). Researchers are also looking at whether the presence of certain substances in the blood might indicate osteoarthritis and help predict the severity of the condition. These substances include breakdown products of hyaluronic acid (a substance that lubricates joints) and a liver product called C-reactive protein.
- Reviewer: Rosalyn Carson-DeWitt, MD
- Review Date: 09/2011 -
- Update Date: 09/01/2011 -
| 3.169508 |
(ARA) - Traditionally, the term “war zone” elicits images of tanks, gunfire and military personnel. However, as technology evolves, so do the weapons associated with the art of warfare. Most recently, the battleground has moved online, with the introduction of a new computer malware threat known as “Flame.”
Flame steals information from e-operations of certain nation states – making it a vital threat to both governments and military units. Based on the way Flame works, it can be classified as a “cyber weapon,” according to Kaspersky Lab, a Russian anti-virus firm.
Web attacks cost businesses $114 billion each year, according to a 2011 study conducted by Symantec. And as more business, government and military institutions store classified information online, the probability of an attempted attack by these new forms of cyber-weaponry increases. Given the likelihood for future security breaches, the need for professionals with the skills required to protect those at risk for such forms of online espionage is amplifying. The U.S. Bureau of Labor Statistics Occupational Outlook Handbook reports that by the year 2020, demand for cyber security experts will increase by 28 percent.
Much like the way the military and police serve and protect our country and its citizens, cyber security experts play a crucial role in protecting an institution’s network and information from attacks. These professionals, known as computer forensics experts, also analyze the electronic evidence, and in some cases identify and serve as expert witnesses to help prosecute the criminals responsible.
Bachelor’s degree programs such as computer information systems (CIS) help prepare students for this role. Many programs allow students to concentrate their studies in a variety of cyber security specialties. For example, students focusing on computer forensics will learn the skills necessary to handle the electronic evidence of criminal cases and how to identify and prosecute criminals.
At DeVry University, students enrolled in the Computer Information Systems bachelor’s degree program can pursue a cyber security specialization in computer forensics that allows them to gain understanding of the diversity of computer crime, and the laws and principals concerned with computer forensics and electronic evidence. They also learn how to discover data that resides in a computer system, and how to recover deleted, encrypted or damaged file information.
“Technical knowledge is only one piece of the skillset puzzle for cyber security practitioners,” says Dr. Ahmed Naumaan, national dean for the College of Engineering & Information Sciences at DeVry University. “Creativity and the ability to think outside the box play a pertinent role, as those in this field must be able to take on the mindset of the hackers they protect against.”
The many forms of online assault will continue to evolve. As governments, businesses and other institutions increasingly become targets of online warfare, the demand for those armed with the competencies to successfully defend against them will grow.
| 3.08663 |
What do we really know about creativity? Very little. We know that creative genius is not the same thing as intelligence. In fact, beyond a certain minimum IQ threshold – about one standard deviation above average, or an IQ of 115 – there is no correlation at all between intelligence and creativity. We know that creativity is empirically correlated with mood-swing disorders. A couple of decades ago, Harvard researchers found that people showing ‘exceptional creativity’ – which they put at fewer than 1 per cent of the population – were more likely to suffer from manic-depression or to be near relatives of manic-depressives. As for the psychological mechanisms behind creative genius, those remain pretty much a mystery. About the only point generally agreed on is that, as Pinker put it, ‘Geniuses are wonks.’ They work hard; they immerse themselves in their genre.
Could this immersion have something to do with stocking the memory? As an instructive case of creative genius, consider the French mathematician Henri Poincaré, who died in 1912. Poincaré’s genius was distinctive in that it embraced nearly the whole of mathematics, from pure (number theory) to applied (celestial mechanics). Along with his German coeval David Hilbert, Poincaré was the last of the universalists. His powers of intuition enabled him to see deep connections between seemingly remote branches of mathematics. He virtually created the modern field of topology, framing the ‘Poincaré conjecture’ for future generations to grapple with, and he beat Einstein to the mathematics of special relativity. Unlike many geniuses, Poincaré was a man of great practical prowess; as a young engineer he conducted on-the-spot diagnoses of mining disasters. He was also a lovely prose stylist who wrote bestselling works on the philosophy of science; he is the only mathematician ever inducted into the literary section of the Institut de France. What makes Poincaré such a compelling case is that his breakthroughs tended to come in moments of sudden illumination. One of the most remarkable of these was described in his essay ‘Mathematical Creation’. Poincaré had been struggling for some weeks with a deep issue in pure mathematics when he was obliged, in his capacity as mine inspector, to make a geological excursion. ‘The changes of travel made me forget my mathematical work,’ he recounted.
Having reached Coutances, we entered an omnibus to go some place or other. At the moment I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake, I verified the result at my leisure.
How to account for the full-blown epiphany that struck Poincaré in the instant that his foot touched the step of the bus? His own conjecture was that it had arisen from unconscious activity in his memory. ‘The role of this unconscious work in mathematical invention appears to me incontestable,’ he wrote. ‘These sudden inspirations … never happen except after some days of voluntary effort which has appeared absolutely fruitless.’ The seemingly fruitless effort fills the memory banks with mathematical ideas – ideas that then become ‘mobilised atoms’ in the unconscious, arranging and rearranging themselves in endless combinations, until finally the ‘most beautiful’ of them makes it through a ‘delicate sieve’ into full consciousness, where it will then be refined and proved.
Poincaré was a modest man, not least about his memory, which he called ‘not bad’ in the essay. In fact, it was prodigious. ‘In retention and recall he exceeded even the fabulous Euler,’ one biographer declared. (Euler, the most prolific mathematician of all – the constant e takes his initial – was reputedly able to recite the Aeneid from memory.) Poincaré read with incredible speed, and his spatial memory was such that he could remember the exact page and line of a book where any particular statement had been made. His auditory memory was just as well developed, perhaps owing to his poor eyesight. In school, he was able to sit back and absorb lectures without taking notes despite being unable to see the blackboard.
It is the connection between memory and creativity, perhaps, which should make us most wary of the web. ‘As our use of the web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the net’s capacious and easily searchable artificial memory,’ Carr observes. But conscious manipulation of externally stored information is not enough to yield the deepest of creative breakthroughs: this is what the example of Poincaré suggests. Human memory, unlike machine memory, is dynamic. Through some process we only crudely understand – Poincaré himself saw it as the collision and locking together of ideas into stable combinations – novel patterns are unconsciously detected, novel analogies discovered. And this is the process that Google, by seducing us into using it as a memory prosthesis, threatens to subvert.
| 3.033926 |
Specifies the axis type for the X and Y-axes of a Series.
Assembly: System.Windows.Forms.DataVisualization (in System.Windows.Forms.DataVisualization.dll)
The AxisType enumeration represents the axis type used for the X and Y-axes of a Series.
A Series is plotted using two axes, with the exception of pie and doughnut charts. This enumeration is used in conjunction with the XAxisType and YAxisType properties to set the axes used for plotting the associated data points of the series.
For all charts except for bar, stacked bar, pie and doughnut types, the primary and secondary axes are as follows:

Primary X-axis: bottom horizontal axis.
Secondary X-axis: top horizontal axis.
Primary Y-axis: left vertical axis.
Secondary Y-axis: right vertical axis.
Bar and stacked bar charts have their axes rotated 90 degrees clockwise. For example, the primary X-axis for these two charts is the left-vertical axis.
| 3.196426 |
Tenaim: The Conditions of Marriage
Contemporary couples are reinterpreting an old ceremony that set the financial and logistical arrangements for an upcoming marriage
The author provides a historical context for the Jewish tradition of tenaim where the families of prospective bride and groom would meet to set primarily financial and logistical "conditions" for an upcoming marriage; in a small number of communities, tenaim are still practiced this way. In the spirit of the contemporary trend toward developing new Jewish ceremonies, the author then describes how a modern "tenaim" ceremony might work. The contemporary version is, for all practical purposes, a new ceremony based broadly on the notion that certain "conditions," albeit primarily personal ones, are set in anticipation of the upcoming marriage. Excerpted with permission from The New Jewish Wedding (Simon & Schuster, Inc.).
The decision to marry is one of life's momentous choices. Some couples have made it the occasion for a celebration based on the Ashkenazic custom of tenaim--literally, the "conditions" of the marriage.
Every engagement announces that two people are changing their status; the public declaration of their decision instantly designates them bride and groom. Tenaim kicks off the season of the wedding, officially and Jewishly.
An Old Ceremony
From the 12th to the early 19th century, tenaim announced that two families had come to terms on a match between their children. The document setting out their agreement, also called tenaim, would include the dowry and other financial arrangements, the date and time of the huppah [the actual wedding ceremony], and a knas, or penalty, if either party backed out of the deal.
After the document was signed and read aloud by an esteemed guest, a piece of crockery was smashed. The origins of this practice are not clear; the most common interpretation is that a shattered dish recalls the destruction of the Temple in Jerusalem, and it is taken to demonstrate that a broken engagement cannot be mended. The broken dish also anticipates the shattered glass that ends the wedding ceremony.
In some communities it was customary for all the guests to bring some old piece of crockery to smash on the floor. There is also a tradition that the mothers-in-law-to-be break the plate--a symbolic rending of mother-child ties and an acknowledgment that soon their children will be feeding each other. After the plate breaking, the party began.
Possibilities for Celebration of Tenaim Today
Tenaim is not required by Jewish law, and as family-arranged weddings became a thing of the past, the ceremony lost much of its meaning and popularity. The signing of traditional tenaim remains a vestigial practice in some Jewish communities, where the agreement to marry is signed on the day of the wedding itself. Modern reinterpretations of the tenaim return the ceremony to its original, anticipatory celebration some months in advance.
| 3.054497 |
Gold has been known since prehistory. The symbol is derived from Latin aurum (gold).
AuI 9.2 eV, AuII 20.5 eV, AuIII 30.0 eV.
Absorption lines of AuI
In the sun, the equivalent width of AuI 3122(1) is 0.005.
Behavior in non-normal stars
The probable detection of Au I was announced by Jaschek and Malaroda (1970) in one Ap star of the Cr-Eu-Sr subgroup. Fuhrmann (1989) detected Au through the ultimate line of Au II at 1740(2) in several Bp stars of the Si subgroup and Ap stars of the Cr-Eu-Sr subgroup. The presence of Au seems to be associated with that of platinum and mercury.
Au has one stable isotope, Au197, and 20 short-lived isotopes and isomers.
Au can only be produced by the r process.
Published in "The Behavior of Chemical Elements in Stars", Carlos Jaschek and Mercedes Jaschek, 1995, Cambridge University Press.
| 3.741143 |
This week marks the anniversary of the first time a human, Soviet cosmonaut Yuri Gagarin, went into space. It's also the anniversary of the inaugural launch of NASA's space shuttle program. We've assembled a slideshow representing some of the spacecraft used for manned space travel in the past, present, and future. Seen here is Gagarin before he took off on his 108-minute orbit around the Earth, an event that shocked the world and accelerated the space race between the Soviet Union and the U.S.
April 14, 2012 3:59 AM PDT
Photo by: NASA
Caption by: Martin LaMonica
| 3.128484 |
Using voting records, the researchers found out political party affiliation for 35 of the men and 47 of the women in that study. Political parties aren't a perfect match with ideology, but they come very close, the researchers wrote Feb. 13 in the journal PLOS ONE. Most Democrats hold liberal values, while most Republicans hold conservative values.
Comparing the Democrat and Republican participants turned up differences in two brain regions: the right amygdala and the left posterior insula. Republicans showed more activity than Democrats in the right amygdala when making a risky decision. This brain region is important for processing fear, risk and reward.
Meanwhile, Democrats showed more activity in the left posterior insula, a portion of the brain responsible for processing emotions, particularly visceral emotional cues from the body. The particular region of the insula that showed the heightened activity has also been linked with "theory of mind," or the ability to understand what others might be thinking.
While their brain activity differed, the two groups' behaviors were identical, the study found.
Schreiber and his colleagues can't say whether the functional brain differences nudge people toward a particular ideology or not. The brain changes based on how it is used, so it is possible that acting in a partisan way prompts the differences.
The functional differences did mesh well with political beliefs, however. The researchers were able to predict a person's political party by looking at their brain function 82.9 percent of the time. In comparison, knowing the structure of these regions predicts party correctly 71 percent of the time, and knowing someone's parents' political affiliation can tell you theirs 69.5 percent of the time, the researchers wrote.
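To make the "predict party from brain function" idea concrete, here is a minimal sketch of how such an accuracy figure can be estimated from two regional activation measures. It is not the study's actual analysis pipeline, and the feature values below are synthetic stand-ins generated to mirror the reported direction of the group differences; only the sample sizes and the 82.9 percent benchmark come from the article.

```python
# A minimal sketch (not the study's actual analysis): estimating how well a
# two-feature classifier predicts party labels, using synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 35 Republicans (label 1) and 47 Democrats (label 0), as in the study.
labels = np.array([1] * 35 + [0] * 47)

# Hypothetical activation features: Republicans get a higher right-amygdala
# mean, Democrats a higher left-posterior-insula mean, mirroring the reported
# direction of the differences. The numbers themselves are invented.
amygdala = rng.normal(loc=np.where(labels == 1, 1.0, 0.0), scale=1.0)
insula = rng.normal(loc=np.where(labels == 0, 1.0, 0.0), scale=1.0)
X = np.column_stack([amygdala, insula])

# Cross-validated accuracy answers "how often can we predict party from brain
# function?" -- the study reports 82.9% for its real data.
accuracy = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
print(f"cross-validated accuracy on synthetic data: {accuracy:.1%}")
```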
This article originally appeared on LiveScience.com.
| 3.011965 |
It's a look that's been painted and photographed untold times: a mother gazing deep into her infant's eyes while the two smile and kiss. Psychologists believe this interplay helps a child's emotional and cognitive development. The behavior was thought to exist only in humans and to a lesser extent in our closest kin, chimpanzees. Now, scientists have discovered similarly intense shared gazing and facial expressions in monkeys. And that means, the researchers say, that this kind of maternal communication dates back at least 30 million years.
Although scientists have studied rhesus macaque monkeys (Macaca mulatta) in the lab and field for more than 50 years, they missed this key behavior. "Previous researchers were looking more at what happens when a mother and infant are separated," says Pier Francesco Ferrari, a neuroscientist at the University of Parma in Italy, not what happens when they're together.
But plenty occurs between the two, as Ferrari and his team observed. In a semi-free-range environment at the Laboratory of Comparative Ethology, part of the National Institutes of Health in Bethesda, Maryland, the scientists filmed 14 mother-and-infant pairs during the first 2 months of the youngsters' lives, beginning when the infants were a few hours old. The team watched each pair one to three times a day for 15 minute sessions while they were awake. Infants sleep 50% to 75% of the time, which may be another reason the emotional gazing was previously missed.
As the macaque mothers looked into their babies' eyes, they "actively searched for the infants' gaze and tried to engage their babies," says Ferrari. For instance, a mother might hold her baby's head and pull the child's face toward her own or bounce her own head up and down, all while gazing directly into the infant's eyes. Other times, mothers would smack their lips in an exaggerated manner and kiss their babies' faces--reminiscent of the way human mothers get their infants' attention. The infants often responded by imitating their mothers' lip-smacks or using lip-smacks of their own to get her attention, the team reports today in Current Biology. As in humans, these actions are likely important in the baby rhesus macaque's emotional development, says Ferrari. Unlike in humans, however, the shared behaviors begin to disappear in macaques after the first month as the babies become more independent.
"It's a very interesting study of a neglected, or misjudged, topic," says Frans de Waal, a primatologist at the Yerkes National Primate Research Center at Emory University in Atlanta, Georgia. It demonstrates that "the true observer can still discover things no one knows about" even in species that have been extensively studied, he says.
Because rhesus macaques are born with the capacity to communicate, Ferrari says, “We can now explore the roots of interpersonal communication, maybe even the mutual appreciation of others' intentions and emotions."
| 3.575179 |
These two group activities use mathematical reasoning - one is numerical, one geometric.
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
Place this "worm" on the 100 square and find the total of the four squares it covers. Keeping its head in the same place, what other totals can you make?
| 3.407093 |
Mutualism is very common: the classic example is the relationship between pollinators and their plants. Around 70% of land plants require other species to help them reproduce via pollination. Often, the pollinators, like bees and wasps, gain food from the plant while the plant benefits by getting to mix its genes with other plants - a clear win-win for both. But both have to give up something, too, and whenever there is a cost to a relationship, both sides have good reason to cheat.
When I say cheat, I mean a species not keeping up their half of the deal. A species would gain something if they could maintain the positive benefits provided by the other species without having to expend whatever cost is associated with their side of the mutualistic bargain. A plant would benefit, for example, if it could attract its pollinators without having to make nectar or pretty flowers to attract them.
So how is mutualism maintained when there is strong evolutionary pressure to cheat? In some cases, it's by nature of the relationship. In the example above, it's simply hard for the plant to cheat because skimping on the goods directly affects how the other side acts - no nectar-laden flowers, no reason for a bee or other bug to stop and get covered in pollen.
But some mutualist relationships are easier to cheat on - take the case of fig wasps.
Fig wasps are wasps that lay their eggs in fig flowers. As these flowers turn into fruits, the wasp larvae are protected and fed by the fig, costing the tree resources. This relationship looks parasitic at first glance: the wasp gets healthy babies while the fig gets its fruit ruined. But the wasp has a promise it must keep to the tree: when it lays its eggs, it has to pollinate flowers so the tree can produce seeds.
There are actually two kinds of fig wasps: one that pollinates passively and one that pollinates actively. The passive pollinators collect pollen on their extremities and, while climbing around to deposit eggs, pollinate the trees' flowers without even thinking about it. Passively pollinating wasps do not expend extra energy to pollinate, and they cannot easily avoid carrying pollen, so there's no real way or reason for them to cheat.
David Attenborough explains their relationship rather nicely:
The active pollinators are much more deliberate about things: female wasps specifically collect pollen in specialized pouches and deposit it on another tree's flowers by choice when they lay their eggs. Active pollinators don't have to pollinate, per se - they can, and do sometimes, flit around without collecting pollen and bringing it to another tree. After all, it costs the wasp time and energy to go about collecting and lugging around pollen, so why bother if they don't have to? Instead, the female wasps just infect flowers with wasp eggs, acting more like a parasite than a mutualist.
Clearly, there's an easy, good reason for the wasp to short-change the tree. But, if there's good reason for the wasps to cheat, there is equally good reason for the trees to catch them, evolutionarily speaking. Having a cheating wasp's young growing in its fruit does the tree no good whatsoever. But can the trees spot cheaters and somehow punish them for it?
That's the question that biologists K. Charlotte Jandér and Edward Allen Herre wanted to answer. To find out, they carefully watched six different species of figs, four that had active pollinating wasps and two that had passive pollinating wasps. They wanted to see if the actively-pollinated trees somehow reacted differently to loyal wasps who pollinated like they're supposed to and cheaters. Since it's hard to tell if a wasp is doing its job, instead, the researchers intentionally manipulated the wasps. For each fig tree–pollinator species-pair, they experimentally produced pollen-carrying and artificially pollen-free wasps, which, because they had no pollen, played the role of cheaters. They then waited to see how well the cheaters larvae survived.
They found that the passively pollinated figs had no system in place to protect against cheaters - which is exactly what you'd expect, since it's basically impossible for a passive-pollinating wasp to get around on the flowers without pollinating, meaning that cheating is not likely.
The actively pollinated figs, on the other hand, all punished cheaters.
First off, the figs carrying cheater offspring were aborted more frequently. When a fig aborts a larvae-containing fruit, it kills all of the larvae inside. One active species only kept around 3% of the number of figs that the passively pollinated species did. But to punish them even more, the fig also manipulated the conditions within the growing fruits which contained cheating larvae - per fruit, fewer cheater adults emerged than non-cheating ones. In one species of fig, almost no cheaters survived to adulthood - just 5% of the number that emerged from passively pollinated figs. How exactly the fig changes the condition of the fruit to harm the growing larvae isn't yet known.
This made the scientists wonder how common cheaters were in the wild, and whether the species that strongly reacted to cheating were plagued by more cheaters. As expected, they didn't find any pollen-free passive pollinating wasps, but they did find active pollinating ones that weren't carrying the goods. They also found that the species that cheated the most lived on the fig tree that punished them the least.
These data strongly support consistent coevolution between the fig wasps and their trees. If the tree doesn't catch cheaters, the wasps exploit their longtime friends, and since cheating isn't punished, cheating young grow up and continue cheating, leading to high frequencies of cheaters. This rapidly degrades their relationship from mutualism to parasite-host. However, if the trees respond by culling free-riders, they reduce the number of wasps inclined to cheat and maintain the true mutualism that the two have had for around 80 million years.
Mutualism is often portrayed as "playing nice", a beautiful harmony between species. Just listen to how the relationship between active pollinating fig wasps and their trees is portrayed in this PBS special:
How sweet. Too bad it's totally not true. Just like the arms races between predator and prey or parasite and host, mutualist species constantly adapt to try to get the upper hand in their relationship. There is still a battle going on between even the best of friends to gain an evolutionary advantage, and just like other interactions, mutualists have to constantly evolve to maintain the status quo.
Jander, K., & Herre, E. (2010). Host sanctions and pollinator cheating in the fig tree-fig wasp mutualism Proceedings of the Royal Society B: Biological Sciences DOI: 10.1098/rspb.2009.2157
| 3.489841 |
Donald W. Pfaff
Donald W. Pfaff, PhD
Donald W. Pfaff, Ph.D., professor and head of the Laboratory of Neurobiology and Behavior at The Rockefeller University, is a brain scientist who uses neuroanatomical, neurochemical and neurophysiological methods to study the cellular mechanisms by which the brain controls behavior. His laboratory’s research has proceeded through four steps to demonstrate how steroid hormone effects on nerve cells can direct natural, instinctive behaviors.
First, Pfaff is known for discovering exact cellular targets for steroid hormones in the brain. A system of hypothalamic and limbic forebrain neurons with sex hormone receptors, discovered in rodents, was later found to be present in species ranging from fish through primates. This hormone-sensitive system apparently is a general feature of the vertebrate brain. His lab recently found that “knocking out” the gene for the estrogen receptor in animals prevents female reproductive behavior. Surprisingly, that single gene deletion resulted both in masculinizing female animals and, counterintuitively, feminizing males’ behavior.
Secondly, his lab at Rockefeller then worked out the neural circuitry for hormone-dependent female reproductive behavior, the first behavior circuit elucidated for any mammal. Third, he and his colleagues demonstrated several genes that are “turned on” by estrogens in the forebrain. Fourth, in turn, their gene products facilitate reproductive behavior. For example, the induction of one of them, the gene for the progesterone receptor, showed that the hormone estrogen could turn on another transcription factor important, in turn, for behavioral control. Regulated gene expression in the brain participates in the control of behavior.
Taken together, these four advances proved that specific chemicals acting in specific parts of the brain could determine individual behavioral responses.
While two genetic transcription factors, estrogen receptor and progesterone receptor, cooperate with each other to promote reproductive behavior, another transcription factor, thyroid hormone receptor, actually interferes with estrogenic actions. Seasonal environmental changes, raising thyroid hormone levels, can block reproductive behaviors when they would be biologically inappropriate.
In an experiment that lent support to the concept of the “unity of the body,” Pfaff found that the nervous system protein GnRH promotes reproductive behavior as well as directing the pituitary to stimulate the ovaries and testes. This action of GnRH renders instinctive behaviors congruent with the physiology of reproductive organs elsewhere in the body.
Pfaff’s lab subsequently discovered that GnRH-producing neurons are not actually born in the brain as other neurons are. Instead, during embryonic development, they are born in the olfactory epithelium. Once born, they migrate up the nose and into the forebrain. In humans, interruption of that migration, especially in men, causes a state in which the body does not produce adequate amounts of the sex hormone testosterone. This hypogonadal state is associated with a loss of libido.
In 2003, Pfaff received an NIH MERIT Award for the study of generalized arousal, which is responsible for activating all behavioral responses. His team formulated the first operational definition of nervous system arousal, enabling scientists to measure arousal quantitatively in laboratory animals, as well as in human beings. In humans, deficits in arousal contribute to such cognitive problems as attention deficit hyperactivity disorder, autism and Alzheimer's disease. Erosion of arousal also may account for some of the mental difficulties that people face as they age. Understanding generalized arousal may help scientists develop pharmacological methods to enhance alertness during the day and sleep at night. Analyzing the mechanisms of arousal may also lead to a more precise anesthesiology.
Pfaff has made fundamental contributions to our understanding of how the administration of sex hormones can affect health. Pfaff’s lab recently showed that giving hormone doses in pulses, rather than as a steady exposure, may maximize the benefits and limit the side effects now associated with hormone therapies. By giving estrogen replacement to rats, the scientists studied the actions of the hormone at the level of the brain cell's protective outer membrane, and inside the nucleus where the cell's DNA is housed. They found that both the membrane and the DNA pathways are crucial, with one facilitating the other, in triggering hormone-dependent gene expression and female mating behavior. By limiting the estrogen exposure to short pulses, the total dose can be kept much smaller than with steady delivery, and therefore some of the negative effects will be reduced.
Born in Rochester, N.Y., on December 9, 1939, he received the A.B. degree magna cum laude from Harvard College in 1961 and a Ph.D. from the Massachusetts Institute of Technology in 1965. He held a National Merit Scholarship, Harvard National Scholarship, Woodrow Wilson Fellowship, MIT President's Award Fellowship, National Institutes of Health Predoctoral Fellowship and National Science Foundation Postdoctoral Fellowship.
Pfaff joined The Rockefeller University in 1966 as a postdoctoral fellow. He was named assistant professor in 1969, associate professor in 1971, granted tenure in 1973 and promoted to full professor in 1978.
He is a member of the U.S. National Academy of Sciences and a fellow of the American Academy of Arts and Sciences. He also is a member of several scientific organizations related to studies of the central nervous system.
He is the author of Estrogens and Brain Function (Springer, 1980), Drive: Neurobiological and Molecular Mechanisms of Sexual Motivation (MIT Press, 1999), Brain Arousal and Information Theory (Harvard University Press, 2005) and The Neuroscience of Fair Play: Why We (Usually) Follow the Golden Rule (Dana Press, 2007). He has edited The Physiological Bases of Motivation (1982), Ethical Questions in Brain and Behavior (1984), Genetic Influences on the Nervous System (CRC Press, 1999) and Hormones, Brain and Behavior (5 volumes, Academic Press, 2002). He also is on the editorial boards of several scientific journals.
Pfaff and his first wife, the poet Stephanie Strickland, have three children: Robin (Palo Alto, Calif.), Alexander (New York, N.Y.) and Douglas (New York, N.Y.).
| 3.00018 |
An important discovery has been made with respect to the mystery of “handedness” in biomolecules. Researchers led by Sandra Pizzarello, a research professor at Arizona State University, found that some of the possible abiotic precursors to the origin of life on Earth have been shown to carry “handedness” in a larger number than previously thought.
The work is being published in this week’s Early Edition of the Proceedings of the National Academy of Sciences. The paper is titled, “Molecular asymmetry in extraterrestrial chemistry: Insights from a pristine meteorite,” and is co-authored by Pizzarello and Yongsong Huang and Marcelo Alexandre, of Brown University.
Pizzarello, in ASU’s Department of Chemistry and Biochemistry, worked with Huang and Alexandre in studying the organic materials of a special group of meteorites that contain, among a variety of compounds, amino acids that have identical counterparts in terrestrial biomolecules. These meteorites are fragments of asteroids that are about the same age as the solar system (roughly 4.5 billion years).
Scientists have long known that most compounds in living things exist in mirror-image forms. The two forms are like hands; one is a mirror reflection of the other. They are different, cannot be superimposed, yet identical in their parts.
When scientists synthesize these molecules in the laboratory, half of a sample turns out to be “left-handed” and the other half “right-handed.” But amino acids, which are the building blocks of terrestrial proteins, are all “left-handed,” while the sugars of DNA and RNA are “right-handed.” The mystery as to why this is the case, “parallels in many of its queries those that surround the origin of life,” said Pizzarello.
Years ago Pizzarello and ASU professor emeritus John Cronin analyzed amino acids from the Murchison meteorite (which landed in Australia in 1969) that were unknown on Earth, hence solving the problem of any contamination. They discovered a preponderance of “left-handed” amino acids over their “right-handed” form.
“The findings of Cronin and Pizzarello are probably the first demonstration that there may be natural processes in the cosmos that generate a preferred amino acid handedness,” Jeffrey Bada of the Scripps Institution of Oceanography, La Jolla, Calif., said at the time.
The new PNAS work was made possible by the finding in Antarctica of an exceptionally pristine meteorite. Antarctic ices are good “curators” of meteorites. After a meteorite falls -- and meteorites have been falling throughout the history of Earth -- it is quickly covered by snow and buried in the ice. Because these ices are in constant motion, when they come to a mountain, they will flow over the hill and bring meteorites to the surface.
“Thanks to the pristine nature of this meteorite, we were able to demonstrate that other extraterrestrial amino acids carry the left-handed excesses in meteorites and, above all, that these excesses appear to signify that their precursor molecules, the aldehydes, also carried such excesses,” Pizzarello said. “In other words, a molecular trait that defines life seems to have broader distribution as well as a long cosmic lineage.”
“This study may provide an important clue to the origin of molecular asymmetry,” added Brown associate professor and co-author Huang.
Source: Arizona State University
| 3.611129 |
A schematic of a blind quantum computer that could protect user's privacy.
Image credit: Phillip Walther et al./Vienna University.
Researchers worry that if quantum computers are realized in the next few years, only a few specialized facilities will be able to host them. This may leave users' privacy vulnerable. To combat this worry, scientists have proposed a "blind" quantum computer that uses polarization-entangled photonic qubits.
| 3.180372 |
This summer, Mount Diablo Unified School District's governing board will make sure district policies on bullying comply with state legislation known as Seth's Law, which went into effect on July 1. Named for Seth Walsh, a 13-year-old in Tehachapi, Calif. who hanged himself in 2010 after being bullied for being gay, AB 9 requires public schools to have clear rules about preventing and punishing bullying.
Here's the definition of bullying Mt. Diablo Unified staff shared with the school board in June:
No student or group of students shall through physical, written, verbal, or other means harass, sexually harass, threaten, intimidate, cyberbully, cause bodily injury to, or commit hate violence against any other student or school personnel.
Unwilling to force new costs on cash-strapped school districts, state lawmakers nixed language in the original bill that would have required staff to attend trainings on bullying.
It falls on the California Department of Education to make sure school districts are following the new law, but with resources scarce, it will likely be difficult for the CDE to do much in the way of enforcement. That means administrators, teachers, parents and students will ultimately be responsible for addressing the problem of bullying in individual schools.
What do you think about the culture in Pleasant Hill's secondary schools? Does school staff take bullying seriously? Do gay and lesbian students feel respected and safe? Tell us in the comments below.

Suspensions for bullying, violence, intimidation or sexual harassment in 2010-2011: 57
Total suspensions in 2010-2011: 113
Source: California Department of Education
During the 2010-2011 school year, researchers for the California Healthy Kids Survey asked around 5,600 secondary students in Mount Diablo Unified how they feel about their schools. The findings below are from the questions related to bullying.

Mean rumors spread about you 2 or more times: 7th graders 23 percent, 9th graders 33 percent, 11th graders 20 percent.
Sexual comments or jokes directed at you 2 or more times: 7th graders 28 percent, 9th graders 34 percent, 11th graders 36 percent.
Been made fun of for the way you look or talk 2 or more times: 7th graders 27 percent, 9th graders 25 percent, 11th graders 24 percent.
Been pushed, shoved or hit 2 or more times: 7th graders 24 percent, 9th graders 15 percent, 11th graders 9 percent.
Been afraid of being beaten up 2 or more times: 7th graders 12 percent, 9th graders 9 percent, 11th graders 7 percent.
Been in a physical fight 2 or more times: 7th graders 12 percent, 9th graders 9 percent, 11th graders 8 percent.
Source: California Healthy Kids Survey, 2010-2011
| 3.109736 |
Railroads and ferries brought prosperity
A. B. Safford Memorial Museum in Cairo, Illinois, built in 1883
Cairo, Illinois, is at the extreme southern tip of Illinois, at the point where the Ohio and Mississippi Rivers converge.
I always have mixed feelings as I drive through Cairo (pronounced "Kay-roh".) Sadly, the town has endured a long period of hard times and population loss. In the business district, empty lots suggest that many deteriorated buildings have been bulldozed and hauled away. Some old buildings, still standing, are candidates for the next demolition list.
I'm not sure if this church is in use.
Cairo became an important railroad hub after the Civil War, and the town enjoyed several decades of great prosperity. Train cars (and other vehicles) were ferried across the rivers, and the ferry business was as important to local fortunes as the railroad and river-shipping businesses.
The Riverlore in Cairo, Illinois
Then in 1889, the Illinois Central Railroad completed the Cairo Rail Bridge across the Ohio River. It was a masterpiece of engineering. The metal bridge itself was nearly 2 miles long and the entire structure including the wooden approaches was almost 4 miles long. Freight from Chicago could travel directly to New Orleans via the Cairo Rail Bridge -- a revolution in rail shipping, but a blow to Cairo.
More mansions in Cairo
Vehicles traveling in the Cairo area still used the ferries until two highway bridges were built -- the Mississippi River bridge (leading to Missouri) in 1929, and the Ohio River bridge (leading to Kentucky) in 1937. The bridges and roads connected a short distance south of Cairo, so travelers could quickly cross both rivers without even entering town.
The loss of the railroad and ferry industries was significant, but it alone did not kill the town. By the early 1900s, other serious problems (racism, corruption, violence, crime) were well-established in Cairo. Over the next century, these evils had a slow-but-deadly effect on the town. You can read about the darker side of Cairo's history at "Cairo, Illinois, Death by Racism."
Overgrowth and disrepair, too!
A photo I took inside the Customs House some years ago
Seen at Wickliffe, Kentucky
Ohio River bridge, just south of Cairo
| 3.262089 |
Edwards syndrome, also known as trisomy 18, is caused by the presence of three copies of chromosome 18, instead of the usual two, in a fetus or baby's cells.
The additional chromosome usually occurs before conception. A healthy egg or sperm cell contains 23 individual chromosomes - one to contribute to each of the 23 pairs of chromosomes needed to form a normal cell with 46 chromosomes. Numerical errors arise at either of the two meiotic divisions and cause the failure of segregation of a chromosome into the daughter cells (non-disjunction). This results in an extra chromosome making the haploid number 24 rather than 23. Fertilization of these eggs or sperm that contain an extra chromosome results in trisomy, or three copies of a chromosome rather than two.
It is this extra genetic information that causes all the abnormalities characteristic of individuals with Edwards Syndrome. As each and every cell in their body contains extra information, the ability to grow and develop appropriately is delayed or impaired. This results in characteristic physical abnormalities such as low birth weight; a small, abnormally shaped head; small jaw; small mouth; low-set ears; and clenched fists with overlapping fingers. Babies with Edwards syndrome also have heart defects, and other organ malformations such that most systems of the body are affected.
Edwards Syndrome also results in significant developmental delays. For this reason a full-term Edwards syndrome baby may well exhibit the breathing and feeding difficulties of a premature baby. Given the assistance offered to premature babies, some of these infants are able to overcome these initial difficulties, but most eventually succumb.
The survival rate for Edwards Syndrome is very low. About half die in utero. Of liveborn infants, only 50% live to 2 months, and only 5 - 10% will survive their first year of life. Major causes of death include apnea and heart abnormalities. It is impossible to predict the exact prognosis of an Edwards Syndrome child during pregnancy or the neonatal period. As major medical interventions are routinely withheld from these children, it is also difficult to determine what the survival rate or prognosis would be for the condition if they were treated with the same aggressiveness as their genetically normal peers. They are typically severely to profoundly developmentally delayed.
The rate of occurrence for Edwards Syndrome is ~ 1:3000 conceptions and 1:6000 live births, as 50% of those diagnosed prenatally with the condition will not survive the prenatal period. Although women in their 20's and 30's may conceive Edwards Syndrome babies, there is an increased risk of conceiving a child with Edwards Syndrome as a woman's age increases.
A small percentage of cases occur when only some of the body's cells have an extra copy of chromosome 18, resulting in a mixed population of cells with a differing number of chromosomes. Such cases are sometimes called mosaic Edwards syndrome. Very rarely, a piece of chromosome 18 becomes attached to another chromosome (translocated) before or after conception. Affected people have two copies of chromosome 18, plus extra material from chromosome 18 attached to another chromosome. With a translocation, the person has a partial trisomy for chromosome 18 and the abnormalities are often less than for the typical Edwards syndrome.
Features and characteristics
Symptoms and findings may be extremely variable from case to case. However, in many affected infants, the following may be found:
- Growth deficiency
- Feeding difficulties
- Breathing difficulties
- Developmental delays
- Mental retardation
- Undescended testicles in males
- Prominent back portion of the head
- Small head (microcephaly)
- Low-set, malformed ears
- Abnormally small jaw (micrognathia)
- Small mouth
- Cleft lip/palate
- Upturned nose
- Narrow eyelid folds (palpebral fissures)
- Widely-spaced eyes (ocular hypertelorism)
- Dropping of the upper eyelids (ptosis)
- Overlapped, flexed fingers
- Underdeveloped or absent thumbs
- Underdeveloped nails
- Absent radius
- Webbing of the second and third toes
- Clubfeet or Rocker bottom feet
- Small pelvis with limited movements of the hips
- Short breastbone
- Kidney malformations
- Structural heart defects at birth (i.e., ventricular septal defect, atrial septal defect, patent ductus arteriosus)
- Trisomy 18 Support Foundation
- Support Organisation For Trisomy 18, 13, and Related Disorders (SOFT)
- The Chromosome 18 Registry & Research Society
| 3.588655 |
Scientific Investigations Report 2005-5232
The carbonate-rock aquifer of the Great Basin is named for the thick sequence of Paleozoic limestone and dolomite with lesser amounts of shale, sandstone, and quartzite. It lies primarily in the eastern half of the Great Basin and includes areas of eastern Nevada and western Utah as well as the Death Valley area of California and small parts of Arizona and Idaho. The carbonate-rock aquifer is contained within the Basin and Range Principal Aquifer, one of 16 principal aquifers selected for study by the U.S. Geological Survey’s National Water- Quality Assessment Program.
Water samples from 30 ground-water sites (20 in Nevada and 10 in Utah) were collected in the summer of 2003 and analyzed for major anions and cations, nutrients, trace elements, dissolved organic carbon, volatile organic compounds (VOCs), pesticides, radon, and microbiology. Water samples from selected sites also were analyzed for the isotopes oxygen-18, deuterium, and tritium to determine recharge sources and the occurrence of water recharged since the early 1950s.
Primary drinking-water standards were exceeded for several inorganic constituents in 30 water samples from the carbonate-rock aquifer. The maximum contaminant level was exceeded for concentrations of dissolved antimony (6 μg/L) in one sample, arsenic (10 μg/L) in eleven samples, and thallium (2 μg/L) in one sample. Secondary drinking-water regulations were exceeded for several inorganic constituents in water samples: chloride (250 mg/L) in five samples, fluoride (2 mg/L) in two samples, iron (0.3 mg/L) in four samples, manganese (0.05 mg/L) in one sample, sulfate (250 mg/L) in three samples, and total dissolved solids (500 mg/L) in seven samples.
Six different pesticides or metabolites were detected at very low concentrations in the 30 water samples. The lack of VOC detections in water sampled from most of the sites is evidence that VOCs are not common in the carbonate-rock aquifer. Arsenic values for water range from 0.7 to 45.7 μg/L, with a median value of 9.6 μg/L. Factors affecting arsenic concentration in the carbonate-rock aquifer in addition to geothermal heating are its natural occurrence in the aquifer material and time of travel along the flow path.
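As an illustration of the kind of threshold comparison the report describes, the short sketch below flags hypothetical arsenic measurements against the 10 μg/L maximum contaminant level and summarizes them. The individual concentrations are invented for the example; only the MCL and the reported range and median come from the text above.

```python
# Illustrative only: flagging water samples that exceed a drinking-water
# standard and summarizing the arsenic data, in the spirit of the report.
# These concentrations are made up; only the 10 ug/L arsenic MCL and the
# reported range/median come from the text.
import statistics

arsenic_ug_per_L = [0.7, 2.1, 3.4, 5.0, 8.8, 9.6, 10.4, 12.9, 18.2, 27.5, 45.7]
MCL_ARSENIC = 10.0  # maximum contaminant level, ug/L

exceedances = [c for c in arsenic_ug_per_L if c > MCL_ARSENIC]
print(f"{len(exceedances)} of {len(arsenic_ug_per_L)} samples exceed the arsenic MCL")
print(f"range: {min(arsenic_ug_per_L)}-{max(arsenic_ug_per_L)} ug/L, "
      f"median: {statistics.median(arsenic_ug_per_L)} ug/L")
```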
Most of the chemical analyses, especially for VOCs and nutrients, indicate little, if any, effect of overlying land-use patterns on ground-water quality. The water quality in recharge areas for the aquifer where human activities are more intense may be affected by urban and/or agricultural land uses as evidenced by pesticide detections. The proximity of the carbonate-rock aquifer at these sites to the land surface and the potential for local recharge to occur through the fractured rock likely results in the occurrence of these and other land-surface related contaminants in the ground water. Water from sites sampled near outcrops of carbonate-rock aquifer likely has a much shorter residence time resulting in a potential for detection of anthropogenic or land-surface related compounds. Sites located in discharge areas of the flow systems or wells that are completed at a great depth below the land surface generally show no effects of land-use activities on water quality. Flow times within the carbonate-rock aquifer, away from recharge areas, are on the order of thousands of years, so any contaminants introduced at the land surface that will not degrade along the flow path have not reached the sampled sites in these areas.
First posted February, 2006
Schaefer, D.H., Thiros, S.A., and Rosen, M.R., 2005, Ground-water quality in the carbonate-rock aquifer of the Great Basin, Nevada and Utah, 2003: U.S. Geological Survey Scientific Investigations Report 2005-5232, 41 p.
Description of Study Area
Study Design and Methods
Appendix 1. Water-quality constituents analyzed in ground-water samples from wells and springs in the carbonate-rock aquifer, Nevada and Utah
| 3.222978 |
Microsoft Windows is a series of popular proprietary operating environments and operating systems created by Microsoft for use on personal computers and servers. Microsoft first introduced an operating environment named Windows in November, 1985, as an add-on to MS-DOS. This was in response to Apple Computer's computer system, the Apple Macintosh, which used a graphical user interface (GUI). Microsoft Windows eventually came to dominate the world personal computer market with market analysts like IDC estimating that Windows has around 90% of the client operating system market. All recent versions of Windows are fully-fledged operating systems.
Tools and Libraries
- Curses library
- List of C Development enviroments on the Game Programming Wiki
- LibSDL, Simple DirectMedia Layer
- 'Microsoft Visual C++ Toolkit 2003', available free from Microsoft.
| 3.22773 |
Our cells generate most of the energy they need in tiny structures inside them called mitochondria, which can be thought of as the cells' powerhouses. Mitochondria have their own DNA, independent of the cell's nuclear genome, which is compellingly similar to the DNA of bacterial genomes. What this suggests is that, more than a billion years ago, mitochondria were not just components of our cells, but were in fact unicellular organisms in their own right. According to this hypothesis – the endosymbiotic theory – mitochondria (and possibly some other organelles) originated as free-living bacteria which later became incorporated inside other cells in a symbiotic relationship.
Like man-made powerhouses, mitochondria produce hazardous by-products as well as useful energy. They are the main source of free radicals in the body – hugely reactive particles which cause damage to all cellular components through oxidative stress. They attack the first thing they come across, which is usually the mitochondrion itself. This hazardous environment has put the genes located in the mitochondrion at risk of mutational damage, and over many years of evolutionary pressure the mitochondrial DNA has gradually moved into the cell's nucleus, where it is comparatively well-protected from the deleterious effects of free-radicals alongside all of the cell's other DNA. This is called allotopic expression, and it has moved all but thirteen of the mitochondrion's full complement of at least one thousand genetic instructions for proteins into the 'bomb-shelter' of the nucleus.
However, the remaining thirteen genes in the mitochondrion itself are subject to the ravages of free-radicals, and are likely to mutate. Mutated mitochondria, as Aubrey de Grey has identified, may indirectly accelerate many aspects of ageing, not least when their mutation causes them to no longer produce the required energy for the cell, in turn impairing the cell's functionality. In order to combat the down-stream ageing damage as a consequence of mitochondrial mutation, de Grey believes that the mitochondrial DNA damage itself needs to be repaired or rendered harmless.
His characteristically bold solution to this problem is to put the mutations themselves beyond use by creating backup copies of the remaining mitochondrial genetic material and storing them in the safety of the cell's nucleus. Allotopically expressed here, like the rest of the mitochondrial DNA, any deletions in the mitochondrial DNA can be safely overwritten by the backup master copy, which is much less likely to mutate hidden away from the constant bombardment of free radicals. There are several difficulties to this solution, not least the fact that the remaining proteins are extremely hydrophobic and so don't 'want' to be moved at all, and additionally the code disparity between the language of the mitochondrial DNA and the nuclear DNA which makes a simple transplantation without translation impossible.
Even if this engineered solution to the problem proves impracticable, at the very least the theory is sound. If we can devise a way to systematically defend our mitochondria from their own waste products, we will drastically reduce the number of harmful free radicals exported throughout our bodies, thereby preventing a lot of the damage that distinguishes the young from the old, extending and improving the quality of our lives as a result.
Dr Aubrey de Grey, a gerontologist from Cambridge, believes that ageing is a disease that can be cured. De Grey sees the human body as a system which, like a man-made machine, ages as the result of the accumulation of various types of damage. And, as with machines, de Grey argues that this damage can be periodically repaired, potentially leading to an indefinite extension of the system's functional life. De Grey believes that just as a mechanic doesn't need to understand precisely how the corrosive process of iron oxidation degrades an exhaust manifold beyond utility in order to successfully repair the damage, so we can design therapies that combat human ageing without understanding the processes that interact to contribute to our ageing. All we have to do is understand the damage itself.
De Grey is confident that he has identified future technologies that can comprehensively remove the molecular and cellular lesions that degrade our health over time, technologies which will one day overcome ageing once and for all. In order to pursue the active development and systematic testing of these technologies, de Grey has made it part of his mission to break the 'pro-ageing trance' that he sees as a widespread barrier to raising the funding and stimulating the research necessary to successfully combat ageing. De Grey defines this trance as a psychological strategy that people use to cope with ageing, fuelled by the incorrect belief that ageing is forever unavoidable. This trance is coupled with the general wisdom that anti-ageing therapies can only stretch out the years of debilitation and disease which accompany the end of most lifetimes. De Grey contends that by repairing the pathologies of ageing we will in fact be able to eliminate this period completely, postponing it with new treatments for indefinitely longer time periods so that no-one ever catches up with the damage caused by their ageing.
To get over our collective 'trance' it is worth realising that this meme has made perfect psychological sense until very recently. Given the traditional assumption that ageing cannot be countered, delayed or reversed, it has paid to make peace with such a seemingly immutable fact, rather than wasting one's life preoccupied with worrying about it. If we follow de Grey's rationale that the body is a machine that can be repaired and restored, we have to accept that there are potential technologies that can effectively combat ageing, and thus the trance can no longer be rationally maintained.
Telomeres are repetitive DNA sequences which cap the ends of chromosomes, protecting them from damage and potentially cancerous breakages and fusings. They act as disposable buffers, much as the plastic aglets at the end of shoelaces prevent fraying. Each time a cell divides, the telomeres get shorter as DNA sequences are lost from the end. When telomeres reach a certain critical length, the cell is unable to make new copies of itself, and so organs and tissues that depend on continued cell replication begin to senesce. The shortening of telomeres plays a large part in ageing (although not necessarily a causal one), and so advocates of life extension are exploring the possibility of lengthening telomeres in certain cells by searching for ways to selectively activate the enzyme telomerase, which maintains telomere length by adding newly synthesized DNA to their ends. If we could induce certain parts of our bodies to express more telomerase, the theory goes, we will be able to live longer, healthier lives, slowing down the decline of ageing.
Every moment we're fighting a losing battle against our telomeric shortening; at conception our telomeres consist of roughly 15,000 DNA base pairs, shrinking to 10,000 at birth when the telomerase gene becomes largely deactivated. Without the maintenance work of the enzyme our telomeres reduce in length at a rate of about 50 base pairs a year. When some telomeres drop below 5,000 base pairs, their cells lose the ability to divide, becoming unable to perform the work they were designed to carry out, and in some cases also releasing chemicals that are harmful to neighbouring cells. Some particularly prominent cell-types that are affected by the replicative shortening of telomeres include the endothelial cells lining blood vessels leading to the heart, and the cells that make the myelin sheath that protects our brain's neurons. Both brain health and heart health are bound to some degree to the fate of cells with a telomeric fuse. The correlation between telomere length and biological ageing has motivated a hope that one day we will be able to prevent and perhaps reverse the effects of replicative senescence by optimally controlling the action of telomerase.
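A rough, back-of-the-envelope sketch of the arithmetic implied by those figures is given below. It assumes the simplified numbers quoted above (about 10,000 base pairs at birth, roughly 50 base pairs lost per year, senescence near 5,000 base pairs) and a strictly linear decline; in reality, loss happens per cell division and varies widely between cell types.

```python
# Back-of-the-envelope arithmetic using the figures quoted above; real
# telomere loss is per cell division and varies widely between cell types.
TELOMERE_AT_BIRTH_BP = 10_000
LOSS_PER_YEAR_BP = 50
SENESCENCE_THRESHOLD_BP = 5_000

def telomere_length(age_years: float) -> float:
    """Linear estimate of telomere length (base pairs) at a given age."""
    return TELOMERE_AT_BIRTH_BP - LOSS_PER_YEAR_BP * age_years

years_to_threshold = (TELOMERE_AT_BIRTH_BP - SENESCENCE_THRESHOLD_BP) / LOSS_PER_YEAR_BP
print(f"threshold reached after ~{years_to_threshold:.0f} years")   # ~100 years
print(f"estimated length at age 70: {telomere_length(70):.0f} bp")  # 6,500 bp
```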
The complexity of synthesizing proteins for specific purposes is so great that predicting the amino acid sequences necessary to generate desired behaviour is a huge challenge. Mutations far away from the protein’s active site can influence its function, and the smallest of changes in the structure of an enzyme can have a large impact on its catalytic efficacy – a key concern for engineers creating proteins for industrial applications. Even for a small protein only 100 amino acids long, there are more possible sequences than there are atoms in the universe.
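The "more sequences than atoms" comparison is easy to check: with 20 standard amino acids at each of 100 positions there are 20^100, roughly 10^130, possible sequences, against a commonly cited rough estimate of about 10^80 atoms in the observable universe. A two-line check of that arithmetic:

```python
# Checking the combinatorics behind the claim above: 20 possible amino acids
# at each of 100 positions, versus a commonly cited ~1e80 atoms in the
# observable universe (a rough estimate, not an exact figure).
from math import log10

exponent = 100 * log10(20)          # log10 of the number of 100-residue proteins
print(f"possible 100-residue proteins: about 10^{exponent:.0f}")              # ~10^130
print(f"that exceeds ~10^80 atoms by a factor of about 10^{exponent - 80:.0f}")
```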
What this means is that an exhaustive search through the space of all possible proteins for the fittest protein for a particular purpose is essentially unachievable, just as a complete search through all possible chess games to decide the absolutely optimal next move is computationally impractical. This is true both for scientists and for nature. This means that even though evolution has been searching the space of all possible proteins for billions of years for solutions to survival, it has in fact explored only a minute corner of all possible variations. All evolved solutions are likely to be 'good enough' rather than the absolute optimum – it just so happens that the ones already 'discovered' are sufficient to create and maintain the diversity and richness of life on planet earth.
New ways of efficiently searching this vast space of possible sequences will reveal proteins with properties that have never before existed in the natural world, and which will hopefully provide answers to many of our most pressing problems. Directed evolution not only provides a faster way of searching this space than many other methods, but it also leaves a complete 'fossil record' of the evolutionary changes that went into evolving a specific protein, providing data on the intermediate stages which will offer insight after detailed study into the relationship between protein sequence and function. Unlike natural evolution, directed evolution can also explore sequences which aren't directly biologically relevant to a single organism's survival, providing a library of industrially relevant proteins, and perhaps one day creating bacteria capable of answering worldwide problems caused by pollution and fossil fuel shortage.
Neo-evolution is factorially faster than normal evolutionary processes. Our genetically engineered organisms have already neo-evolved – shortcutting traditional evolution to produce desirable results without the costly time-delay of selection over hundreds or thousands of generations. Higher-yielding and insecticide-resistant crops have been engineered through the painstaking modification of individual genes, achieving better results than years of selective breeding in a fraction of the time. Genetic engineering of humans, both embryonic and those already alive, will perhaps one day bring the benefits of this new type of evolution to our bodies.
At the moment, we simply do not understand how DNA sequences encode useful functions, and so genetic engineering remains a tremendously costly and laborious process. It cost $25 million and took 150 person-years to engineer just a dozen genes in yeast to cause it to produce an antimalarial drug, and commercial production has yet to begin. The amount of time and money required to effect a beneficial result through genetic engineering – even if it involves relatively simple changes to only a dozen genes – is so costly that the transformative idea of neo-evolved humans has been kept at a safe distance.
But there are other ways to neo-evolve that might make the possibility of too-good-to-miss genetic enhancements in humans a reality before long. Earlier this year, for instance, the National Academy of Engineering awarded its Draper Prize to Frances Arnold and Willem Stemmer for their independent work on 'directed evolution', a technique which harnesses the power of traditional evolution in a highly optimized environment to accelerate the evolution of desirable proteins with properties not found in nature. Rather than attempt to manually code the strings of individual DNA letters necessary to produce a particular trait, directed evolution and its associated 'evolution machines' take a prototype 'parent' gene, create a library of genetic variants from it and apply selection pressures to screen for the strains that produce the desired trait, iterating this process with the best of each batch until the strongest remain. The power of this approach was demonstrated in 2009, when geneticist Harris Wang used directed evolution to create new strains of E. coli bacteria that produce more of the pigment that makes tomatoes red than was previously possible.
To achieve this genetic modification without manually fine-tuning each gene, Wang synthesized 50,000 DNA strands which contained modified sequences of genes that produce the pigment, and multiplied them in his evolution machine. After repeating the process 35 times with the results of each cycle fed into the next, he produced some 15 billion new strains, each with a different combination of mutations in the target pigment-producing genes. Of these new strains, some produced up to five times as much pigment as the original strain, more than the entire biosynthesis industry had ever achieved. The process took days rather than years.
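The diversify-screen-select cycle described above can be sketched as a simple loop. This is only a toy illustration of the general idea of directed evolution, not Wang's actual protocol: the mutation model, library size, number of generations and the stand-in fitness function below are all hypothetical placeholders.

```python
import random

BASES = "ACGT"

def mutate(sequence: str, rate: float = 0.01) -> str:
    """Return a copy of the sequence with random point mutations."""
    return "".join(random.choice(BASES) if random.random() < rate else base
                   for base in sequence)

def directed_evolution(parent: str, fitness, generations: int = 35,
                       library_size: int = 1000, keep: int = 10) -> str:
    """Toy loop: diversify the parents, screen the library, keep the best."""
    pool = [parent]
    for _ in range(generations):
        library = [mutate(p) for p in pool
                   for _ in range(max(1, library_size // len(pool)))]
        library.sort(key=fitness, reverse=True)   # 'screen' for the desired trait
        pool = library[:keep]                     # best variants seed the next cycle
    return pool[0]

# Example with a stand-in fitness function (GC content as a proxy 'trait').
best = directed_evolution("ATGACCGTTAGC" * 5,
                          fitness=lambda s: s.count("G") + s.count("C"))
print(best)
```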
There are three distinct possibilities for how technological and medical advancement will impact future human evolution. The first contingency is that the human species will undergo no further natural selection, because we may have already advanced to a position of evolutionary equipoise, where our technologies artificially preserve genes that would otherwise have been removed by natural selection; evolution no longer has a chance to select. As a species we already control our environment to such an extent that traditional evolutionary pressures have been functionally alleviated – we adapt the environment to us rather than the other way around. Indeed, local mobility and international migration allow populations to genetically integrate to such a degree that the isolation necessary for evolution to take place may in fact no longer be possible.
The second possibility is that we will continue to evolve in the traditional way, through inexorable selection pressures exerted by the natural environment. The isolation necessary for the impact of any environmental changes to be selected for in a population would now have to be on a planetary scale, enabled by the colonization of distant space.
The third possibility is that we will evolve in an entirely new way, guided not by unconscious natural forces but by our own conscious design decisions. In this neo-evolution we would use genetic engineering to eliminate diseases like diabetes, protect against strokes and reduce the risks of cancer. We would be compressing a natural process which takes hundreds of thousands of years into single generations, making evolutionarily advantageous adjustments ourselves.
From an economic perspective, cheating is a simple cost-benefit analysis, where the probability of being caught and the severity of punishment must be weighed against how much stands to be gained from cheating. Behavioural economist Dan Ariely has conducted experimental studies to test whether there are predictable thresholds for this balance, and how they can be influenced.
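That naive cost-benefit framing can be written down directly; the sketch below is purely illustrative, with made-up numbers for the gain, the detection probability and the penalty. As the experiments described next show, real behaviour departs from this model.

```python
def expected_payoff(gain: float, p_caught: float, penalty: float) -> float:
    """Expected value of cheating under the naive cost-benefit model."""
    return (1 - p_caught) * gain - p_caught * penalty

# Illustrative numbers only: a $10 gain, a 5% chance of being caught, a $50 fine.
print(expected_payoff(gain=10.0, p_caught=0.05, penalty=50.0))  # 7.0, so 'worth' cheating
```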
In one study, Ariely gave participants twenty maths problems and only five minutes to solve them. At the end of the time period, Ariely paid each participant one dollar for each correctly answered question; on average people solved four questions and so received four dollars. Ariely tempted some members of the study to cheat by asking them to shred their paper, keep the pieces and simply tell him how many questions they had answered correctly. Now the average number of questions reportedly solved went up to seven – and it wasn't because a few people cheated a lot, but rather that everyone cheated a little.
Hypothesizing that we each have a “personal fudge factor” – a point up to which we can still feel good about ourselves despite having cheated – Ariely ran another experiment to examine how malleable this standard is. Before tempting participants to cheat, Ariely asked them either to recall ten books they read at school or to recall the Ten Commandments. Those who had tried to recall the Commandments – and nobody in the sample managed to get them all – did not cheat at all when given the opportunity, even those who could hardly remember any of the Commandments. When self-declared atheists were asked to swear on the Bible before being tempted to cheat in the task, they did not cheat at all either. Cheating was also completely eradicated by asking students to sign a statement to the effect that they understood that the survey fell under the “MIT Honor Code”, despite MIT having no such code.
In an additional variant of the same experiment, Ariely tried to increase the fudge factor and encourage cheating. A third of participants were told to hand their results paper back to the experimenters, a third were told to shred it and ask for X dollars for X completed questions, and a third were told to shred their results and ask for X tokens. For this last group, tokens were handed out, and the participants would walk a few paces to the side and exchange their tokens for dollars. This short disconnect between token and cash caused cheating rates in this last group to double.
Putting these results in a social context, Ariely ran yet another variant of the experiment to see how people would react when they saw other members of their group cheating. Subjects were given envelopes filled with money, and at the end of the experiment they were told to pay back money for the questions that they did not complete. An actor was planted in the group, without the knowledge of the other participants. After thirty seconds the actor stood up and announced that he had finished all of the questions. He was told that the experiment was over for him and that he could go home, keeping the contents of the envelope. Cheating then rose or fell depending on whether the actor wore a shirt identifying him as coming from the same university as the rest of the students: Carnegie Mellon students cheated more if he was identified as a Carnegie Mellon student, whilst cheating decreased if he was identified by a University of Pittsburgh shirt.
Ariely's results show that the probability of getting caught doesn't influence the rate of cheating so much as social norms do: if people in your own group cheat, you are more likely to cheat as well. If a person from outside your group cheats, the personal fudge factor shrinks and the likelihood of cheating drops – just as it did in the Ten Commandments experiment, which reminded people of their own morality.
The stock market combines a worrying cocktail of features from these experiments. It deals in 'tokens' – stocks and derivatives – rather than 'real' money, and those tokens remain many steps removed from cash for long periods of time. This encourages cheating. Any enclaves of cheating will be reinforced by people mirroring the behaviour of those around them, and this is precisely what happened in the Enron scandal.
Here is a syllogism that is deeply embedded in Western society: welfare is maximized by maximizing individual freedom; individual freedom is maximized by maximizing choice; therefore, welfare increases with more choice.
Supermarkets are an embodiment of this belief. They are symbols of affluence and empowerment conferred through their superabundance of choice. The range of products they offer is dizzying – so disorienting, in fact, that too many options have a paralyzing effect, making it very difficult to choose at all, a fact that completely undermines the belief that maximizing choice has unqualified beneficial effects.
If we finally do manage to make a decision and overcome this paralysis, too much choice diminishes the satisfaction we gain compared with choices made between fewer options. This is because if the choice you make leaves you feeling dissatisfied in any way, it is easy to imagine the myriad other choices that could have been better. These imagined alternatives, conjured from the myriad real alternatives, can induce a regret which dilutes the satisfaction from your choice, even if it was a good one. The wider the range of options, the easier it becomes to regret even the smallest disappointment in your decision.
A wider range of choice also makes it easier to imagine the attractive features of the alternatives that have been rejected, once more diminishing the sense of satisfaction with the chosen alternative. This phenomenon is known as opportunity cost – the sacrifice of other opportunities when a choice is made: choosing to do one thing is choosing not to do many other things. Many of these foregone choices will have attractive features that make whatever you have chosen less attractive, no matter how good it really is.
The maximization of choice leads to an escalation of expectations, where the best that can ever be hoped for is that a decision meets expectations. In a world of extremely limited choice, pleasant surprises are possible. In a world of unlimited choice, perfection becomes the expectation: you could always have made a better choice. When there is only one option on offer, the responsibility for the outcome of that 'choice' is outside your control, and so any disappointment resulting from it can safely be blamed on external factors. But when you have to choose between hundreds of options it becomes much easier to blame yourself if anything is less than perfect. It is perhaps no coincidence that as choice has proliferated and standards have risen over the past few generations, so has the incidence of clinical depression and suicide.
What this means is that there is a critical level of choice. Some societies have too much, others patently too little. At the point at which there is too much choice in a critical proportion of our lives, our welfare is no longer improved. Too much choice is paralyzing and dissatisfying, and too little is impoverishing. We don't want perfect freedom, nor do we want the absence of it; somewhere there is an optimal threshold, and affluent, materialist societies have probably already passed it.
Our uniquely large pre-frontal cortex enables us to simulate experiences, allowing us to compare potential futures and make judgements based on these simulations. The difficulty in deciding which of several simulations we prefer arises because we are surprisingly poor at analyzing what makes us happy. Seemingly obvious questions such as 'would you prefer to become paraplegic or to win the lottery?' are complicated by the extraordinary fact that one year after each event, both groups report being equally happy with their lives. If a preference for one alternative over another is measured by its ability to confer happiness then, contrary to all of our impulses, there can be no rational preference in this example when considered over a sufficiently long time period, as there is no reported qualitative difference between the two levels of happiness after a single year.
This is a result of the impact bias, the tendency of our emotional simulator to overestimate the intensity of future emotional states, making us believe that the difference between two outcomes is greater than it really is. In short, things that we would unthinkingly consider important, like getting a promotion or not, passing or failing an exam, or gaining or losing a romantic partner, frequently have far less impact – of lower intensity and shorter duration – than we expect them to have. Indeed, an astonishing study published in 1996 found that, with very few exceptions, even major life traumas had no effect on subjective well-being if they had occurred more than three months earlier 1.
The reason for this remarkable ability is that our views of the world change to make us feel better about whatever environment we find ourselves in over a period of time. Everything is relative, and we make happiness where we would otherwise believe there to be none. To truncate a well-known quotation from Milton, “The mind is its own place, and in itself can make a heaven of hell”. Daniel Gilbert, Professor of Psychology at Harvard, calls this 'synthesizing happiness'.
Synthetic happiness differs from 'natural' happiness in that natural happiness is what we feel when we get what we wanted, and synthetic happiness is what we (eventually) feel when we don't get what we wanted. The mistake we make is believing that synthetic happiness is inferior to natural happiness. This mistake is perpetuated by a society driven by an economic system which relies on people believing that getting what you want makes you happier than not getting what you want ever could. We can resist this falsehood by remembering that we possess within ourselves the ability to synthesize the commodity that we always pursue, and that we consistently overrate the emotional differences between two choices.
- 1. Suh, Eunkook, Ed Diener, and Frank Fujita. "Events and Subjective Well-being: Only Recent Events Matter." Journal of Personality and Social Psychology 70.5 (1996): 1091-102. Print.
Optical illusions are a visual proof of a built-in irrationality in the way we reason. In some illusions we can be shown two lines of equal length and yet perceive one to be longer than the other. Even when we see visual proof that the lines are in fact the same length, it is impossible to overcome the sense that they are different – it is as if we cannot learn to override our intuitions. In the case of optical illusions, our intuition is fooled in a repeatable, predictable fashion, and there is not much we can do about it without modifying the illusion itself, either by measuring it or by obscuring some part of it.
Dan Ariely, a behavioural economist currently teaching at Duke University, reminds us that optical illusions are a big deal. Vision is one of the things we do best: we are evolutionarily designed to be good at it, and a larger part of our brain is dedicated to vision than to anything else. The fact that we make such consistent mistakes, and are repeatedly fooled by optical illusions, should be troubling. If we make mistakes in vision, what kind of mistakes will we make in the things we have no evolutionary reason to be good at? In new and elaborate environments like financial markets, we don't have a specialized part of the brain to help us, and we don't have a convenient visual illustration with which to easily demonstrate the mistakes we make. Is our sense of our own decision-making abilities ever consistently compromised?
Ariely suggests that we are victims of decision-making illusions in much the same way we are victims of optical illusions. When answering a survey, for instance, we feel that we are making our own decisions, but many of those decisions in fact lie with the person who designed the form. This is strikingly shown by the disparity in the percentage of people in different European countries who indicated that they would be interested in donating their organs after death, as illustrated by a 2004 paper by Eric Johnson and Daniel Goldstein. Consent rates in France, Belgium, Hungary, Poland, Portugal and Austria were over 99%, whilst the UK, Germany and Denmark all had rates below 20%. This huge difference didn't arise from strong cultural differences, but from a simple difference in the way the question on the form was presented. In countries with a low consent rate, the question was presented as an opt-in choice, as in 'Check the box if you wish to participate in the organ donor programme' – and people didn't check the box, leaving the form in its 'default' state. Those presented with the inverse question, an explicit opt-out rather than an explicit opt-in, also left the box unchecked. Both groups tended to accept whatever default the form tacitly suggested. The two types of form split the countries into sharply separated groups of consenting and non-consenting donors, with rates differing by nearly 60 percentage points as a direct result of how the question was phrased.
This is just one example of how we can reliably be led into making a choice that isn't really a choice at all, suggesting that our awareness of our own cognitive abilities isn't quite as complete as we would like. Like Laurie Santos, Ariely recognizes this in-built limitation, and stresses that the more we understand these cognitive limitations, the better we will be able to design and improve our world.
| 3.732643 |
Algebraic reasoning in grades two through five: Effects of teacher practices, characteristics and professional development
Algebra is a gatekeeper (Moses & Cobb, 2001). Long before students enter an Algebra I course in high school, the foundations for algebra are being developed in elementary school through algebraic reasoning. Research (Rowan, Chiang, and Miller, 1997) confirms a direct correlation (r=.03, p=.05) between teachers' content knowledge and student achievement in the learning and understanding of mathematics. The need for teachers to be well equipped to develop students' algebraic reasoning is apparent (Lambdin, 1999; Ma, 2000).
This study explores how teachers' practices, characteristics, and professional development relate to student achievement, and thereby seeks greater understanding and awareness of the role algebraic reasoning plays in teaching and learning at the elementary school level. The following two research questions directed this study: (1) To what extent and in what manner can variation in student achievement on problems involving algebraic reasoning be explained by teacher practices, characteristics and professional development? (2) What teaching practices focused on algebraic reasoning have the greatest impact on student achievement?
This study utilized a mixed-method research design that examined the classroom practices of Grades 2-6 elementary teachers (N=62) and their students (N=1550) in 17 urban and suburban schools in Rhode Island. Data were gathered through a participant questionnaire using a 1-5 point Likert-scale survey instrument developed by the researcher. Following the collection of the data, focus groups (n=18) were conducted with volunteer participants. The qualitative data obtained from the focus groups were analyzed by generating themes and patterns to describe the findings. Descriptive statistics (frequencies, percents, and means) were computed for each variable. Multiple regression analysis was used to determine the magnitude of the relationship between teaching practices and professional development and student achievement.
The findings of the study revealed that the variables used in the multiple regression – teaching practices and professional development – were not significantly related to student achievement (r=.055, p=.70). In addition, current professional development on algebraic reasoning is not meeting the needs of the teachers, and the connections between teacher knowledge/practices and algebra content require strengthening. The findings that emerged from the focus groups and from the open-ended questions on the questionnaire suggest teachers are not equipped to teach algebraic reasoning. These findings have recently been corroborated by the National Mathematics Advisory Panel (2008) report. Recommendations are made, as well as suggestions for future research specific to professional development and algebraic reasoning.
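For readers unfamiliar with the technique, a multiple regression of the kind described in the abstract can be run in a few lines of Python. The sketch below is illustrative only; the file name, column names and data are hypothetical stand-ins, not the study's actual instrument or dataset.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per teacher, with mean Likert-scale scores for
# teaching practices and professional development, plus mean student achievement.
df = pd.read_csv("teacher_survey.csv")   # placeholder file name
X = sm.add_constant(df[["teaching_practices", "professional_development"]])
model = sm.OLS(df["student_achievement"], X).fit()
print(model.summary())   # reports R-squared and p-values analogous to those in the abstract
```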
Education, Mathematics|Education, Elementary
Judith A Lundsten,
"Algebraic reasoning in grades two through five: Effects of teacher practices, characteristics and professional development"
(January 1, 2008).
Dissertation & Theses Collection.
| 3.219055 |
A Queen’s University study of fruit flies that may revolutionize the way birth defects are studied has identified the genes affected by a widely prescribed drug known to cause birth defects.
Methotrexate (MTX), a popular cancer-fighting drug also used to treat psoriasis, ectopic pregnancies, rheumatoid arthritis, and lupus, lasts a long time in the body and causes birth defects in children born to women who have it in their systems. The study of the drug’s effect on fruit flies has allowed Queen’s researchers, including graduate student Joslynn Affleck, to identify the genes on which the drug acts.
“We hope that through this model system we can provide insight into mammalian birth defects, which may be expected to increase in frequency in the future, due to the recent elevated use of MTX,” says Affleck.
Many of the genes found to be affected by MTX are involved in cell cycle regulation, signal transduction, transport, defense response, transcription, or various aspects of metabolism.
“This study shows that MTX treatment has multiple targets,” says Affleck. “And this provides us with a novel invertebrate model for the study of drugs that cause birth defects.” The findings are set to be published by Toxicological Sciences in the New Year.
“This is not a journal in the habit of publishing insect studies,” notes biologist Dr. Virginia Walker, who co-authored the study. “The neat thing about this work is that fruit flies treated with this drug show ‘birth defects’ that are hauntingly similar to birth defects in human babies. Babies have bent limbs, tufts of hair and bulging eyes and the fruit flies have bent legs (and wings), tufts of bristles and rough eyes.”
While identifying this gene array is significant in its own right, the successful use of fruit flies in this kind of study is a revelation to the researchers who view it as an efficient model for the initial testing of “rescue” therapies to try to prevent birth defects. Scientists can study the effect of the drug on the genes of as many as three generations of fruit flies in a month using readily available scientific tools, speeding up study times while keeping costs low.
“It also adds to the growing list of roles fruit flies can take,” says Walker. Fruit flies are already used as models for aging, neural disease and cancer.
From Queen's University
| 3.32485 |
Student Learning Outcomes
Students who complete the French Program will be able to:
- Communicate in a meaningful context in French.
- Analyze the nature of language through comparisons of the French language and their own.
- Demonstrate knowledge of and sensitivity to aspects of behavior, attitudes, and customs of France and other French speaking countries.
- Connect with the global community through study and acquisition of the French language.
| 3.516153 |
In information technology, a repository (pronounced ree-PAHZ-ih-tor-i) is a central place in which an aggregation of data is kept and maintained in an organized way, usually in computer storage. The term is from the Latin repositorium, a vessel or chamber in which things can be placed, and it can mean a place where things are collected. Depending on how the term is used, a repository may be directly accessible to users or may be a place from which specific databases, files, or documents are obtained for further relocation or distribution in a network. A repository may be just the aggregation of data itself into some accessible place of storage or it may also imply some ability to selectively extract data. Related terms are data warehouse and data mining.
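As a rough illustration of the "central store with selective extraction" idea, here is a generic in-memory sketch in Python; it is not tied to any particular product, and the class and method names are invented for the example.

```python
class Repository:
    """Minimal in-memory repository: a central store with selective extraction."""

    def __init__(self):
        self._items = {}                     # the aggregated data, kept in one place

    def add(self, key, item):
        self._items[key] = item

    def get(self, key):
        return self._items.get(key)

    def find(self, predicate):
        """Selectively extract items matching a condition."""
        return [item for item in self._items.values() if predicate(item)]

# Usage: store documents centrally, then pull out only the ones needed.
repo = Repository()
repo.add("doc1", {"type": "report", "year": 2023})
repo.add("doc2", {"type": "invoice", "year": 2024})
print(repo.find(lambda d: d["type"] == "report"))
```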
| 3.185645 |
Tuesday, January 29, 2013
A story plan is the first step in writing fiction.
A story plan will be a sketch and is for planning purposes. One way to get started is to think of a real incident that you or someone you know has experienced. Base your story plan on that incident, but change anything you wish if it makes your story more interesting or exciting.
Consider the following elements when you begin your story plan. You can skip around if you wish, but be sure to complete all the steps below.
Decide on the characters. Name the characters and describe their role in the story and their relationship to one another. For example: Marie Martin, heroine, secretary to the president of the bank. (If you are not ready to use names, just use descriptions: librarian, doctor, etc.)
Choose a setting. Decide when and where the events will take place. Be as specific as you can because that will help you when you begin your research. (For example: the South in the 60’s vs. Birmingham, Alabama in 1965.)
Decide on the main conflict in the story. What is the problem that your main character faces when the story begins? (For example: Marie Martin has been accused of stealing money from the bank?)
Decide on a series of events in the plot. Briefly describe what happens in a few sentences. (You can add to these, subtract from these, and rearrange these later.)
Determine the climax of the story. Describe the moment in the story after which nothing will be the same.
Determine the resolution of the story. What happens at the very end when the loose ends are tied up? (Some writers determine the ending first and work backwards from there.)
Here's hoping 2013 is off to a great start!
| 3.67344 |
Before people learned to make glass, they had found two forms of natural glass. When lightning strikes sand, the heat sometimes fuses the sand into long, slender glass tubes called fulgurites, which are commonly called petrified lightning. The terrific heat of a volcanic eruption also sometimes fuses rocks and sand into a glass called obsidian. In early times, people shaped obsidian into knives, arrowheads, jewelry, and money. We do not know exactly when, where, or how people first learned to make glass. It is generally believed that the first manufactured glass took the form of a glaze on ceramic vessels, about 3000 B.C. The first glass vessels were produced about 1500 B.C. in Egypt and Mesopotamia. The glass industry was extremely successful for the next 300 years, and then declined. It was revived in Mesopotamia in the 700s B.C. and in Egypt in the 500s B.C. For the next 500 years, Egypt, Syria, and the other countries along the eastern shore of the Mediterranean Sea were glassmaking centers.
Early glassmaking was slow and costly, and it required hard work. Glass blowing and glass pressing were unknown, furnaces were small, the clay pots were of poor quality, and the heat was hardly sufficient for melting. But glassmakers eventually learned how to make colored glass jewelry, cosmetics cases, and tiny jugs and jars. People who could afford them—the priests and the ruling classes—considered glass objects as valuable as jewels. Soon merchants learned that wines, honey, and oils could be carried and preserved far better in glass than in wood or clay containers.
The blowpipe was invented about 30 B.C., probably along the eastern Mediterranean coast. This invention made glass production easier, faster, and cheaper. As a result, glass became available to the common people for the first time. Glass manufacture became important in all countries under Roman rule. In fact, the first four centuries of the Christian Era may justly be called the First Golden Age of Glass. The glassmakers of this time knew how to make a transparent glass, and they did offhand glass blowing, painting, and gilding (application of gold leaf). They knew how to build up layers of glass of different colors and then cut out designs in high relief. The celebrated Portland vase, which was probably made in Rome about the beginning of the Christian Era, is an excellent example of this art. This vase is considered one of the most valuable glass art objects in the world.
In the 1500s, the Dutch developed ways to make custom eyeglasses as well as lenses, which led to the first microscopes and telescopes.
The first glass factory in the United States was built in Jamestown, Virginia, in 1608. The venture failed within a year because of a famine that took the lives of many colonists. The Jamestown colonists tried glassmaking again in 1621, but an Indian attack in 1622 and the scarcity of workers ended this attempt in 1624. The industry was reestablished in America in 1739, when Caspar Wistar built a glassmaking plant in what is now Salem County, New Jersey. This plant operated until 1780.
In 1903, the first fully automated glass bottle making machine was used in Toledo, Ohio.
Glass has been a major part of many revolutionary inventions. Had it not been for glass, we could be living in a world with no thermometers, televisions or light bulbs.
Cool Info About Glass
A process called “vitrification” can turn nuclear waste into hard glass blocks for long-term storage.
Glass takes 1,000,000 years to decompose.
Glass never wears out – it can be recycled forever.
Glass recycling saves resources – each ton of recycled glass replaces 1.2 tons of raw material (sand, limestone and soda ash).
Glass: a transparent inorganic material produced by combining silica sand with burnt lime or limestone and soda ash.
Silica sand: a pure form of silicon dioxide that is the most common ingredient in glass manufacturing.
Soda ash: also known as sodium carbonate, this is an ingredient in glass manufacturing. It helps sand melt at a lower temperature.
Glasphalt: similar to asphalt, but it contains ground glass instead of gravel.
Limestone: a type of rock that is blended with soda ash in glass manufacturing to stabilize the glass so it will not dissolve in water.
When the Model T Ford car was first introduced, the glass windscreen was an optional extra.
Bulletproof glass is made of several layers called laminating. In between the glass is a polycarbonate material that absorbs the energy of what has been fired at it. The thicker the glass, the higher impact it can withstand. There is even one-way bulletproof glass enabling the target victim to shoot back.
If glass had not been invented, windows would not have come about – so what would your PC operating system be called?
What is Glass?
Glass is not a crystalline solid, but a random jumble of molecules. Because of its random structure it does not have a clear melting temperature and there is no temperature where it can be said to be definitely solid. Instead, it gradually becomes harder as it cools.
Historians have pointed out that the glass in some centuries-old windows is thicker at the bottom, as if the glass had slumped over the years.
Scientists still debate whether room-temperature glass is a solid or some other state of matter. But according to one study, to see the flow of cool glass we would have to wait ten billion times the age of the universe. So those ancient windows are thicker at the bottom because they were made that way, not because of any later flowing of the glass.
| 3.693737 |
Hugh Pickens writes "Until recently, geothermal power systems have exploited only resources where naturally occurring heat, water, and rock permeability are sufficient to allow energy extraction, but now geothermal energy developers plan to use a new technology called Enhanced Geothermal Systems (EGS) to pump 24 million gallons of water into the side of the dormant Newberry Volcano, located about 20 miles south of Bend, Oregon, in an effort to use the earth's heat to generate power. "We know the heat is there," says Susan Petty, president of AltaRock Energy, Inc. of Seattle. "The big issue is can we circulate enough water through the system to make it economic." Since natural cracks and pores do not allow economic flow rates, the permeability of the volcanic rock can be enhanced with EGS by pumping high-pressure cold water down an injection well into the rock, creating tiny fractures in the rock, a process known as hydroshearing. Cold water is then circulated through the fractured reservoir and drawn back out through production wells as steam. Natural geothermal resources only account for about 0.3 percent of U.S. electricity production, but a 2007 Massachusetts Institute of Technology report projected EGS could bump that to 10 percent within 50 years, at prices competitive with fossil fuels. "The important question we need to answer now," says USGS geophysicist Colin Williams, "is how geothermal fits into the renewable energy picture, and how EGS fits. How much it is going to cost, and how much is available.""
| 3.174656 |
Here are the penultimate penguins: Jackass penguin (Spheniscus demersus) and Galapagos penguin (S. mendiculus). The Jackass penguin is also known as the black-footed penguin, or the African penguin. Penguins, in Africa? Are you bonkers? What’s next, polar bears in the Sahara? No, don’t be silly. There is a cold-water current responsible for this bird’s distribution, the Benguela Current, bringing nutrient-rich cold water from the Southern Ocean to the south-west Atlantic via South Africa and Namibia.
The Galapagos penguin is the most northerly species of penguin; it is even found at the Equator! This is even crazier than the idea of an African penguin. Why would a classically cold-climate bird be found in the Tropics? The answer also lies in the ocean currents: cold waters from the Antarctic flow up the Pacific coast of South America towards the Galapagos Islands, bringing nutrients. This current, the so-called Humboldt Current, also gives its name to another species of penguin from the coast of South America, covered in the next post. As a result of warmer air temperatures, the penguins of the Galapagos, southern Africa and other warm places are smaller. Other than the little blue penguin of Australia and New Zealand, the four penguins of the genus Spheniscus are the smallest.
Spheniscus demersus (Linnaeus, 1758)
Adult Jackass penguin (Spheniscus demersus)
Distribution: southern Africa from 24o38’S to 33o50’S; vagrant to other parts of Africa.
Size: 70 cm (27½”); males and females weigh 2.4-4.2 kg (5 lb 5 oz – 9 lb 9 oz), with males larger than females.
Habitat: breeds on Benguela Current influenced coasts, in burrows with suitable substrate or using bushes and boulders as shelter.
Diet: small fish, cephalopods (such as squid), crustaceans and polychaete worms.
Etymology: Spheniscus = “little wedge” in Greek; demersus = “diving” in Latin.
Immature Jackass penguin (Spheniscus demersus)
Spheniscus mendiculus Sundevall, 1871
Adult Galapagos penguin (Spheniscus mendiculus)
Distribution: restricted to the Galapagos Islands; breeds on Fernandina and Isabela, and maybe on Bartholomew and Santiago Islands; non-breeding range extends to other islands of the archipelago.
Size: 53 cm (21”); the smallest Spheniscus penguin; males weigh 1.7-2.6 kg (3 lb 11 oz – 5 lb 11 oz); females weigh 1.7-2.5 kg (3 lb 11 oz – 5½ lb).
Habitat: low-lying volcanic coastal desert.
Diet: fish (such as mullet and sardine) and crustaceans (such as krill).
Etymology: Spheniscus = as S. demersus; mendiculus = “little beggar” in Latin.
Immature Galapagos penguin (Spheniscus mendiculus)
Oh, and while we're on the topic of Galapagos... if you're in London for the next few months, make sure you visit the new exhibition at the Natural History Museum, Darwin: The Big Idea. There is a stuffed Galapagos penguin there, as well as numerous other animals and plants from the archipelago, and two live animals: Charlie the green iguana (Iguana iguana) and Sumo the Argentine horned frog (Ceratophrys ornata). Not to mention a lot of original material from Charles Robert Darwin's epic voyage that first sparked his theory of evolution by natural selection.
| 3.447582 |
Modern Wallace Medium Weight Tartan. In the 12th century Richard Wallace obtained lands at Riccarton, Ayrshire, and his son, Henry, later acquired lands in Renfrewshire. From Henry descended Sir Malcolm Wallace of Elderslie who was the father of Sir William Wallace, the Scottish Patriot. The Wallaces refused to do homage to Edward I of England and Sir William, who led a band of patriots, harried the English, and his constant raids on their fortresses made them hate and fear him. His bravery and leadership inspired others to support his struggle for Scottish independence. He was betrayed to the English and taken to London, where he was executed in 1305.
| 3.18419 |
The year is 1565. On the island of Malta, 600 Knights of St. John, commanding a force of some 8000 men, prepare to defend their island fortress from attack.
These same Catholic Knights had been driven from their previous stronghold, the Isle of Rhodes, in 1522, by the Ottoman Turks. Under Suleyman the Magnificent, the Moslems were pressing hard across Arabia, Syria, Iraq, into Egypt and northern Africa, and had established a strong foothold on the north coast of the Black Sea, the gateway to all of Europe itself. In 1526, the Hungarians had been defeated at the Battle of Mohacs, and only the Austrian Habsburgs now stood in the way of the Moslem advance. Vienna came under attack in 1529, but the Moslems were unable to take the capital, and their over-extended campaign failed.
Now, the Turks had raised a fleet of 181 ships, carrying some 30,000 soldiers, and Malta was the prize they sought. Their goal was to plunder and sweep all the ships of Christian Europe from the Mediterranean. Then, in control of the sea lanes and trade routes, with their naval and economic power supreme, all of Europe would be set to fall before them.
Our Lord and Our Lady, top left, blessing the Catholic fleet at Lepanto
The Turkish fleet appeared off the coast of Malta, and laid siege to the island. All through the summer of 1565 the contest for Malta raged. In the end, the Knights of St. John (Knights of Malta) were victorious, and the Turks were forced to withdraw in defeat. It did not, however, end the threat from the Ottoman Turks.
In 1566, Pius V ascended to the Chair of St. Peter in Rome. Pius V was a Dominican Monk with a reputation for piety and austerity. A teacher of philosophy and theology for 16 years, unlike some previous Popes, he was a humble man who continued to lead the ascetic life of a simple monk even after becoming Pope.
Pius V was also very serious about defending Christendom against the Ottoman Turks. He knew they were not just going to go away and leave Europe in peace. Vienna and the eastern borders continued to be threatened by Moslem military power and incursions, and the Papal States themselves could soon be at risk. Cyprus came under attack again in 1570. Seeing the increasing danger to Christendom, Pius V called on "The Holy League," consisting of the Papal States, Spain, Genoa, Venice, and the Knights of Malta, to address the Moslem threat.
A Christian naval fleet was assembled under the overall command of Admiral Don John of Austria. Although young (in his twenties), Don John was a capable naval commander. The Spaniards were led by Santa Cruz, the Genoese by Andrea Doria, and the Venetians by Agostin Barbarigo and Sebastian Veniero. The fleet under Don John's command was some 300 ships strong, with over 100 ships and 30,000 men being supplied by Philip II of Spain alone. The Pope personally outfitted and supplied 12 Papal galleys, and provided funding for many of the others as well. The Venetian contingent was around 100 ships, manned in part by additional Spanish soldiers. In the Venetian fleet were six galleasses. Heavier, broader, and much slower than conventional galleys, they were nonetheless technologically advanced - the heavy gun platforms and battleships of their day. In total, over 50,000 men served the fleet as rowers, and another 30,000 were fighting soldiers.
Don Juan of Austria, Chief Commander of the Holy League's fleet, inflicted the largest naval defeat on the Muslims in History
In September of 1571, Don John moved the Catholic fleet east to intercept the Turks at Corfu, but the Turks had already landed, terrorized the population, and then moved on. While anchored off the coast of Cephalonia, news reached Don John that the Christian stronghold at Famagusta on Cyprus had fallen to the Turks, with all prisoners being tortured and then executed by the Moslems.
Don John then pulled up anchor and moved to engage the Turkish fleet in the Gulf of Lepanto, off the southern coast of Greece. The Turkish fleet, some 330 ships strong, under the command of Ali Pasha, had been reinforced by Uluch Ali, the Bey of Algiers, and head of the notorious band of Moslem corsairs (pirates) that had long terrorized Catholic ships in the Mediterranean.
On the night of October 6, with a favorable wind behind him, Ali Pasha moved his fleet westward toward the mouth of the Gulf of Patras to intercept the approaching ships of the Holy League. The clash that was to come would be the largest naval engagement since the Battle of Actium in 31 B.C.
At dawn, on October 7, 1571, the two fleets met. Don John split his fleet into three sections: on the left (north), the Venetians under Agostin Barbarigo; on the right (south), Andrea Doria led the Genoese and Papal galleys; in the center, Don John commanded his flagship and galleys. Santa Cruz, with a force of 35 Spanish and Venetian ships, was held in reserve. He ordered his captains not to fire until “close enough to be splattered with Moslem blood.” The iron rams were removed from the Christian ships, as the plan was for boarding and close quarter fighting. Two of the large Venetian galleasses were towed into position in front of each of the three Christian divisions.
Don Juan of Austria in battle, at the bow of the ship, painted by Juan Luna y Novicio
Ali Pasha's fleet approached in a giant crescent formation and, seeing the opposing ships, he too ordered his force split into three divisions. Ali Pasha himself took up the middle position opposite Don John, and charged forward to engage Don John's ships. The Venetian galleasses opened fire, and almost immediately eight Moslem ships were hit and began to sink. The Catholic galleys, their decks filled with soldiers, opened fire with arquebuses and crossbows as the Moslem ships drew alongside. Ali Pasha's men attempted to board the Catholic ships, but the Spanish soldiers were experienced and well disciplined. Attack after attack was beaten back with deadly shots from their crossbows and arquebuses.
Don John ordered the ship of Ali Pasha to be boarded and taken. Two times the boarding attack of the Spanish soldiers was beaten back, but on the third attempt they swarmed over the deck, now awash in blood, and took the ship. Ali Pasha was captured and beheaded on the spot (against the wishes of Don John), and the Battle Flag of the Ottoman Fleet came down off the mainmast. The head of the Turkish admiral was spitted on a long pike and raised on high for all the enemy ships to see. The Turkish attack in the center collapsed, and Don John sent his ships in pursuit of the retreating Turks, and also turned to aid in the battles raging on his flanks.
Fresco of the Lepanto battle plan by Antonio Danti
On the Catholic right, Uluch Ali and his pirates had broken through Doria's lines and managed to capture the flagship of the Knights of St. John. Santa Cruz, seeing what had happened, came up to the rescue, and Uluch Ali was forced to abandon his prize. The Genoese were in a fight for their lives with the remainder of Uluch Ali's ships, but after Don John had broken the enemy fleet in the center, he turned and came to the aid of the Genoese. The Algerian corsairs were finally overcome, and fled for their lives in full retreat.
Admiral Mahomet Sirocco, commanding the Turkish right (on the Catholic left), sailed close to the rocks and shallows on the northern shore of the gulf and was able to outflank Barbarigo's Venetian galleys. Barbarigo's flagship was surrounded by eight enemy galleys, and the Catholic Admiral fell dead from Turkish arrows. His flagship was taken for a time, but aid finally arrived, and Sirocco's flagship galley was sunk. The Turkish admiral was yanked out of the water, and, like Ali Pasha, killed right on the spot.
The engagement lasted, in total, around four to five hours. When it was all over, 8,000 men who had sailed with Don John were dead and another 16,000 wounded. The Turks and Uluch Ali's corsairs had over 25,000 dead, and untold thousands more wounded and captured. Over 12,000 Catholic galley slaves had also been rescued from the Moslems. The Venetian galleasses had taken a heavy toll on the Turkish fleet. It was a major victory for the Holy League and Christendom.
At dawn, on October 7, 1571, as recorded in the Vatican Archives, Pope Pius V, accompanied by a group of the faithful, entered the Basilica of Santa Maria Maggiore to pray the Rosary and ask Our Lady to intercede for a Catholic victory. The prayers continued in Rome as the Catholic and Moslem fleets battled far away in the Gulf of Lepanto. Later in the day, the Pope is said to have suddenly interrupted his business with some Cardinals, and looking up, cried out,
Saints Peter, Roch, Justine and Mark ask Our Lady for the Catholic fleet - Paolo Veronese
"A truce to business! Our great task at present is to thank God for the victory which He has just given the Catholic army."
The Pope, of course, had no way of knowing that the battle was taking place and being decided on that very day. (No cell phones in 1571!)
When news of the victory finally reached Europe, church bells rang out in cities all across the continent. The Battle of Lepanto was a decisive victory, with only 40 of the over 300 Moslem ships surviving the engagement. The Turkish force of some 75,000 men was in ruins.
The battle, although a great victory for Catholic Europe, did not end the threat of invasion, or completely break the power of the Ottoman Turks. More naval and land battles would follow in the years to come, and Vienna itself would come under attack again, and yet again.
Today, the long clash between Christendom and Islam is still evident in the political and ethnic geography of Europe, Africa, Byzantium, and north into Russia. The battle also extends, in varying degrees, throughout the Near and Far East, and the Islands of the Pacific as well.
Many Christian knights, soldiers, and sailors have died defending Christendom against the onslaughts of Islam down through the centuries. Today, the borders of many European countries, Canada, and the United States are practically wide open, and the old enemy is invited to come in and make himself at home. And many 'Christians' in the West are just too busy enjoying their material prosperity to be bothered with unpleasant history.
But the enemy has not forgotten history. He remembers it all too well, and he is still deadly serious about his religion. His goal over the years has not changed in the slightest, and he is very patient. The enemy within is now smiling, just biding his time.
And long dead Christian knights, our ancestors in the Faith, are probably turning over in their graves right about now, trying desperately to shout out a warning. The final chapter, it seems, has yet to be written...
The Battle of Lepanto, October 7, 1571
| 3.038638 |
When Madison Minton was six months old, her parents noticed that her breathing was frequently labored. Now in second grade, the child is on eight medications for asthma and other pulmonary ailments.
“Madison’s situation is typical,” says Deborah Payne, Energy and Health Coordinator of the Kentucky Environmental Foundation. “People in Eastern Kentucky often don’t have the financial capacity to move away so they live with the consequences of being downwind of a coal processing plant. This means that Madison is exposed to high quantities of dust every single day.”
Payne calls coal mining “one piece of the birth defect puzzle” and says that at every stage, coal is problematic, from its extraction, to its processing, transport, and eventual burning. “At each step there are negative health consequences for adults, children, and fetal life,” she continues.
And it’s gotten worse. As mountaintop removal [MTR] has horned in on underground mining, the health maladies of residents of eastern Kentucky, southwest Virginia, eastern Tennessee, and southwest West Virginia—Appalachia—have begun to pile up.
Here’s why. MTR requires the use of explosives to reach coal streams, a process that makes it even more perilous than underground mining. According to Physicians for Social Responsibility [PSR], MTR blasts release selenium, iron, and aluminum into the air. Selenium is particularly hazardous, PSR says, because it accumulates in tissue where it can cause circulatory, kidney, liver, and nerve damage.
But that’s not all: Later, chemically treated liquids are used to wash the coal and, more often than not, this brew ends up in groundwater. Even more frightening, a group called Appalachian Mountain Advocates estimates that when the time comes to turn coal into electricity, arsenic, cadmium, lead, and mercury -- in the form of coal ash -- gets spewed into the oxygen we breathe and the water we drink.
Not surprisingly, this hasn’t fazed the coal companies. In fact, by all accounts, MTR has been a boon for them, allowing the removal of two-and-a-half times more coal per hour than traditional underground mining. Already, the rush to extract -- four million tons a year is taken from each coal-rich county -- has cleared nearly 2200 square miles of forests, reduced 470 mountain summits to rubble, buried 2000 miles of streams, and damaged the ecosystems needed by fish and wildlife.
Still, it is the human toll that is causing the lion’s share of brow furrowing. A first-of-its-kind study released in June 2011 -- “The Association Between Mountaintop Mining and Birth Defects Among Live Births in Appalachia, 1996-2003” -- brought six scientists together to analyze more than 1.8 million hospital birth records for the central portion of the region.
Their findings should jolt both advocates of reproductive justice and those who purport to support the right to life. Indeed, the scientists were cautious, recognizing that most birth defects come from a mix of genetic and environmental factors. Nonetheless, in areas where MTR is used, health abnormalities -- including spina bifida, heart, lung, and genital malformations, cleft palate, hydrocephalus, and club feet -- greatly exceeded defects in non-MTR areas: 235 per 10,000 versus 144. The study also found that living in an area with mountaintop removal increases the chance of having a child with a circulatory defect by 181 percent.
Adults, researchers say, also suffer. Numerous health surveys have confirmed that adults living in areas where there is mountaintop removal have significantly more illnesses than others of comparable age -- with high rates of diabetes, Chronic Obstructive Pulmonary Disease, asthma, liver disease, hypertension, heart problems, and kidney failure. Factor in poverty -- which affects nearly a third of Appalachia’s residents -- and it’s hard not to despair.
Despite these realities, scientists say that they still have a lot to learn about the risks associated with MTR. “We have not yet investigated the health of pregnant women,” says Dr. Michael Hendryx, Director of the West Virginia Rural Health Research Center. “We know that at certain times during pregnancy there is a greater risk of toxins passing through the placenta. That has to be studied. Throughout Appalachia we hear stories about kids developing cancer at early ages, having asthma and other serious respiratory symptoms, getting frequent rashes and skin blisters. We also hear about kids with digestive and dental problems, kids losing their adult teeth while they’re teenagers. If they drink water from a well that water is usually not treated and we suspect that it is tainted by chemicals that come off a mining site and then rot their teeth. Other people have different kinds of air-related problems. In some places people have to wipe a thick layer of coal dust-- it comes from the processing plants--off their furniture every day or two. The health problems vary depending on what people are exposed to -- but they need to be documented and then analyzed.”
That said, some facts are incontrovertible: For one, coal-mining communities experience significantly more birth defects than communities where mining doesn’t occur. Secondly, adults living in MTR districts are, on average, sicker than adults who live elsewhere.
So what to do? Coal is currently responsible for generating almost half of the electricity used in the US, something that is unlikely to change unless viable alternatives are developed. At the same time, the companies that see coal as a cheap and abundant fossil fuel need to be reminded that there is nothing cheap about human health.
When I was a kid my grandmother frequently repeated a phrase that I found ridiculous: “If you have your health, you have everything.” Who would have imagined that, years later, that truism would resonate.
| 3.186772 |
Cancer and Your Pets: What You Need to Know
Almost everyone has known a friend or loved one who has been affected by cancer. While cancer in humans is definitely prevalent, our pets are also afflicted with this disease.
According to Dr. Heather Wilson, assistant professor of oncology at the Texas A&M University College of Veterinary Medicine and Biomedical Sciences, 50 percent of all dogs and 30 percent of all cats over the age of 10 will be diagnosed with some form of cancer.
the lymph nodes), mast cell tumors (skin tumors), and osteosarcoma
(tumor of the bones). Some common types of cancer in cats are:
lymphoma, squamous cell carcinoma (which affects the head, neck and
mouth), and vaccine associated sarcomas.
"Cats are not nearly as prone to cancer as dogs, but one of the
most common cancers in cats comes from vaccine injection sites,"
notes Wilson. "While you can pick and choose some vaccinations,
rabies vaccinations are required by law. However, there is a
non-adjuvanted rabies vaccine for cats that is less irritating,
thus less likely to cause cancer and is available at most
The type of cancer your pet has can also be closely associated with its breed. In dogs, lymphoma is most common in Golden Retrievers, Boxers, and Labs. Mast cell tumors are common in dogs with short noses such as boxers, pugs, and bulldogs. Large breed dogs such as Rottweilers and Great Danes are more prone to osteosarcoma.
"There is very little distinction across breeds when it comes to
cancer in cats," states Wilson. "However, cancer most commonly
affects the Siamese breed of cats."
Once your veterinarian has diagnosed your pet with cancer, you will then want to find a veterinary oncologist in your area who specializes in your pet's specific cancer.
"There are veterinary oncologists that specialize in medical
oncology and radiation oncology. There are also surgeons that
specialize in surgical oncology," explains Wilson. "The best way to
find a medical oncologist in your area is to go to the American
College of Veterinary Internal Medicine (ACVIM) website at
Treatment options include chemotherapy, surgery, radiation therapy, and immunotherapy, and are administered depending on the type and severity of the cancer.
"Chemotherapy is the number one treatment option for animals
with lymphoma," says Wilson."While cure rates in dogs vary greatly
with the type of cancer, overall response rates for dogs with
lymphoma treated with the CHOP chemotherapy protocol (a multidrug
protocol given weekly over 19 weeks) is greater than 80
Response rates for dogs with mast cell tumors vary depending on the grade, but with complete surgical excision plus radiation for low-grade tumors the control rate is often greater than 80 percent at three years.
"Unfortunately, the majority of dogs with osteosarcoma and
metastatic disease do not achieve a cure," states Wilson. "Also,
most cancers in cats are also very hard to cure. When we do achieve
remission in cats with vaccine associated sarcomas, they often live
18-24 months before they have a recurrence."
Cost is another important thing to consider when deciding on the
treatment of an animal for cancer. While costs range widely, the
average cost of surgery is $2,000-$3,000; a chemotherapy regimen is $1,200-$3,000; and radiation averages $3,000.
"As cost is prohibitive to some families, a good option may be
to enter your pet into a clinical trial if possible," notes Wilson.
"Many of these trials have a financial incentive such as a free
treatment regimen, and they also help with future research for both
veterinary and human oncology."
For more information on clinical trials at Texas A&M
University's College of Veterinary Medicine, go to vetmed.tamu.edu/clinical-trials.
While cancer in pets can be extremely stressful for owners, the
good news is that, with the resources and specialists now available to treat cancer in pets, owners have the
power to make informed and responsible decisions to get their
beloved pets through this illness.
About Pet Talk
Pet Talk is a service of the College of Veterinary Medicine
& Biomedical Sciences, Texas A&M University.
Stories can be viewed on the Web at http://tamunews.tamu.edu/.
Suggestions for future topics may be directed to [email protected]
Angela G. Clendenin
Director, Communications & Public Relations
Ofc - (979) 862-2675
Cell - (979) 739-5718
| 3.04541 |
In June every year, the US celebrates Father’s Day to honor fathers, who take care of their families and contribute so much to their growth. Father’s Day was first celebrated in 1910 in Spokane, Washington, after the persistent efforts of a young girl named Sonora Dodd. Sonora was raised by her father alone after her mother died. When Sonora heard a sermon on Mother’s Day, she thought that fathers too deserve to be recognized for their efforts and that a day should be declared as Father’s Day in their honor. So it all started because one little girl thought it was important to celebrate fathers’ efforts, and the idea spread across the world.
Fathers contribute so much to the lives of their children and family. With guidance from their father, young kids gain the strength to face the world and reach their best. Sometimes it may be difficult to appreciate a father’s role in raising a family: how, despite working so hard and keeping so little for himself, does an ordinary father give rise to such great sons and daughters? What a great father Karam Chand Gandhi must have been to give the nation and the world Mohandas Karamchand Gandhi, who became the Father of the Nation. He was a loving father; when the child was unable to grasp right from wrong, he made him realize what truth is and elevated him to reach the heights of humanity. Fathers lead with empathy, willing to train their children, guide them, and mentor them, paying the ultimate price till the end, until they breathe their last breath and reach Him. When the baby is taking its first steps, the father is elated, and when the baby utters its first words, dada or papa or nana or baaba, the father’s happiness knows no bounds.
It is a great thing to be a father, because that is when a man sees his own creation in flesh and blood and loves that child immensely.
Sometimes, it is also the child that teaches a mom or a dad. When the darling child is sleeping in the arms of its papa, it is hard for the father to lift the child off his chest and lay it down in the bed or crib. He waits until the child is so soundly asleep that it will not wake up and get disturbed. The child teaches him to be patient and more understanding of the needs of his offspring.
When a child is ill with fever or wheezing, fathers rush them to the doctor, give them the needed care, and may carry them in their arms all night long, without worrying about having to attend meetings the next day, or teach, or go to the office, or design bridges and have them constructed, or do whatever it is they do for a living. They do all this and also take care of the home.
When the mom is ill, it is once again the father to the rescue in single families. He kneads flour, makes bread, cooks meals, keeps the home spic and span, and yet he attends to his work without so much as a complaint, especially when the kids are young.
This kind father does not remain young and able forever. He, the invincible hero in the eyes of his kids, does grow old. He reaches his time, and kids should remember that it is not always possible for their dad to be as active and alert as he once was.
He might grow old, unable to recognize even his own kids, and may become dependent on others for every little thing. Like a cycle, life goes on; the grandchild develops a bond with the granddad, and life comes full circle.
I remember listening to my dad when he spoke: “Talking to others is like a penance.” It is not right to raise your voice and lose your temper with parents, especially with mom. If one cannot control their temper because the other person is testing their nerves, they have to remember that their temper should not be governed by others but by their own thinking. One can control one’s own thinking and not lose balance, because it is one’s own attitude that one can control. He would always tell us the saying from Swamy Dayanand Saraswathi ji, “Satyam bruyath, Priyam bruyath; Na bruyath satyam apriyam”.
When I think about talking in a pleasing manner, as in the sayings of Swamy Dayanand Saraswathi ji, I recall that in May 2012 a very well-known singer, Dr. Ghazal Srinivas, paid a special visit to Houston, TX. He presented Houstonians with his melodious music that was almost divine in nature. Dr. Ghazal Srinivas garu was singing with a message, “Devalayo Rakshati, Rakshitah,” meaning ‘Temples protect, when protected’. He sang many compositions on that theme, but the best part of it was the song about Dad. Dad is not given his due love either in families or in culture and society, mother taking that place! However, dad’s place is irreplaceable, especially in guiding the children, making them socially responsible, and raising them as good citizens. When a dad raises the baby into the air and holds it, he is not only raising the baby but also raising it above himself, so it can grow to be a greater person than he is. When Dr. Ghazal Srinivas was singing this song, I was so touched that one of the members of the audience asked me if I was remembering my dad. I said, “yes indeed”. It was just this year in February that my father reached the Divine Father, and until I heard that song, I never realized that when parents raise their child into the air and hold them, it is because they want to see their child in a higher position than themselves. It was an eye-opener regarding the sentiments about fathers. The happiness Dr. Ghazal Srinivas showed was greatest when he let his daughter take the stage and present us with her melody. He is a proud father, training his daughter to reach great heights with a sound foundation in classical music. He stands tall as an exemplar to this society.
Happy Father’s Day to all the fathers out there, who are doing their best!
- Uma Pochampally
| 3.125209 |
CAMBRIDGE, Mass. -- Following the 1997 creation of the first laser to emit pulsed beams of atoms, MIT researchers report in the May 16 online version of Science that they have now made a continuous source of coherent atoms. This work paves the way for a laser that emits a continuous stream of atoms.
MIT physicists led by Professor Wolfgang Ketterle (who shared the 2001 Nobel Prize in physics) created the first atom laser. A long-sought goal in physics, the atom laser emitted atoms, similar in concept to the way an optical laser emits light.
"I am amazed at the rapid progress in the field," Ketterle said. "A continuous source of Bose-Einstein condensate is just one of many recent advances."
Because the atom laser operates in an ultra-high vacuum, it may never be as ubiquitous as optical lasers. But, like its predecessor, the pulsed atom laser, a continuous-stream atom laser may someday be used for a variety of applications in fundamental physics.
It could be used to directly deposit atoms onto computer chips, and improve the precision and accuracy of atomic clocks and gyroscopes. It could aid in precision measurements of fundamental constants, atom optics and interferometry.
A continuous stream laser could do all of these things better than a pulsed atomic laser, said co-author Ananth P. Chikkatur, a physics graduate student at MIT. "Similar to the optical laser revolution, a continuous stream atom laser might be useful for more things than a pulsed laser," he said.
In addition to Ketterle and Chikkatur, authors include MIT graduate students Yong-Il Shin and Aaron E. Leanhardt; David F. Kielpinski, postdoctoral fellow in the MIT Research Laboratory of Electronics (RLE); physics senior Edem Tsikata; MIT affiliate Todd L. Gustavson; and David E. Pritchard, Cecil and Ida Green Professor of Physics and a member of the MIT-Harvard Center for Ultracold Atoms and the RLE.
A NEW FORM OF MATTER
An important step toward the first atom laser was the creation of a new form of matter - the Bose-Einstein condensate (BEC). BEC forms at temperatures around one millionth of a degree Kelvin, a million times colder than interstellar space.
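To get a rough feel for where that number comes from, the textbook condensation temperature of a uniform, non-interacting Bose gas is

T_c = \frac{2\pi\hbar^2}{m k_B} \left( \frac{n}{\zeta(3/2)} \right)^{2/3}, \qquad \zeta(3/2) \approx 2.612,

where n is the number density of atoms and m is the atomic mass. Plugging in sodium-23 (the species Ketterle's group has typically used) and a peak density of about 10^14 atoms per cubic centimeter - a typical value for such traps, assumed here rather than quoted from the article - gives a T_c of roughly 1.5 microkelvin, consistent with the "millionth of a degree" figure above.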
Ketterle's group had developed novel cooling techniques that were key to the observation of BEC in 1995, first by a group at the University of Colorado at Boulder, then a few months later by Ketterle at MIT. It was for this achievement that researchers from both institutions were honored with the Nobel prize last year.
Ketterle and his research team managed to merge a bunch of atoms into what he calls a single matter-wave, and then used fluctuating magnetic fields to shape the matter-wave into a beam much like a laser.
To test the coherence of a BEC, the researchers generated two separate matter-waves, made them overlap and photographed a so-called "interference pattern" that only can be created by coherent waves. The researchers then had proof that they had created the first atom laser.
Since 1995, all atom lasers and BEC have been produced in a pulsed manner, emitting individual pulses of atoms several times per minute. Until now, little progress has been made toward a continuous BEC source.
While it took about six months to create a continuous optical laser after the first pulsed optical laser was produced in 1960, the much more technically challenging continuous source of coherent atoms has taken seven years since Ketterle and colleagues first observed BEC in 1995.
A NEW CHALLENGE
Creating a continuous BEC source involved three steps: building a chamber where the condensate could be stored in an optical trap, moving the fresh condensate and merging the new condensate with the existing condensate stored in the optical trap. (The same researchers first developed an optical trap for BECs in 1998.)
The researchers built an apparatus containing two vacuum chambers: a production chamber where the condensate is produced and a "science chamber" around 30 centimeters away, where the condensate is stored.
The condensate in the science chamber had to be protected from laser light, which was necessary to produce a fresh condensate, and also from hot atoms. This required great precision, because a single laser-cooled atom has enough energy to knock thousands of atoms out of the condensate. In addition, they used an optical trap as the reservoir trap, which is insensitive to the magnetic fields used for cooling atoms into a BEC.
The researchers also needed to figure out how to move the fresh condensate - chilled to astronomically low temperatures - from the production chamber to the science chamber without heating them up. This was accomplished using optical tweezers - a focused laser light beam that traps the condensate.
Finally, they moved the new condensate in the optical tweezers into the science chamber and merged it with the condensate already stored there.
A BUCKET OF ATOMS
If the pulsed atom laser is like a faucet that drips, Chikkatur says the new innovations create a sort of bucket that collects the drips without wasting or changing the condensate too dramatically by heating it. This way, a reservoir of condensate is always on hand to replenish an atom laser.
The condensate pulses are like a dripping faucet, where the drops are analogous to the pulsed BEC production. "We have now implemented a bucket (our reservoir trap), where we collect these drips to have continuous source of water (BEC)," Chikkatur said. "Although we did not demonstrate this, if we poke a hole in this bucket, we will have a steady stream of water. This hole would be an outcoupling technique from which we can produce a continuous atom laser output.
"The big achievement here is that we have invented the bucket, which can store atoms continuously and also makes sure that the drips of water do not cause a lot of splashing (heating of BECs)," he said.
The next step would be to improve the number of atoms in the source, perhaps by implementing a large-volume optical trap. Another important step would be to demonstrate a phase-coherent condensate merger using a matter wave amplification technique pioneered by the MIT group and a group in Japan, he said.
This work is funded by the National Science Foundation, the Office of Naval Research, the Army Research Office, the Packard Foundation and NASA.
| 3.528685 |
Nova Scotia is one of Canada’s three Maritime provinces and is the most populous province of the four in Atlantic Canada. It is located almost exactly halfway between the Equator and the North Pole, and its provincial capital is Halifax. Nova Scotia is the second-smallest province in Canada with an area of 55,284 square kilometres (21,300 sq mi), including Cape Breton and some 3,800 coastal islands. As of 2011, the population was 921,727, making Nova Scotia the second-most-densely populated province in Canada.
Nova Scotia was already home to the Mi’kmaq people when French colonists established Port Royal, Nova Scotia, the first permanent European settlement in North America north of Florida in 1605. Almost one hundred and fifty years later, the first English and German settlers arrived with the founding of Halifax (1749). The first Scottish migration was on the Hector (1773) and then the first Black migration happened after the American Revolution (1783). Despite the diversity of the cultural heritage of Nova Scotia, much of the twentieth-century tourism efforts focused primarily on all things Scottish. Many recent tourism efforts embrace and showcase Nova Scotia’s diversity.
In 1867 Nova Scotia was one of the three founding provinces of the Canadian Confederation.
| 3.452651 |
Forth Lesson 0
Why Bother?
Forth is weird compared to most popular computer languages. Until you learn how, it is hard to read because it is not based on the syntax of algebraic expressions.
But it is worth learning because a running Forth system gives you an extraordinary degree of low-level control over the system. Unlike most other programming environments that put up walls to hide or block access to "unauthorized" things, Forth makes it easy to get at anything, at any level from low to high.
Forth syntax
Here is a syntactically valid line of Forth code:
this is a test 123 456
Don't try to guess what it does; in fact it doesn't necessarily actually work, because some of the symbols might not be defined. But it is syntactically valid. It consists of 6 words, "this" "is" "a" "test" "123" "456". Words are separated by white space - spaces, tabs, and newlines. In most cases, spaces and newlines are the same.
Another syntactically valid line:
asdf foo jello @W#$%^,T/%$ 1a2qw2 gibbet
That's 6 words. One of them is pretty strange, consisting mostly of punctuation, but it is a word nevertheless. Any string of printing characters is a word, though most Forth implementations limit valid word names to 31 or fewer characters.
Left to right execution
The Forth interpreter is very simple. It parses the next word (i.e. it skips whitespace, then collects characters until it sees another whitespace character) and executes it.
That is it in a nutshell. So if you are trying to understand a Forth program in detail, you have to look at each word in turn and work out what it does. That sounds simple, but it will trip you up if you insist on looking for algebra. Just go left to right, one word at a time.
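For a small taste of what that looks like in practice, here is a line that uses only standard arithmetic and output words, so virtually any Forth system should accept it:

2 3 + .

Going left to right: "2" pushes the number 2 onto the stack, "3" pushes 3, "+" replaces them with their sum, and "." prints that sum, so the line prints 5. Notice there is no algebraic notation anywhere - each word is simply executed as soon as it is read.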
With practice, you will learn enough of the Forth vocabulary (the meanings of standard words) so that you can see what is going on at a glance, without having to puzzle out each individual word. It is just like learning to read - it is tedious until you get the basic vocabulary down, then it is easy.
Thus endeth the lesson.
| 3.176729 |
Definitely Probably One
Definitely Probably One:
A Generation Comes of Age Under China's One-Child Policy
Had China not imposed its controversial but effective one-child policy a quarter-century ago, its population today would be larger than it presently is by 300 million - roughly the whole population of the United States today, or of the entire world around the time of Genghis Khan.
The Chinese population-control policy of one child per family is 25 years old this year. A generation has come of age under the plan, which is the official expression of the Chinese quest to achieve zero population growth. China's adoption of the one-child policy has avoided some 300 million births during its tenure; without it, the Chinese population would currently be roughly 1.6 billion - the number at which the country hopes to stabilize its population around 2050. Many experts agree that it is also the maximum number that China's resources and carrying capacity can support. Standing now at a pivotal anniversary of the strategy, China is asking itself, Where to from here?
China's struggle with population has long been linked to the politics of national survival. China scholar Thomas Scharping has written that contradictory threads of historical consciousness have struggled to mold Chinese attitudes towards population issues. China possesses a "deeply ingrained notion of dynastic cycles" that casts large populations as "a symbol of prosperity, power, and the ability to cope with outside threat." At the same time, though, "historical memory has also interpreted a large population as an omen of approaching crisis and downfall." It was not until economic and development issues re-emerged as priorities in post-Mao Zedong's China that the impetus toward the one-child policy began to build rapidly. During Mao's rule population control was often seen as inhibiting the potential of a large population, but in the years following his death it became apparent that China's population presented itself as more of a liability than an asset. Policymakers eager to reverse the country's backwardness saw population control as necessary to ensure improved economic performance. (In 1982, China's per-capita GDP stood at US$218, according to the World Bank. The U.S. per-capita GDP, by way of comparison, was about $14,000.)
The campaign bore fruit when Mao's successor, Hua Guofeng, along with the State Council, including senior leaders such as Deng Xiaoping, decided on demographic targets that would curb the nation's high fertility rates. In 1979 the government announced that population growth must be lowered to a rate of natural increase of 0.5 percent per year by 1985. In fact, it took almost 20 years to reach a rate of 1 percent per year. (The overestimating was in part due to the lack of appropriate census data in 1979; it had been 15 years since the last population count and even then the numbers provided only a crude overview of the country's demography.) Nevertheless the Chinese government knew that promoting birth-planning policies was the only way to manifest their dedication and responsibility for future generations. In 1982 a new census was taken, allowing for more detailed planning. The government then affirmed the target of 1.2 billion Chinese for the year 2000. Demographers, however, were skeptical, predicting a resurgence in fertility levels at the turn of the century.
The promotion of such ambitious population plans went hand in hand with the need for modernization. Though vast and rich in resources, China's quantitative advantages shrink when viewed from the per-capita perspective, and the heavy burden placed on its resources by China's sheer numbers dictates that population planning remain high on the national agenda. The government has also stressed the correlation between population control and the improved health and education of its citizens, as well as the ability to feed and employ them. In September 2003, the Chinese magazine Qiushi noted that "since population has always been at the core of sustainable development, it is precisely the growth of population and its demands that have led to the depletion of resources and the degradation of the environment. The reduction in birth rate, the changes in the population age structure, especially the improvement in the quality of the population, can effectively control and relieve the pressure on our nation's environment and resources and strengthen our nation's capability to sustain development."
The Reach of the One-Child Policy
Despite the sense of urgency, the implementation of such a large-scale family planning program proved difficult to control, especially as directives and regulations were passed on to lower levels. In 1981, the State Council's Leading Group for Birth Planning was transformed into the State Population and Family Planning Commission. This allowed for the establishment of organizational arrangements to help turn the one-child campaign into a professional state family planning mechanism. Birth-planning bureaus were set up in all counties to manage the directives handed down from the central government.
Documentation on how the policy was implemented and received by the population varies from area to area. There are accounts of heavy sanctions for non-compliance, including the doubling of health insurance and long-term income deductions as well as forced abortions and sterilizations. Peasant families offered the most significant opposition; rural families with only one daughter often insisted that they be given the right to have a second child, in hopes of producing a son. On the other hand, in some regions married couples submitted written commitments to the birth-planning bureaus stating they would respect the one-child policy. Despite this variation, it is commonly accepted that preferential treatment in public services (education, health, and housing) was normally given to one-child families. Parents abiding by the one-child policy often obtained monthly bonuses, usually paid until the child reached the age of 14.
Especially in urban areas it has become commonplace for couples to willingly limit themselves to one child. Cities like Shanghai have recently eased the restrictions so that divorcees who remarry may have a second child, but there, as well as in Beijing and elsewhere, a second child is considered a luxury for many middle-class couples. In addition to the cost of food and clothing, educational expenses weigh heavily: As in many other countries, parents' desire to boost their children's odds of entering the top universities dictates the best available education from the beginning - and that is not cheap. The end of free schooling in China - another recent landmark - may prove to be an even more effective tool for restricting population growth than any family planning policy. Interestingly, the Frankfurter Allgemeine Zeitung has reported that Chinese students who manage to obtain a university education abroad often marry foreigners and end up having more than one child; when they return to China with a foreign spouse and passport they are exempt from the one-child policy.
There are other exceptions as well - it is rumored that couples in which both members are only children will be permitted to have two children of their own, for instance - and it is clear that during the policy's existence it has not been applied even-handedly to all. Chinese national minorities have consistently been subject to less restrictive birth planning. There also appears to have been a greater concentration of family planning efforts in urban centers than in rural areas. By early 1980, policy demanded that 95 percent of urban women and 90 percent of rural women be allowed only one child. In the December 1982 revision of the Chinese constitution, the commitment to population control was strengthened by including birth planning among citizens' responsibilities as well as among the tasks of lower level civil administrators. It is a common belief among many Chinese scholars who support the one-child policy that if population is not effectively controlled the pressures it imposes on the environment will not be relieved even if the economy grows.
More Services, Fewer Sanctions
Over time, Chinese population policy appears to have evolved toward a more service-based approach consistent with the consensus developed at the 1994 International Conference on Population and Development in Cairo. According to Ru Xiao-mei of the State Population and Family Planning Commission, "We are no longer preaching population control. Instead, we are emphasizing quality of care and better meeting the needs of clients." Family planning clinics across the country are giving women and men wider access to contraceptive methods, including condoms and birth-control pills, thereby going beyond the more traditional use of intrauterine devices and/or sterilization after the birth of the first child. The Commission is also banking on the improved use of counseling to help keep fertility rates down.
Within China, one of the most prevalent criticisms of the one-child policy has been its implications for social security, particularly old-age support. One leading scholar envisions a scenario in which one grandchild must support two parents and four grandparents (the 4-2-1 constellation). This development is a grave concern for Chinese policymakers (as in other countries where aging populations stand to place a heavy burden on social security infrastructures as well as the generations now working to support them).
A related concern, especially in rural China where there is a lack of appropriate pension systems and among families whose only child is a daughter, is that it is sons who have traditionally supported parents in old age. The one-child policy and the preference for sons have also widened the ratio of males to females, raising alarms as the first children born into the one-child generation approach marriage age. The disparity is aggravated by modern ultrasound technology, which enables couples to abort female fetuses in hopes that the next pregnancy produces a son; although this practice is illegal, it remains in use. The 2000 census put the sex ratio at 117 boys to 100 girls, and according to The Guardian newspaper, China may have as many as 40 million single men by 2020. (There are several countries where the disparity is even greater. The UN Population Fund reports that countries such as Bahrain, Oman, Qatar, Saudi Arabia, and United Arab Emirates have male-to-female ratios ranging between 116:100 and 186:100.)
However, the traditional Chinese preference for sons may be on the decline. Dr. Zhang Rong Zhou of the Shanghai Population Information Center has argued that the preference for boys is weakening among the younger generation, in Shanghai at least, in part because girls cost less and are easier to raise. The sex ratio in Shanghai accordingly stands at 105 boys to every 100 girls, which is the international average. Shanghai has distinguished itself over the past 25 years as one of the first urban centers to adopt the one-child policy, and it promises to be a pioneer in gradually relaxing the restrictions in the years to come. Shanghai was the first region in China to have negative fertility growth; 2000 census data indicated that the rate of natural increase was -0.9 per 1,000.
A major concern remains that as the birth rate drops a smaller pool of young workers will be left to support a large population of retirees. Shanghai's decision to allow divorced Chinese who remarry to have a second child is taking advantage of the central government's policy, which lets local governments decide how to apply the one-child rule. Although Shanghai has devoted much effort to implementing the one-child policy over the past 25 years, the city is now allowing qualifying couples to explore the luxury of having a second child. This is a response to rising incomes (GDP has grown about 7 percent per year over the past 20 years) and divorce rates. As noted above, however, many couples, although often better off then their parents, remain hesitant to have more than one child because of the expense.
The first generation of only children in China is approaching parenthood accustomed to a level of economic wealth and spending power - and thus often to lifestyles - that previous generations could not even have imagined. However, China also faces a rapidly aging population. In the larger scheme of things, this may be the true test of the government's ability to provide for its citizens. The fate of China's family planning strategy - in a context in which social security is no longer provided by family members alone but by a network of government and/or private services - may be decided by the tension between the cost of children and the cost of the elderly. There seems little doubt, however, that family planning will be a key element of Chinese policymaking for many years to come.
Claudia Meulenberg, a former Worldwatch intern, received her master's degree from the George Washington University's Elliott School of International Affairs in May and now works at the Institute for International Mediation and Conflict Resolution at The Hague in her home country of the Netherlands.
References and readings for each article are available at www.worldwatch.org/pubs/mag/.
| 3.00546 |
First Detailed Look at RNA Dicer
Scientists have gotten their first detailed look at the molecular structure of an enzyme that Nature has been using for eons to help silence unwanted genetic messages. A team of researchers with Berkeley Lab and the University of California, Berkeley, used x-ray crystallography at ALS Beamlines 8.2.1 and 8.2.2 to determine the crystal structure of Dicer, an enzyme that plays a critical role in a process known as RNA interference. The Dicer enzyme is able to snip a double-stranded form of RNA into segments that can attach themselves to genes and block their activity. With this crystal structure, the researchers learned that Dicer serves as a molecular ruler, with a clamp at one end and a cleaver at the other end a set distance away, that produces RNA fragments of an ideal size for gene-silencing.
RNA—ribonucleic acid—has long been known as a multipurpose biological workhorse, responsible for carrying DNA's genetic messages out from the nucleus of a living cell and using those messages to make specific proteins in a cell's cytoplasm. In 1998, however, scientists discovered that RNA can also block the synthesis of proteins from some of those genetic messages. This gene-silencing process is called RNA interference and it starts when a double-stranded segment of RNA (dsRNA) encounters the enzyme Dicer.
Dicer cleaves dsRNA into smaller fragments called short interfering RNAs (siRNAs) and microRNAs (miRNAs). Dicer then helps load these fragments into a large multiprotein complex called RISC, for RNA-Induced Silencing Complex. RISC can seek out and capture messenger RNA (mRNA) molecules (the RNA that encodes the message of a gene) with a base sequence complementary to that of its siRNA or miRNA. This serves to either destroy the genetic message carried by the mRNA outright or else block the subsequent synthesis of a protein.
Until now, it has not been known how Dicer is able to recognize dsRNA and cleave those molecules into products with lengths that are exactly what is needed to silence specific genes. The Berkeley researchers were able to purify and crystallize a Dicer enzyme from Giardia intestinalis, a one-celled microscopic parasite that can infect the intestines of humans and animals. This Dicer enzyme in Giardia is identical to the core of a Dicer enzyme in higher eukaryotes, including humans, that cleaves dsRNA into lengths of about 25 bases.
In this work, the researchers describe a front view of the structure as looking like an axe. On the handle end there is a domain that is known to bind to small RNA products, and on the blade end there is a domain that is able to cleave RNA. Between the clamp and the cleaver is a flat-surfaced region that carries a positive electrical charge. The researchers propose that this flat region binds to the negatively charged dsRNA like biological Velcro, enabling Dicer to measure out and snip specified lengths of siRNA. When you put the clamp, the flat area, and the cleaver together, you get a pretty good idea as to how Dicer works. The research team is now using this structural model to design experiments that might reveal what triggers Dicer into action.
In addition, one size does not fit all for Dicer: different forms of the Dicer enzyme are known to produce different lengths of siRNA, ranging from 21 to 30 base pairs in length or longer. Having identified the flat-surfaced positively charged region in Dicer as the "ruler" portion of the enzyme, the researchers speculate that it may be possible to alter the length of a long connector helix within this domain to change the lengths of the resulting siRNA products. The researchers would like to see what happens when you take a natural Dicer and change the length of its helix.
Research conducted by I.J. MacRae and K. Zhou (University of California, Berkeley, and Howard Hughes Medical Institute); F. Li, A. Repic, A.N. Brooks, and W.Z. Cande (University of California, Berkeley); P.D. Adams (Berkeley Lab); and J.A. Doudna (University of California, Berkeley, Howard Hughes Medical Institute, and Berkeley Lab).
Research funding: National Institutes of Health. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences.
Publication about this research: I.J. MacRae, K. Zhou, F. Li, A. Repic, A.N. Brooks, W.Z. Cande, P.D. Adams, and J.A. Doudna, "Structural basis for double-stranded RNA processing by Dicer," Science 311, 195 (2006).
| 3.819931 |
The grand staircase of the Tolstoy House (Paradnaja lestnitza doma Tolstykh)
Overview and History
Odessa is the largest city on the coastline of the Black Sea and was once the third largest city in Russia, after Moscow and St. Petersburg. Her nicknames are "the Pearl of the Black Sea", "Odessa Mama" and "Southern Palmira."
The name probably comes from the earliest recorded inhabitants, a Greek colony called Odessos which disappeared around the fourth century AD. Here's a lightning overview of Odessa's rulers, from the beginning. First there were the ancient Greeks, then miscellaneous nomadic tribes, the Golden Horde of Mongolia, the Grand Duchy of Lithuania, the Crimean Khanate, the Ottoman Empire, the Russian Empire, the U.S.S.R, and finally Ukrainian independence in 1991.
The founding of the first city in this location dates to 1240 AD and is credited to a Turkish Tatar named Hacibey Khan. Its name at that time was Khadjibey. The first fortress was built in the fourteenth century, when Odessa was already becoming a major trading center. The fortress served to protect the harbor. Khadjibey became part of the Ottoman Empire in the early sixteenth century. Its fortress was rebuilt by the Ottomans and named Yeni Dunya, around 1764 AD.
The eighteenth century saw Odessa change hands from Turkish to Russian control. Russia captured Odessa in 1789 under the command of Jose de Ribas, a Spaniard who became a Russian admiral and played a major part in the victory. Jose de Ribas gets the credit for founding the modern city of Odessa -- his name is remembered in the most prominent street through the heart of Odessa -- Deribasovskaya Street.
In the treaty of Jassy in 1792, Turkey gave over control of a wide swath of land encompassing modern-day Ukraine and Odessa. The city was rebuilt to be a fort, commercial port and naval base. During the nineteenth century Odessa attracted immigrants from Greece, Bulgaria, Romania, Armenia and all over Europe, enjoying its status as a free port.
Odessa was bombed by British and French weaponry during the Crimean War of the 1850's. After the destruction was repaired, a railroad joined Odessa to Kiev in 1866 and the city rapidly developed as the main export center for grain production. It became a center of Ukrainian nationalism at the turn of the 20th century and in 1905 Odessa was the scene of a workers' uprising, led by sailors from the battleship Potemkin. During the uprising hundreds of citizens were murdered on the staircase that has come to be called "the Potemkin Steps."
During WWI Odessa was bombarded by the Turkish fleet and after the Bolshevik Revolution the city was occupied by the Central Powers, the French and the Red Army. In 1922 Odessa was unified with the Ukrainian Soviet Socialist Republic. There was terrible suffering in the famine which took place after the Russian revolution in 1921.
Odessa was taken by German forces in 1941, and almost 300,000 civilians were killed. It remained under Romanian administration during WWII until its liberation by the Soviet Army in 1944. The city went through another rapid growth period after WWII, with industries of ship-building, oil refineries and chemical processing. The city became part of newly-independent Ukraine in 1991 after the fall of communism.
By air, the International Airport of Odessa is where you'll arrive and it's linked to the city by buses. Passenger ships from Istanbul, Haifa and Varna connect with the port. The Marine terminal is at the bottom of the Potemkin steps. When you get to the top you'll be greeted by the Duke of Richelieu, one of the city's founding fathers. This staircase also forms an optical illusion; looking down from the top, the steps are invisible and the side walls of the staircase appear to run parallel. Don't be fooled.
The main railway station is in the southern part of the city and it's connected with trams and buses, as usual, to get you around.
People and Culture
Things to do, Recommendations
The Opera House is the oldest and most famous in Odessa, built in 1810 with rich decorative rococo style. Here's a look at the Opera Theater at night. The Palais-Royal is adjoined to the Opera Theater and is also worth a trip to see.
On the "must-see" list, Deribasovskaya Street is the very heart of Odessa. Its unique character lasted even when adherence to Soviet-design styles was strictly promoted -- so here you can find amazing architecture, outdoor cafes and restaurants, cobblestone streets and no vehicle traffic.
Here's a look in the Passage shopping mall and hotel in the city center, a cool place to walk around.
Visit the Spaso-Preobrazhensky Cathedral, the largest Orthodox Church in the city. It's been newly reconstructed after its destruction by Bolsheviks in the 1930's.
Architectural curiosities: go and find the one-wall building when you run out of things to do. This would be first on my list, actually. Here's another mixup of architectural styles to have a look at.
Finally, go and visit Empress Ekaterina, one of the main founders of the city, at her monument.
Text by Steve Smith.
| 3.206414 |
[Author's note: Although I am employed by the Japanese American National Museum, this article should not be construed as coming from the National Museum. Instead, this article is my personal opinion and should be taken as such.]
Over the last month, I have posted articles about my grandfather and what happened to him during the Second World War. Much of my grandfather’s story was not unique. Approximately 120,000 Japanese Americans were illegally incarcerated during the war, their only crime was looking like the enemy. The majority of those incarcerated were American citizens.
When most people refer to where the Japanese American were held, they use the term: internment camp. But the term is not only inaccurate but also hides what they really were: concentration camps.
Before you get angry or offended, let me explain.
According to the Merriam-Webster dictionary, a concentration camp is "a camp where persons (as prisoners of war, political prisoners, or refugees) are detained or confined." The definition of a concentration camp describes exactly what happened to the Japanese Americans during WWII, where they were political prisoners confined in a camp.
One of the reasons people are reluctant to use the term is that they don't want to imply that what happened in the United States was similar to what happened to Jews and others in Europe. But I believe what happened in Europe was not a concentration camp but much, much worse. A more accurate term would be "death camp," because the main purpose of the European camps was to torture and kill their prisoners.
In the book, Common Ground: the Japanese American National Museum and the Culture of Collaborations, the Museum curators addressed this debate:
A “concentration camp” is a place where people are imprisoned not because of any crimes they committed, but simply because of who they are. Although many groups have been singled out for such persecution throughout history, the term “concentration camp” was first used at the turn of the century in the Spanish American and Boer Wars.
During World War II, America’s concentration camps were clearly distinguishable from Nazi Germany’s. Nazi camps were places of torture, barbarous medical experiments and summary executions: some were extermination centers with gas chambers. Six million Jews were slaughtered in the Holocaust. Many others, including Gypsies, Poles, homosexuals and political dissidents were also victims of the Nazi concentration camps.
In recent years, concentration camps have existed in the former Soviet Union, Cambodia and Bosnia.
Despite differences, all had one thing in common: the people in power removed a minority group from the general population and the rest of society let it happen.
It should be noted that United States government and military officials (including the President) often referred to these places as concentration camps. It is also important to note that not all Japanese Americans agree with the use of the term. Some Japanese Americans would prefer to use the government terminology. Although I disagree with them, it is their right to do so.
If concentration camp is the historically most accurate term, is saying internment camp wrong? Yes, because internment camp is a euphemism. According to the Merriam-Webster dictionary, a euphemism is "the substitution of an agreeable or inoffensive expression for one that may offend or suggest something unpleasant." A good example of a euphemism is saying someone was "eliminated" versus "killed."
Think it doesn’t make a difference? What images are evoked when you hear concentration camp versus when you hear internment camp? Internment seems benign at worst while concentration camp is always construed negatively. That difference is intentional.
Mako Nakagawa, a former teacher and Japanese American activist, spoke about the negative effects of the euphemism on the general perception of the World War II experiences of Japanese Americans in an interview with the Nichi Bei, a Japanese American newspaper:
Government-created euphemistic language led to some people actually believing that the Japanese Americans were being protected and even pampered in the camps. The use of inaccurate terms can, and too often does, distort facts into outright fantasies.
The old adage that “sticks and stones may break bones/But words will never hurt” is not true. Words have power. They can create and they can destroy. Every time I write anything, whether it’s a screenplay, blog, or email, I remember the following quote from Pearl Strachan to remind myself how important my words can be: “Handle them carefully, for words have more power than atom bombs.”
Internment camp wasn’t the only euphemism. Here is a short list of some of the other more egregious ones:
This last one is so unbelievable (and not very well known), I feel it is important to expand on it a little. In the evacuation order, it states:
All Japanese persons, both alien and non-alien, will be evacuated from the above designated area by 12:00 o’clock noon Tuesday, April 7, 1942.
If an alien is someone who is not a citizen, a non-alien is a citizen. But they couldn't say, "All Japanese persons, both citizen and non-citizen, will be evacuated" because it would be too obviously unconstitutional. But if we say "non-alien," most people wouldn't give it a second thought.
Hiding the truth of what happened behind euphemistic language doesn’t allow us as a country to learn from our mistake and make sure it doesn’t happen again. That’s why when people ask me about my family’s experience in World War II, I always make sure to start by saying that they were incarcerated for almost six years in America’s concentration camps.
If you want to learn more, I recommend reading Words Can Lie or Clarify: Terminology of the World War II Incarceration of Japanese Americans by Aiko Herzig-Yoshinaga. (Aiko was one of the people responsible for proving that the incarceration of Japanese Americans was not based on a military necessity but racism.)
Finally, Mako Nakagawa will be speaking at the Japanese American National Museum on August 27, 2011 at 2pm. She will discuss euphemisms and the importance of using accurate terminology.
| 3.398358 |
Poyang Lake, located in Jiangxi Province, is the largest freshwater lake in China. It has a surface area of 3,585 km², a volume of 25 km³ and an average depth of eight meters. The lake provides a habitat for half a million migratory birds, and is a favorite destination for birding. It is fed by the Gan, Xin, and Xiu rivers, which connect to the Yangtze through a channel.
During the winter, the lake becomes home to a large number of migrating Siberian cranes, up to 90% of which spend the winter there.
Historically, although Poyang Lake has also been called Pengli Marsh (彭蠡澤), they are not the same. Before the Han Dynasty, the Yangtze followed a more northerly course through what is now Lake Longgan (龍感湖), whilst Pengli Marsh formed the lower reaches of the Gan River. The area that is now Poyang Lake was a plain along the Gan River. Around AD 400, the Yangtze River switched to a more southerly course, causing the Gan River to back up and form Lake Poyang. The backing up of the Gan River drowned Poyang County and Haihun County, forcing a mass migration to Wucheng Township in what is now Yongxiu County. Wucheng thus became one of the great ancient townships of Jiangxi Province. This migration gave birth to the phrase, "Drowning Haihun County gives rise to Wucheng Township" 「淹了海昏縣,出了吳城鎮」.
Lake Poyang reached its greatest size during the Tang Dynasty, when its area reached 6000 km².
There has been a fishing ban in place since 2002.
In 2007 fears were expressed that China's finless porpoise, known locally as the jiangzhu ("river pig"), a native of the lake, might follow the baiji, the Yangtze river dolphin, into extinction.
Calls have been made for action to save the porpoise, of which about 1,400 remain: between 700 and 900 in the Yangtze, and about another 500 in Poyang and Dongting Lakes.
2007 population levels are less than half the 1997 levels, and the population is dropping at a rate of 7.3 per cent per year.
Sand dredging has become a mainstay of local economic development in the last few years, and is an important source of revenue in the region that borders Poyang Lake. But at the same time, high-density dredging projects have been the principal cause of the death of the local wildlife population.
Dredging makes the waters of the lake muddier, and the porpoises cannot see as far as they once could, and have to rely on their highly-developed sonar systems to avoid obstacles and look for food. Large ships enter and leave the lake at the rate of two a minute and such a high density of shipping means the porpoises have difficulty hearing their food, and also cannot swim freely from one bank to the other.
In 1363, the Battle of Lake Poyang took place there, and it is claimed to be the largest naval battle in history.
The lake has also been described as the "Chinese Bermuda Triangle". Many ships have disappeared while sailing in it.
On 16 April 1945, a Japanese troop ship vanished without a trace with 200 sailors.
| 3.115833 |
Over the years, countless scholarly works have been written about the African-American experience. In addition to examining the African-American's place in American history, some scholars have taken special interest in the black family, black youth, and black urban life (Billingsley, 1968; Frazier, 1939; Glasgow, 1980; Glick & Mills, 1974; Gutman, 1976; Hill, 1973; Moynihan, 1965; Wilson, 1987). As early as 1908, W. E. B. DuBois wrote The Negro American Family and, years before that, described black life in the city of Philadelphia (1899). Since that time, except among a handful of scholars, interest
in these subjects has waxed and waned, invariably increasing after incidents of urban unrest and turmoil such as the riots of the 1960s. The most recent riot, occurring in Los Angeles in May 1992, and escalating incidences of senseless urban violence have combined to renew scholarly, as well as public, concern for discovering why such events occur.
One has only to look at statistical data compiled and interpreted in book form (Hacker, 1992) or data directly from the U.S. Bureau of the Census or from other government agencies to see why there might be unrest, despair, and even a sense of hopelessness in the black urban ghetto in general and among black young adults in particular. It is clear that these young people have much to contend with and have fewer and fewer tools to overcome the obstacles before them, obstacles that have the power to defeat them even before they are out of infancy. These are forces that weaken the black family, that undermine education, and that glorify violence.
The 1990 census report indicates that African-Americans make up 11.9 percent of the total population of this country, yet they disproportionately contribute to statistics which, when translated, portray the face of ongoing human tragedy. To begin with, almost two-thirds of all black babies are now born outside marriage. This means that a large percentage of black families are headed by females. In fact, 56.2 percent of all black families are headed by women and 55.1 percent of these women have never been married (Hacker, 1992, pp. 67-74). More disturbing is the tendency of black teenagers to begin sexual activity at a relatively early age. It is estimated that, by age fifteen, 68.6 percent of black teenagers have engaged in sexual intercourse. The results of this activity are that some 40.7 percent of all black teenage girls become pregnant by age eighteen. Some 99.3 percent of these girls elect to keep and raise their babies (p. 76). Many of these girls live in multigenerational households with a mother, other children, and the daughter's children (p. 72).
Perhaps the most devastating statistics have to do with the effect these lifestyle patterns have on the way many of these black families live. Fifty-six percent of black single parent families have incomes less than the poverty level of $10,530 for a family of three. In fact, 39.8 percent of families receiving federally sponsored Aid for Dependent Children (AFDC) are black. This means that they are, because of income, relegated, for the most part, to substandard housing, inadequate health care, and inferior schools.
The litany continues, but the statistics concerning black men are particularly disturbing. Nationwide, 500,000 black men are serving time in jails and prisons for criminal offenses. More than 1 million have criminal records (p. 74). Violent death now accounts for more deaths among young black men than any other cause. If a black man is fifteen to twenty-five years old, he is 3.25 times more likely to die than his black female counterpart. What is most dismaying is that the leading cause of death among black men in this age group is gunshot wounds (Hacker, 1992, p. 75).
Historically, African-Americans have, in very large numbers, been poor. In 1990, they made up 10.1 percent of the work force but received only 7.8 percent of all earnings. In that same year, the median income for all black families was $21,423 as compared with $36,915 for all white families. In 1990, 37 percent of all black families earned less than $15,000 a year, and 44.8 percent of all black children lived below the poverty line (Hacker, 1992, pp. 98-99). Even with added education, there still remains an income disparity between blacks and whites. With a high school diploma, black men earn approximately $797 for every $1,000 earned by a white man with the same diploma. With a college degree, black men earn only $798 compared with the $1,000 earned by their white counterparts. Black women, on the other hand, are much closer to achieving parity with the earnings of white women at every educational level (Hacker, 1992, p. 95).
The majority of poor African-Americans live in the central cities of this country, and 70 percent are concentrated in low income neighborhoods. Here, it is difficult to find work or to get to the place of employment even if one is fortunate enough to have a job. With few factory jobs available--the mainstay of the black working class--unemployment remains high. The unemployment rate among blacks since 1974 has been in double digit figures. In 1983, it was at a high of 20 percent and has consistently remained twice that of the white unemployed (Hacker, 1992, pp. 102-03).
In the area of education, 63.3 percent of all black school-age children still attend segregated schools (Hacker, 1992, p. 162). This statistic reflects not only school segregation but housing segregation as well, since blacks tend to be concentrated in predominantly black neighborhoods. Looking more closely at black school attendance patterns on a state-by-state basis, Illinois tops the list of segregated schools, with 83.2 percent of its black students attending segregated classes. New York is not far behind, with 80.8 percent attending segregated schools, followed closely by Mississippi, with an 80.3 percent rate (p. 163). To …
| 3.280065 |
LABELING AND RATING SYSTEMS
An Interpretation of the LIBRARY BILL OF RIGHTS
Libraries do not advocate the ideas found in their collections or in resources accessible through the library. The presence of books and other resources in a library does not indicate endorsement of their contents by the library. Likewise, providing access to digital information does not indicate endorsement or approval of that information by the library. Labeling and rating systems present distinct challenges to these intellectual freedom principles.
Labels on library materials may be viewpoint-neutral directional aids designed to save the time of users, or they may be attempts to prejudice or discourage users or restrict their access to materials. When labeling is an attempt to prejudice attitudes, it is a censor’s tool. The American Library Association opposes labeling as a means of predisposing people’s attitudes toward library materials.
Prejudicial labels are designed to restrict access, based on a value judgment that the content, language, or themes of the material, or the background or views of the creator(s) of the material, render it inappropriate or offensive for all or certain groups of users. The prejudicial label is used to warn, discourage, or prohibit users or certain groups of users from accessing the material. Such labels sometimes are used to place materials in restricted locations where access depends on staff intervention.
Viewpoint-neutral directional aids facilitate access by making it easier for users to locate materials. The materials are housed on open shelves and are equally accessible to all users, who may choose to consult or ignore the directional aids at their own discretion.
Directional aids can have the effect of prejudicial labels when their implementation becomes proscriptive rather than descriptive. When directional aids are used to forbid access or to suggest moral or doctrinal endorsement, the effect is the same as prejudicial labeling.
Many organizations use rating systems as a means of advising either their members or the general public regarding the organizations’ opinions of the contents and suitability or appropriate age for use of certain books, films, recordings, Web sites, games, or other materials. The adoption, enforcement, or endorsement of any of these rating systems by a library violates the Library Bill of Rights. When requested, librarians should provide information about rating systems equitably, regardless of viewpoint.
Adopting such systems into law or library policy may be unconstitutional. If labeling or rating systems are mandated by law, the library should seek legal advice regarding the law’s applicability to library operations.
Libraries sometimes acquire resources that include ratings as part of their packaging. Librarians should not endorse the inclusion of such rating systems; however, removing or destroying the ratings—if placed there by, or with permission of, the copyright holder—could constitute expurgation. In addition, the inclusion of ratings on bibliographic records in library catalogs is a violation of the Library Bill of Rights.
Prejudicial labeling and ratings presuppose the existence of individuals or groups with wisdom to determine by authority what is appropriate or inappropriate for others. They presuppose that individuals must be directed in making up their minds about the ideas they examine. The American Library Association affirms the rights of individuals to form their own opinions about resources they choose to read or view.
Adopted July 13, 1951, by the ALA Council; amended June 25, 1971; July 1, 1981; June 26, 1990; January 19, 2005; July 15, 2009.
| 3.583467 |
Although Tunisia has enacted several laws pertaining to environmental protection, enforcement of environmental legislation was, until recently, inconsistent, owing to a lack of both staff and resources. In addition, the legal instruments available in the past were not highly effective.
The creation of the National Environmental Protection Agency (ANPE) in 1988, however, led to the development of a National Action Plan for the Environment (NAPE), which attempts to draw together existing environmental legislation and programs and to provide a strategy for natural resource conservation, pollution control, and land-use management. To that end, article 8 of the Air Pollution and Noise Emissions Law No. 88-91 provides that any industrial, agricultural, or commercial establishment, as well as any individual or corporate entity carrying out an activity that may pollute the environment, is obliged to eliminate or reduce discharges and, eventually, to recycle rejected matter. The ANPE may initiate legal proceedings against violators or reach a compromise with the polluting entity.
Legislation pertaining to environmental protection includes the Wildlife Protection Law No. 88-20; the Water Pollution Law No. 75-16; and the Marine Pollution Law No. 75-16.
In addition, Tunisia is a member of ISO. In June 1997, the Technical Committee for the Elaboration of Standards adopted the ISO 14000 series relating to industrial atmospheric emission standards.
Tunisia has entered into several international conventions and agreements dealing with environmental problems and aspects, including:
- Convention of the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons, and on Their Destruction;
- Convention on International Trade in Endangered Species of Wild Fauna and Flora;
- Convention for the Protection of the Mediterranean Sea Against Pollution;
- Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water; and
- International Convention Relating to Intervention on the High Seas in Cases of Oil Pollution Casualties.
| 3.309786 |
Science Fair Project Encyclopedia
Cryonics is the practice of preserving organisms, or at least their brains, for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped.
An organism held in such a state (either frozen or vitrified) is said to be cryopreserved. Cryonicists believe that, barring social disruptions, a perfectly vitrified person can be expected to remain physically viable for at least 30,000 years, after which time cosmic ray damage is thought to be irreparable. Many scientists in the field, most notably Ralph Merkle and Brian Wowk, hold that molecular nanotechnology has the potential to extend even this limit many times over.
To its detractors, the justification for cryonics is unclear, given the primitive state of preservation technology. Advocates counter that even a slim chance of revival is better than no chance. In the future, they speculate, not only will conventional health services be improved, but they will also quite likely have expanded even to the conquering of old age itself (see links at the bottom). Therefore, if one could preserve one's body (or at least the contents of one's mind) for, say, another hundred years, one might well be resuscitated and live indefinitely long. But critics of the field contend that, while an interesting technical idea, cryonics is currently little more than a pipedream, that current "patients" will never be successfully revived, and that decades of research, at least, must occur before cryonics is to be a legitimate field with any hope of success.
Probably the most famous cryopreserved patient is Ted Williams. The popular urban legend that Walt Disney was cryopreserved is false; he was cremated, and interred at Forest Lawn Memorial Park Cemetery. Robert Heinlein, who wrote enthusiastically of the concept, was cremated and his ashes distributed over the Pacific Ocean. Timothy Leary was a long-time cryonics advocate, and signed up with a major cryonics provider. He changed his mind, however, shortly before his death, and so was not cryopreserved.
Obstacles to success
Damage from ice formation
Cryonics has traditionally been dismissed by mainstream cryobiology, of which it is arguably a part. The reason generally given for this dismissal is that the freezing process creates ice crystals, which damage cells and cellular structures—a condition sometimes called "whole body freezer burn"—so as to render any future repair impossible. Cryonicists have long argued, however, that the extent of this damage was greatly exaggerated by the critics, presuming that some reasonable attempt is made to perfuse the body with cryoprotectant chemicals (traditionally glycerol) that inhibit ice crystal formation.
According to cryonicists, however, the freezer burn objection became moot around the turn of the millennium, when cryobiologists Greg Fahy and Brian Wowk, of Twenty-First Century Medicine developed major improvements in cryopreservation technology, including new cryoprotectants and new cryoprotectant solutions, that greatly improved the feasibility of eliminating ice crystal formation entirely, allowing vitrification (preservation in a glassy rather than frozen state). In a glass, the molecules do not rearrange themselves into grainy ice crystals as the solution cools, but instead become locked together while still randomly arranged as in a fluid, forming a "solid liquid" as the temperature falls below the glass transition temperature. Alcor Life Extension Foundation, the world's largest cryonics provider, has since been using these cryoprotectants, along with a new, faster cooling method, to vitrify whole human brains. They continue to use the less effective glycerol-based freezing for patients who opt to have their whole bodies preserved, since vitrification of an entire body is beyond current technical capabilities. The only other full-service cryonics provider in the world, the Cryonics Institute, is currently testing its own vitrification solution.
Current solutions being used for vitrification are stable enough to avoid crystallization even when a vitrified brain is warmed up. This has recently allowed brains to be vitrified, warmed back up, and examined for ice damage using light and electron microscopy. No ice crystal damage was found. However, if the circulation of the brain is compromised, protective chemicals may not be able to reach all parts of the brain, and freezing may occur either during cooling or during warming. Cryonicists argue, however, that injury caused during cooling can be repaired before the vitrified brain is warmed back up, and that damage during rewarming can be prevented by adding more cryoprotectant in the solid state, or by improving rewarming methods.
Some critics have speculated that because a cryonics patient has been declared legally dead, their organs are dead, and thus unable to allow cryoprotectants to reach the majority of cells. Cryonicists respond that it has been empirically demonstrated that, so long as the cryopreservation process begins immediately after legal death is declared, the individual organs (and perhaps even the patient as a whole) remain biologically alive, and vitrification (particularly of the brain) is quite feasible.
Critics have often quipped that it is easier to revive a corpse than a cryonically frozen body. Many cryonicists might actually agree with this, provided that the "corpse" were fresh, but they would argue that such a "corpse" may actually be biologically alive, under optimal conditions. A declaration of legal death does not mean that life has suddenly ended—death is a gradual process, not a sudden event. Rather, legal death is a declaration by medical personnel that there is nothing more they can do to save the patient. But if the body is clearly biologically dead, having been sitting at room temperature for a period of time, or having been traditionally embalmed, then cryonicists would hold that such a body is far less revivable than a cryonically preserved patient, since any process of resuscitation will depend on the quality of the structural and molecular preservation of the brain, which is largely destroyed by ischemic damage (from lack of blood flow) within minutes or hours of cardiac arrest, if the body is left to sit at room temperature. Traditional embalming also largely destroys this crucial neurological structure.
Cryonicists would also point out that the definitions of "death" and "corpse" currently in use may change with future medical advances, just as they have changed in the past, and so they generally reject the idea that they are trying to "raise the dead", viewing their procedures instead as highly experimental medical procedures, whose efficacy is yet to be either demonstrated or refuted. Some also suggest that if technology is developed that allows mind transfer, revival of the frozen brain might not even be required; the mind of the patient could instead be "uploaded" into an entirely new substrate.
The biggest drawback to current vitrification practice is a cost issue. Because the only really cost-effective means of storing a cryopreserved person is in liquid nitrogen, fracturing of the brain, possibly on a large scale, occurs as a result of cooling to −196°C, the temperature of liquid nitrogen. Fracture-free vitrification would require inexpensive storage at a temperature significantly below the glass transition temperature of about −125°C, but high enough to avoid fracturing (−150°C is about right). Alcor is currently developing such a storage system. Alcor believes, however, that even before such a storage system is developed, the current vitrification method is far superior to traditional glycerol-based freezing, since the fractures are very clean breaks that occur even with traditional glycerol cryoprotection, and the loss of neurological structure is still orders of magnitude less than that caused by ice formation.
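The storage constraint described above amounts to a simple temperature window, which the minimal sketch below encodes. Only the −125°C glass transition figure and the −150°C "about right" target come from the paragraph; the 10-degree safety margin below the transition temperature is an invented illustrative assumption, not a published specification.

```python
# Minimal sketch of the fracture-avoidance storage window described above.
GLASS_TRANSITION_C = -125.0   # approximate glass transition temperature quoted above
FRACTURE_RISK_C = -150.0      # colder than this, large-scale fracturing becomes likely
SAFETY_MARGIN_C = 10.0        # hypothetical margin kept below the glass transition

def in_storage_window(temp_c: float) -> bool:
    """True if temp_c is cold enough to stay vitrified but warm enough to
    avoid the fracturing seen near liquid-nitrogen temperature."""
    return FRACTURE_RISK_C <= temp_c <= GLASS_TRANSITION_C - SAFETY_MARGIN_C

for t in (-196.0, -150.0, -140.0, -125.0):
    print(f"{t:7.1f} C -> {'OK' if in_storage_window(t) else 'outside window'}")
```

Under these assumed numbers, liquid-nitrogen temperature (−196°C) falls outside the window while −150°C and −140°C fall inside it, which is the point the paragraph is making.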
While cryopreservation arrangements can be expensive (currently ranging from $28,000 to $150,000), most cryonicists pay for it with life insurance. The elderly, and others who may be uninsurable for health reasons, will often pay for the procedure through their estate. Others simply invest their money over a period of years, accepting the risk that they might die in the meantime. All in all, cryonics is actually quite affordable for the vast majority of those in the industrialized world who really want it, especially if they make arrangements while still young.
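To illustrate the "invest over a period of years" option, here is a rough compound-interest sketch. Only the $28,000–$150,000 price range comes from the paragraph; the $100 monthly contribution and 5 percent annual return are made-up numbers for the example, not figures from any cryonics provider.

```python
# Hypothetical illustration: years of monthly saving needed to reach a target,
# compounding the balance once a month at an assumed annual return.
def years_to_target(target, monthly=100.0, annual_return=0.05):
    balance, months = 0.0, 0
    monthly_rate = annual_return / 12
    while balance < target:
        balance = balance * (1 + monthly_rate) + monthly
        months += 1
    return months / 12

for target in (28_000, 150_000):
    print(f"${target:,}: about {years_to_target(target):.1f} years")
```

With these assumed inputs, the lower-cost arrangements are reachable within a couple of decades of modest saving, while the most expensive ones take far longer, which is why life insurance is the more common funding route.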
Even assuming perfect cryopreservation techniques, many cryonicists would still regard eventual revival as a long shot. In addition to the many technical hurdles that remain, the likelihood of obtaining a good cryopreservation is not very high because of logistical problems. The likelihood of the continuity of cryonics organizations as businesses, and the threat of legislative interference in the practice, don't help the odds either. Most cryonicists, therefore, regard their cryopreservation arrangements as a kind of medical insurance—not certain to keep them alive, but better than no chance at all and still a rational gamble to take.
Brain vs. whole-body cryopreservation
During the 1980s, the problems associated with crystallization were becoming better appreciated, and the emphasis shifted from whole body to brain-only or "neuropreservation", on the assumption that the rest of the body could be regrown, perhaps by cloning of the person's DNA or by using embryonic stem cell technology. The main goal now seems to be to preserve the information contained in the structure of the brain, on which memory and personal identity depends. Available scientific and medical evidence suggests that the mechanical structure of the brain is wholly responsible for personal identity and memories (for instance, spinal cord injury victims, organ transplant patients, and amputees appear to retain their personal identity and memories). Damage caused by freezing and fracturing is thought to be potentially repairable in the future, using nanotechnology, which will enable the manipulation of matter at the molecular level. To critics, this appears a kind of futuristic deus ex machina, but while the engineering details remain speculative, the rapidity of scientific advances over the past century, and more recently in the field of nanotechnology itself, suggest to some that there may be no insurmountable problems. And the cryopreserved patient can wait a long time. With the advent of vitrification, the importance of nanotechnology to the cryonics movement may begin to decrease.
Some critics, and even some cryonicists, question this emphasis on the brain, arguing that during neuropreservation some information about the body's phenotype will be lost and the new body may feel "unwanted", and that in case of brain damage the body may serve as a crude backup, helping restore indirectly some of the memories. Partly for this reason, the Cryonics Institute preserves only whole bodies. Some proponents of neuropreservation agree with these concerns, but still feel that lower costs and better brain preservation justify preserving only the brain.
Historically, cryonics began in 1962 with the publication of The Prospect of Immortality by Robert Ettinger. In the 1970s, the damage caused by crystallization was not well understood. Two early organizations went bankrupt, allowing their patients to thaw out and bringing the matter to the public eye; at that point the problem of cellular damage became better known, and the practice gained something of the reputation of a scam. During the 1980s, the extent of the damage from the freezing process became much clearer and more widely known, and the emphasis of the movement began to shift from whole-body to neuropreservation.
Alcor currently preserves about 60 human bodies and heads in Scottsdale, Arizona. Before the company moved to Arizona from Riverside, California in 1994, it was the center of several controversies, including a county coroner's ruling that a client was murdered with barbiturates before her head was removed by the company's staff. Alcor contended that the drug was administered after her death. No charges were ever filed.
- Engineered negligible senescence
- Life extension
- Interstellar travel
- Immortality Institute
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
| 3.422245 |
Science Fair Project Encyclopedia
Trier (French: Trèves) is Germany's oldest city. It is situated on the western bank of the Moselle River in a valley between low vine-covered hills of ruddy sandstone. It is located in the state of Rhineland-Palatinate near the German border with Luxembourg. Trier had around 100,000 inhabitants at the end of 2002. An important wine-growing region, Mosel-Saar-Ruwer, lies nearby.
The Romans under Julius Caesar subdued the Celtic Treverans in 58 to 50 BC. When the Roman provinces in Germany were reorganised in 16 BC, Augustus decided that Trier, then called Augusta Treverorum, should become the regional capital. From 259 to 274 Trier was the capital of the breakaway Gallic Empire. Later, for a few years (383 - 388), it was the capital of Magnus Maximus, who ruled most of the western Empire.
Sacked by Attila in 451, it passed to the Franks in 463, to Lorraine in 843, to Germany in 870, and back to Lorraine in 895, and was finally united to Germany by the Emperor Henry I. The Archbishop of Trier was, as chancellor of Burgundy, one of the electors of the empire, a right which originated in the 12th or 13th century and which continued until the French Revolution. The last elector moved to Koblenz in 1786, and Trier was the capital of the French department of Sarre from 1794 until 1814, after which time it belonged to Prussia.
The city is well known for its well-preserved Roman buildings, among them the Porta Nigra (the best preserved Roman city gate north of the Alps), a complete amphitheatre, ruins of several Roman baths, and the huge Basilica, a basilica in the original Roman sense: the 67-metre-long throne hall of the Roman Emperor Constantine, which is today used as a Protestant church.
Trier is the oldest seat of a Christian bishop in Germany. In the Middle Ages, the Archbishop of Trier was an important ecclesiastical prince, controlling land from the French border to the Rhine. He was also one of the seven electors of the Holy Roman Empire.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
| 3.812762 |
Is Our Civic Life Really In Decline?
VOTER TURNOUT MAY BE DOWN IN RECENT YEARS, BUT THE INVOLVEMENT OF THE COMMON CITIZEN HAS GROWN TO FAR SURPASS ANYTHING THE FOUNDING FATHERS EVER DREAMED OF
October 1999 | Volume 50, Issue 6
Today we are at the tail end of the long Progressive Era. We know it has got us less than we hoped, but we don’t know how to picture a mode of citizenship that might give us more. Not that it is at all the whole story of citizenship in this century. Citizenship has changed again in the past fifty years, as the civil rights movement and the “rights revolution” broadly added the courtroom to the voting booth as a locus for civic participation. Social movements and political organizations that in the past could hope for change only through legislative action have found that the judicial system can offer a powerful alternative route to their goals.
The civil rights movement opened the door to a widening web of constitutionally guaranteed citizen rights and state and federal laws that expanded citizens’ entitlements and the reach of due process. This affected not only the civil and political rights of African-Americans but the rights of women and of the poor and, increasingly, of minority groups of all sorts. A new notion of citizenship has thus overflowed the banks of electoral activity, opening a new channel in the courts and streaming on across the terrain of everyday life into schools, workplaces, and homes.
The acts and avenues of citizenship today are a world away from anything Jefferson and Washington lived with or conceived of. Today we are left with the legacy of each of the past eras of citizenship. Our inheritance of the politics of assent, the politics of partisanship and loyalty, the politics of the informed citizen, and the politics of rights, together, in all their variety, gives us our historical resource for forging a new model of citizenship for the new century.
| 3.136102 |
Use Project-Based Learning to Meet the National Educational Technology Standards
Posted by Andrew K. Miller on Apr 26, 2012 in Blog, Edutopia
This post originally appeared on Edutopia, a site created by the George Lucas Educational Foundation, dedicated to improving the K-12 learning process by using digital media to document, disseminate, and advocate for innovative, replicable strategies that prepare students. View Original >
The ISTE NETS (National Educational Technology Standards) are more than just simple content standards and learning objectives. If examined closely, they truly can foster an educational shift to engaging, relevant, technology-rich learning. In terms of project-based learning (PBL), the ISTE NETS not only align with, but can truly support, a PBL environment. After my own examination, I felt we must have a #pblchat on the subject.
Weeks ago, this was our topic. Feel free to review the Storify archive of the whole chat to get more ideas. Here are some of my ideas and take-aways, as well as inspirations from others, on how some of the ISTE Student NETS can support PBL. We will focus on five of the Student NETS this time, but keep in mind there are more, as well as the NETS for teachers, administrators and coaches!
Student NET #1: Creativity and Innovation
Students demonstrate creative thinking, construct knowledge, and develop innovative products and processes using technology.
Okay, I’m going to be a bit crass with this description. PBL requires that students create something new, innovate with content, and develop products that show this deeper learning. Students do not gorge on content and then throw it up in a pretty new genre or technology tool. This NET can help teachers ensure that they’re asking for products that require innovation of the content and not regurgitation. Through an innovative project idea and driving question, your students are not only learning content, but creating something new with it.
Student NET #2: Communication and Collaboration
Students use digital media and environments to communicate and work collaboratively, sometimes at a distance, to support individual learning and contribute to the learning of others.
Two of the key 21st century skills in PBL are communication and collaboration. PBL projects balance the learning not only of content, but also 21st century skills that are transferable across disciplines and into life after K-12 schooling. Through this standard, students can communicate and collaborate, both in person with their teams and across the globe, giving an opportunity for global education. Using the right tools for the authentic purposes of collaboration and communication, students can engage in innovative PBL projects.
Student NET #3: Research and Information Fluency
Students apply digital tools to gather, evaluate and use information.
When we unpack this standard, one of the key words here is “inquiry.” Students are not simply doing research. PBL projects require students to engage in in-depth inquiry on a specific topic through posing questions, researching and interpreting data, and reporting it. However, as students move through this cycle of inquiry, they may find incomplete data, require further information or make mistakes. This NET lets students know that revision and reflection are critical to the inquiry process. In addition, it leverages higher-order thinking skills like synthesis and evaluation, which can ensure that PBL projects are stimulating deep learning.
Student NET #4: Critical Thinking, Problem Solving, and Decision Making
Students use critical thinking skills to plan and conduct research, manage projects, solve problems and make informed decisions using appropriate digital tools and resources.
PBL projects must engage students in thinking critically about content, and they often have students attempt to solve a problem. In addition, this standard really pushes for student-centered learning. It is on the students to manage themselves, make decisions and more. The teacher's role is more of a guide on the side, with "just in time" moments of instruction to help students with critical thinking and problem solving. PBL projects also leverage the 21st century skill of critical thinking and problem solving through assessment.
Student NET #5: Digital Citizenship
Students understand human, cultural and societal issues related to technology, and practice legal and ethical behavior.
As students engage in technology-rich projects, it is important to model and practice digital citizenship. Explicit instruction, lessons and activities must take place to ensure that students are creating good “digital footprints.” In addition, this is a great theme inspiration for a PBL project. From a technology class to a language arts class, you can have students make recommendations about digital policy or teach other members of the school community and beyond how to be good digital citizens.
As you build your PBL projects, consider how the ISTE NETS can support your work. The NETS will not only help to hone and refine a PBL project, but also serve as an advocacy piece to stakeholders and other “naysayers.” They can help you focus how to use the technology and keep that focus on student learning for the 21st century. Consider assessing these standards to leverage them! How are you using the NETS in your classroom?
| 3.737401 |
Why are these strange little spheres on Mars? The rover Opportunity chanced across these unusually shaped beads earlier this month while exploring a place named Kirkwood near a crater rim on Mars. The above image, taken by Opportunity's Microscopic Imager, shows that some ground near the rover is filled with these unusual spheres, each spanning only about 3 millimeters. At first glance, the sometimes-fractured balls appear similar to the small rocks dubbed blueberries seen by Opportunity eight years ago, but these spheres are densely compacted and have little iron content. Although it is thought that these orbs formed naturally, which natural processes formed them remains unknown. Opportunity, an older sibling to the recently deployed Curiosity rover, will continue to study these spheres with the hope that they will provide a new clue to the ancient history of the surface of Mars.
(Mars Exploration Rover Mission)
| 3.205347 |