Lennox Island First Nation
Lennox Island is a Mi'kmaq First Nation on Prince Edward Island, Canada, with its headquarters in Lennox Island, northeast of Tyne Valley. The band consists of a single reserve occupying all of Lennox Island. The Lennox Island First Nation was originally known as "L'nui Minegoo" (the Indian or People's Island), and was later known as the Lennox Island Reserve or the Lennox Island Band. It was named after Charles Lennox, Duke of Richmond, by the surveyor Samuel Holland. It also included the reserves that now comprise the Abegweit First Nation. Original permanent inhabitants included Chief Francis Francis, who resided there after the Mi'kmaq were displaced from Cortin Island. The Saint Ann Mission was later established on the island.
Coordinates: 46°36′39.1″N 63°51′9.9″W
Master data management
In computing, master data management (MDM) comprises the set of processes, governance, policies, standards and tools that consistently defines and manages the master data (i.e., the non-transactional data entities) of an organization, which may include reference data. An MDM tool can be used to support master data management by removing duplicates, standardizing data (mass maintaining), and incorporating rules that prevent incorrect data from entering the system, in order to create an authoritative source of master data. Master data are the products, accounts and parties for which business transactions are completed.

The root-cause problem stems from business-unit and product-line segmentation, in which the same customer is serviced by different product lines and redundant data is entered about the customer (the party in the role of customer) and the account in order to process each transaction. The redundancy of party and account data is compounded in the front-to-back-office life cycle, where an authoritative single source for party, account and product data is needed but the data is often once again redundantly entered or augmented.

MDM has the objective of providing processes for collecting, aggregating, matching, consolidating, quality-assuring, persisting and distributing such data throughout an organization, to ensure consistency and control in the ongoing maintenance and application use of this information. The term recalls the concept of a master file from an earlier computing era.

At a basic level, MDM seeks to ensure that an organization does not use multiple (potentially inconsistent) versions of the same master data in different parts of its operations, which can occur in large organizations. A common example of poor MDM is a bank at which a customer has taken out a mortgage: the bank begins to send mortgage solicitations to that customer, ignoring the fact that the person already has a mortgage account relationship with the bank. This happens because the customer information used by the marketing section of the bank lacks integration with the customer information used by the customer-services section, so the two groups remain unaware that an existing customer is also being treated as a sales lead. The process of record linkage is used to associate different records that correspond to the same entity, in this case the same person. Other problems include issues with data quality, consistent classification and identification of data, and data reconciliation.

Master data management of disparate data systems requires data transformations: data extracted from a disparate source system is transformed and loaded into the master data management hub, and to synchronize the disparate sources, the managed master data extracted from the hub is again transformed and loaded back into each source system as the master data is updated. As with other Extract, Transform, Load (ETL) based data movement, these processes are expensive and inefficient to develop and maintain, which greatly reduces the return on investment for a master data management product.
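To make the record-linkage and deduplication steps concrete, here is a minimal sketch in Python of how duplicate party records might be detected by fuzzy matching on normalized name and address fields. The field names, the 0.7/0.3 weighting and the 0.8 threshold are illustrative assumptions, not the behavior of any particular MDM product; production hubs typically use probabilistic or rule-based matching engines.

# Minimal record-linkage sketch: detect likely-duplicate customer records
# by comparing normalized names and addresses. All data, weights, and
# thresholds here are illustrative, not taken from any specific MDM product.
from difflib import SequenceMatcher


def normalize(value: str) -> str:
    """Lowercase, trim, and collapse whitespace so trivial formatting
    differences do not defeat the comparison."""
    return " ".join(value.lower().split())


def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two normalized strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def link_records(records: list[dict], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Return index pairs of records that likely refer to the same party."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            name_score = similarity(records[i]["name"], records[j]["name"])
            addr_score = similarity(records[i]["address"], records[j]["address"])
            # Weight the name more heavily than the address.
            if 0.7 * name_score + 0.3 * addr_score >= threshold:
                pairs.append((i, j))
    return pairs


customers = [
    {"name": "John A. Smith", "address": "12 High St, Springfield"},
    {"name": "Jon Smith", "address": "12 High Street, Springfield"},
    {"name": "Mary Jones", "address": "4 Elm Ave, Rivertown"},
]
print(link_records(customers))  # [(0, 1)] -- the two Smith records match

In practice a blocking key (for example, comparing only records that share a postcode) is usually added so that the pairwise comparison does not grow quadratically with the number of records.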
One of the most common reasons some large corporations experience massive issues with MDM is growth through mergers or acquisitions. Two organizations that merge will typically create an entity with duplicate master data (since each likely had at least one master database of its own prior to the merger). Ideally, database administrators resolve this problem through deduplication of the master data as part of the merger. In practice, however, reconciling several master data systems can present difficulties because of the dependencies that existing applications have on the master databases. As a result, more often than not the two systems do not fully merge but remain separate, with a special reconciliation process defined to ensure consistency between the data stored in the two systems. Over time, as further mergers and acquisitions occur, the problem multiplies: more and more master databases appear, and data-reconciliation processes become extremely complex, and consequently unmanageable and unreliable. Because of this trend, one can find organizations with 10, 15, or even as many as 100 separate, poorly integrated master databases, which can cause serious operational problems in the areas of customer satisfaction, operational efficiency, decision support, and regulatory compliance.

Processes commonly seen in MDM solutions include source identification, data collection, data transformation, normalization, rule administration, error detection and correction, data consolidation, data storage, data distribution, data classification, taxonomy services, item master creation, schema mapping, product codification, data enrichment and data governance. The tools include data networks, file systems, a data warehouse, data marts, an operational data store, data mining, data analysis, data virtualization, data federation and data visualization. One of the newest tools, virtual master data management (virtual MDM), utilizes data virtualization and a persistent metadata server to implement a multi-level automated MDM hierarchy.

The selection of entities considered for MDM depends somewhat on the nature of the organization. In the common case of commercial enterprises, MDM may apply to such entities as customer (customer data integration), product (product information management), employee, and vendor. MDM processes identify the sources from which to collect descriptions of these entities. In the course of transformation and normalization, administrators adapt descriptions to conform to standard formats and data domains, making it possible to remove duplicate instances of any entity. Such processes generally result in an organizational MDM repository, from which all requests for a certain entity instance produce the same description, irrespective of the originating sources and the requesting destination.
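As a rough illustration of the consolidation step just described, the sketch below merges a set of linked duplicate records into a single "golden record" using simple survivorship rules. The rules and field names are hypothetical examples; real MDM systems make survivorship configurable per attribute and per source system.

# Illustrative "golden record" consolidation: once duplicates are linked,
# survivorship rules pick the best value for each attribute. The rules and
# field names here are hypothetical, not a standard.
from datetime import date


def consolidate(duplicates: list[dict]) -> dict:
    """Merge linked records into one master record.

    Survivorship rules used in this sketch:
    - the most recently updated record wins for name and address;
    - the first non-empty value (newest first) wins for every other field.
    """
    newest = max(duplicates, key=lambda r: r["updated"])
    golden = {"name": newest["name"], "address": newest["address"]}
    for record in sorted(duplicates, key=lambda r: r["updated"], reverse=True):
        for field, value in record.items():
            if field not in golden and value:
                golden[field] = value
    return golden


records = [
    {"name": "Jon Smith", "address": "12 High Street, Springfield",
     "phone": "", "updated": date(2012, 3, 1)},
    {"name": "John A. Smith", "address": "12 High St, Springfield",
     "phone": "555-0100", "updated": date(2011, 7, 9)},
]
print(consolidate(records))
# {'name': 'Jon Smith', 'address': '12 High Street, Springfield',
#  'updated': datetime.date(2012, 3, 1), 'phone': '555-0100'}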
Criticism of MDM solutions
The value of, and current approaches to, MDM have come under criticism, with some parties claiming large costs and low return on investment from major MDM solution providers.

See also
- Reference data
- Master data
- Record linkage
- Data steward
- Data visualization
- Customer data integration
- Data integration
- Information as a service
- Product information management
- Identity resolution
- Enterprise information integration
- Linked data
- Semantic Web
- Data governance
- Operational data store
- Form, fit and function
- Single customer view
Mexica
[Image: Music and dance during a One Flower ceremony, from the Florentine Codex.]
The Mexica (Nahuatl: Mēxihcah, [meːˈʃiʔkaʔ]; the singular is Mēxihcatl [meːˈʃiʔkat͡ɬ]), or Mexicas, called Aztecs in Western historiography (although that term is not limited to the Mexica), were an indigenous people of the Valley of Mexico, known today as the rulers of the Aztec empire. They are related to other Nahua peoples. The Mexica were a Nahua people who founded their two cities, Tenochtitlan and Tlatelolco, on raised islets in Lake Texcoco around AD 1200. After the rise of the Tenochca Mexica, they came to dominate the other Mexica city-state, Tlatelolco.

The Mexica gave their name to the placename Mexico (Mēxihco [meːˈʃiʔko]). This refers to the interconnected settlements in the valley which became the site of what is now Mexico City, which held natural, geographical, and population advantages as the metropolitan center of the region of the future Mexican state. This area was expanded upon in the wake of the Spanish conquest and administered from the former Aztec capital as New Spain.

Like many of the peoples around them, the Mexica spoke Nahuatl. The form of Nahuatl used in the 16th century, when it began to be written in the alphabet brought by the Spanish, is known as Classical Nahuatl. Nahuatl is still spoken today by over 1.5 million people.
Notary public
A notary public (or notary or public notary) in the common law world is a public officer constituted by law to serve the public in non-contentious matters usually concerned with estates, deeds, powers-of-attorney, and foreign and international business. A notary's main functions are to administer oaths and affirmations, take affidavits and statutory declarations, witness and authenticate the execution of certain classes of documents, take acknowledgments of deeds and other conveyances, protest notes and bills of exchange, provide notice of foreign drafts, prepare marine or ship's protests in cases of damage, provide exemplifications and notarial copies, and perform certain other official acts depending on the jurisdiction. Any such act is known as a notarization. The term notary public refers only to common-law notaries and should not be confused with civil-law notaries.

With the exceptions of Louisiana, Puerto Rico and Quebec, whose private law is based on civil law, and British Columbia, whose notarial tradition stems from scrivener notary practice, a notary public in the rest of the United States and most of Canada has powers that are far more limited than those of civil-law or other common-law notaries, both of whom are qualified lawyers admitted to the bar: such notaries may be referred to as notaries-at-law or lawyer notaries. Therefore, at common law, notarial service is distinct from the practice of law, and giving legal advice and preparing legal instruments is forbidden to lay notaries such as those appointed throughout most of the United States of America.

Notaries are appointed by a government authority, such as a court or lieutenant governor, or by a regulating body often known as a society or faculty of notaries public. For lawyer notaries, an appointment may be for life, while lay notaries are usually commissioned for a briefer term, with the possibility of renewal. In most common law countries, appointments and their number for a given notarial district are highly regulated. However, since the majority of American notaries are lay persons who provide officially required services, commission numbers are not regulated, which is part of the reason why there are far more notaries in the United States than in other countries (4.5 million, versus approximately 740 in England and Wales and approximately 1,250 in Australia and New Zealand). Furthermore, all U.S. and some Canadian notarial functions are applied to domestic affairs and documents, where fully systematized attestation of signatures and acknowledgment of deeds is a universal requirement for document authentication. By contrast, outside North American common law jurisdictions, notarial practice is restricted to international legal matters or matters where a foreign jurisdiction is involved, and almost all notaries are also qualified lawyers.

For the purposes of authentication, most countries require commercial or personal documents which originate from or are signed in another country to be notarized before they can be used or officially recorded or before they can have any legal effect. To these documents a notary affixes a notarial certificate which attests to the execution of the document, usually by the person who appears before the notary, known as an appearer or constituent (U.S.). In places where lawyer notaries are the norm, a notary may also draft legal instruments known as notarial acts or deeds, which have probative value and executory force, as they do in civil law jurisdictions.
Originals or secondary originals are then filed and stored in the notary's archives, or protocol.

Notaries are generally required to undergo special training in the performance of their duties. Some must also first serve as an apprentice before being commissioned or licensed to practice their profession. In many countries, even licensed lawyers, e.g., barristers or solicitors, must follow a prescribed specialized course of study and be mentored for two years before being allowed to practice as a notary (e.g., British Columbia, England). However, notaries public in the U.S., of whom the vast majority are lay people, require only a brief training seminar and are expressly forbidden to engage in any activities that could be construed as the practice of law unless they are also qualified attorneys. Yet, despite these apparent differences, notarial practice is universally considered to be distinct and separate from that of an attorney (solicitor/barrister). In England and Wales, there is a course of study for notaries which is conducted under the auspices of the University of Cambridge and the Society of Notaries of England and Wales. In the State of Victoria, Australia, applicants for appointment must first complete a Graduate Diploma of Notarial Practice, which is administered by the Sir Zelman Cowen Centre at Victoria University, Melbourne. In bi-juridical jurisdictions, such as South Africa or Louisiana, the office of notary public is a legal profession with educational requirements similar to those for attorneys. Many even have institutes of higher learning that offer degrees in notarial law. Therefore, despite their name, "notaries public" in these jurisdictions are in effect civil law notaries.

Notaries public (also called "notaries", "notarial officers", or "public notaries") hold an office which can trace its origins back to the ancient Roman Republic, before the time of Cicero (106-43 BC), when they were called scribae ("scribes"), tabellius ("writer"), or notarius ("notary"). They are easily the oldest continuing branch of the legal profession worldwide. The history of notaries is set out in detail in Chapter 1 of Brooke's Notary (13th edition):
- The office of a public notary is a public office. It has a long and distinguished history. The office has its origin in the civil institutions of ancient Rome. Public officials, called scribae, that is to say, scribes, rose in rank from being mere recorders of facts and judicial proceedings, copiers and transcribers to a learned profession prominent in private and public affairs. Some were permanent officials attached to the Senate and courts of law whose duties were to record public proceedings, transcribe state papers, supply magistrates with legal forms, and register the decrees and judgments of magistrates.
- In the last century of the Republic, probably in the time of Cicero, and apparently by his adoptive son Marcus Tullius Tiro, after whom they were named 'notae Tironianae', a new form of shorthand was invented, and certain arbitrary marks and signs, called notae, were substituted for words in common use. A writer who adopted the new method was called a notarius. Originally, a notary was one who took down statements in shorthand using these notes, and wrote them out in the form of memoranda or minutes. Later, the title notarius was applied almost exclusively to registrars attached to high government officials, including provincial governors and secretaries to the Emperor.
- Notwithstanding the collapse of the Western Empire in the 5th century AD, the notary remained a figure of some importance in many parts of continental Europe throughout the Dark Ages. When the civil law experienced its renaissance in medieval Italy from the 12th century onwards, the notary was established as a central institution of that law, a position which still obtains in countries whose legal systems are derived from the civil law, including most of Europe and South America. The office of notary reached its apogee in the Italian city of Bologna in the twelfth century, its most distinguished scion being Rolandino Passeggeri, generally known as Rolandino of Bologna, who died in 1300 AD and whose masterwork was the Summa Artis Notariae.
- The separate development of the common law in England, free from most of the influences of Roman law, meant that notaries were not introduced into England until later, in the 13th and 14th centuries. At first, notaries in England were appointed by the Papal Legate. In 1279 the Archbishop of Canterbury was authorized by the Pope to appoint notaries. Not surprisingly, in those early days, many of the notaries were members of the clergy. In the course of time, members of the clergy ceased to take part in secular business, and laymen, especially in towns and trading centres, began to assume the official character and functions of a modern common law notary.
- The Reformation produced no material change in the position and functions of notaries in England. However, in 1533 the enactment of "the Act Concerning Peter's Pence and Dispensations" (the Ecclesiastical Licences Act 1533) terminated the power of the Pope to appoint notaries and vested that power in the King, who then transferred it to the Archbishop of Canterbury, who in turn assigned it to the Court of Faculties and the Master of the Faculties.
- Traditionally, notaries recorded matters of judicial importance as well as private transactions or events where an officially authenticated record or a document drawn up with professional skill or knowledge was required.

Common law jurisdictions
The duties and functions of notaries public are described in Brooke's Notary on page 19 in these terms:
- Generally speaking, a notary public [...] may be described as an officer of the law [...] whose public office and duty it is to draw, attest or certify under his official seal deeds and other documents, including wills or other testamentary documents, conveyances of real and personal property and powers of attorney; to authenticate such documents under his signature and official seal in such a manner as to render them acceptable, as proof of the matters attested by him, to the judicial or other public authorities in the country where they are to be used, whether by means of issuing a notarial certificate as to the due execution of such documents or by drawing them in the form of public instruments; to keep a protocol containing originals of all instruments which he makes in the public form and to issue authentic copies of such instruments; to administer oaths and declarations for use in proceedings [...] to note or certify transactions relating to negotiable instruments, and to draw up protests or other formal papers relating to occurrences on the voyages of ships and their navigation as well as the carriage of cargo in ships." [Footnotes omitted.]

A notary, in almost all common law jurisdictions other than most of North America, is a practitioner trained in the drafting and execution of legal documents.
Notaries traditionally recorded matters of judicial importance as well as private transactions or events where an officially authenticated record or a document drawn up with professional skill or knowledge was required. The functions of notaries specifically include the preparation of certain types of documents (including international contracts, deeds, wills, and powers of attorney) and certification of their due execution, administering of oaths, witnessing affidavits and statutory declarations, certification of copy documents, noting and protesting of bills of exchange, and the preparation of ships' protests. Documents certified by notaries are sealed with the notary's seal or stamp and are recorded by the notary in a register (also called a "protocol") maintained and permanently kept by him or her. These are known as "notarial acts". In countries subscribing to the Hague Convention Abolishing the Requirement of Legalization for Foreign Public Documents, or Apostille Convention, only one further act of certification is required, known as an apostille, which is issued by a government department (usually the Foreign Affairs Department or similar). For countries which are not subscribers to that convention, an "authentication" or "legalization" must be provided by one of a number of methods, including by the Foreign Affairs Ministry of the country from which the document is being sent or the embassy, Consulate-General, consulate or High Commission of the country to which it is being sent.

Information on individual countries

Australia
In all Australian States and Territories (except Queensland) notaries public are appointed by the Supreme Court of the relevant State or Territory. Very few have been appointed as a notary for more than one State or Territory. Most Australian notaries are lawyers, but the overall number of lawyers who choose to become a notary is relatively low. For example, in South Australia (a State with a population of 1.5 million), of the over 2,500 lawyers in that state only about 100 are also notaries, and most of those do not actively practice as such. In Melbourne, Victoria, in 2002 there were only 66 notaries for a city with a population of 3.5 million and only 90 for the entire state. Compare this with the United States, where it has been estimated that there are nearly 5 million notaries for a nation with a population of 296 million. As Justice Debelle of the Supreme Court of South Australia said in In The Matter of an Application by Marilyn Reys Bos to be a Public Notary SASC 320, delivered 12 September 2003, in refusing the application by a non-lawyer for appointment as a notary:

As a general rule, an applicant [for appointment as a notary] should be a legal practitioner of several years standing at least. Even a cursory perusal of texts on the duties and functions of a public notary demonstrates that a number of those functions and duties require at the very least a sound working knowledge of Australian law and commercial practice. In other words, the preparation of a notarial act plainly requires a sound knowledge of law and practice in Australia especially of the due preparation and execution of commercial and contractual instruments. It is essential that notaries in this State have a sufficient level of training, qualification and status to enable them efficiently and effectively to discharge the functions of the office.

Historically there have been some very rare examples of patent attorneys or accountants being appointed, but that now seems to have ceased.
However, there are three significant differences between notaries and other lawyers:
- The duty of a notary is to the transaction as a whole, and not just to one of the parties. In certain circumstances a notary may act for both parties to a transaction as long as there is no conflict between them, and in such cases it is his or her duty to ensure that the transaction they conclude is fair to both sides.
- A notary will often need to place and complete a special clause onto, or attach a special page (known as an eschatocol) to, a document in order to make it valid for use overseas. In the case of some documents which are to be used in some foreign countries, it may also be necessary to obtain another certificate, known either as an "authentication" or an "apostille" (see above), depending on the relevant foreign country, from the Department of Foreign Affairs and Trade.
- A notary identifies himself or herself on documents by the use of his or her individual seal. Such seals have historical origins and are regarded by most other countries as of great importance for establishing the authenticity of a document.

Their principal duties include:
- attestation of documents and certification of their due execution for use in Australia and internationally
- preparation and certification of powers of attorney, wills, deeds, contracts and other legal documents for use in Australia and internationally
- administering of oaths for use in Australia and internationally
- witnessing affidavits, statutory declarations and other documents for use in Australia and internationally
- certification of copy documents for use in Australia and internationally
- exemplification of official documents for use internationally
- noting and protesting of bills of exchange
- preparation of ships' protests
- providing certificates as to Australian law and legal practice

Although it was once usual for Australian notaries to use an embossed seal with a red wafer, some now use a red inked stamp that contains the notary's full name and the words "notary public". It is also common for the seal or stamp to include the notary's chosen logo or symbol. In South Australia and Scotland, it is acceptable for a notary to use the letters "NP" after their name. Thus a South Australian notary may have "John Smith LLB NP" or similar on his business card or letterhead.

Australian notaries do not hold "commissions" which can expire. Generally, once appointed they are authorized to act as a notary for life and can only be "struck off" the Roll of Notaries for proven misconduct. In certain States, for example New South Wales and Victoria, they cease to be qualified to continue as a notary once they cease to hold a practising certificate as a legal practitioner. Even judges, who do not hold practising certificates, are not eligible to continue to practise as notaries.

All Australian jurisdictions also have Justices of the Peace (JP) or Commissioners for Affidavits and other unqualified persons who are qualified to take affidavits or statutory declarations and to certify documents. However, they can only do so if the relevant affidavit, statutory declaration or copy document is to be used only in Australia rather than in a foreign country, with the possible exception of a few Commonwealth countries (not including the United Kingdom or New Zealand) and then only for very limited purposes. Justices of the Peace (JPs) are usually laypersons who have minimal, if any, training (depending on the jurisdiction) but are of proven good character.
Therefore a US notary resembles an Australian JP rather than an Australian notary.

Canada
Canadian notaries public (except in British Columbia and Quebec) are very much like their American counterparts, generally restricted to administering oaths, witnessing signatures on affidavits and statutory declarations, providing acknowledgements, certifying true copies, and so forth. In British Columbia, a notary public is more like a British or Australian notary. Notaries are appointed for life by the Supreme Court of British Columbia, and as a self-regulating profession the Society of Notaries Public of British Columbia is the regulatory body that oversees and sets standards to maintain public confidence. Furthermore, BC notaries exercise far greater power, being able to dispense legal advice and draft public instruments including:
- notarizations/attestations of signatures, affidavits, statutory declarations, certified true copies, letters of invitation for foreign travel, authorization of minor child travel, execution/authentication of international documents, passport application documentation, proof of identity for travel purposes
- real estate law
- wills & estate planning
- contract law - preparation of contracts and agreements, commercial leases and assignments
- easements and rights of way
- insurance loss declarations
- marine bills of sale & mortgages
- marine protestations
- personal property security agreements
- purchaser's side of foreclosures
- subdivisions & statutory building schemes
- zoning applications

In Nova Scotia a person may be a notary public, a commissioner of oaths, or both. Notaries public and commissioners of oaths are regulated by the provincial Notaries and Commissioners Act, and individuals hold a commission granted to them by the Minister of Justice. Under the Act, a notary public has the "power of drawing, passing, keeping and issuing all deeds and contracts, charter-parties and other mercantile transactions in this Province, and also of attesting all commercial instruments brought before him for public protestation, and otherwise of acting as is usual in the office of notary, and may demand, receive and have all the rights, profits and emoluments rightfully appertaining and belonging to the said calling of notary during pleasure." Under the Act, a commissioner of oaths is "authorized to administer oaths and take and receive affidavits, declarations and affirmations within the Province in and concerning any cause, matter or thing, depending or to be had in the Supreme Court, or any other court in the Province." Every barrister of the Supreme Court of Nova Scotia is a commissioner of oaths but must receive an additional commission to act as a notary public. "A Commissioner of Oaths is deemed to be an officer of the Supreme Court of Nova Scotia. Commissioners take declarations concerning any matter to come before a court in the Province." Additionally, individuals with other specific qualifications, such as being a current Member of the Legislative Assembly or a commissioned officer of the Royal Canadian Mounted Police or the Canadian Forces, may act as Commissioners of Oaths by virtue of their office.

In Quebec, civil-law notaries (notaires) are full lawyers licensed to practice notarial law. Quebec notaries draft and prepare major legal instruments (notarial acts), provide complex legal advice, represent clients (out of court) and make appearances on their behalf, act as arbitrator, mediator, or conciliator, and even act as a court commissioner in non-contentious matters.
To become a notary in Quebec, a candidate must hold a bachelor's degree in civil law and a one-year master's degree in notarial law, and must serve a traineeship (stage) before being admitted to practice. The common-law concept of the notary public does not exist in Quebec. Instead, the province has commissioners of oaths (commissaires à l'assermentation), who serve to authenticate legal documents at a fixed maximum rate of CAD $5.00.

India
The Central Government appoints notaries for the whole or any part of the country. State Governments, too, appoint notaries for the whole or any part of their States. On an application being made, any person who has been practicing as a lawyer for at least 10 years is eligible to be appointed a notary. The applicant, if not a legal practitioner, should be a member of the Indian Legal Service, or have held an office under the Central or State Government requiring special knowledge of law after enrollment as an advocate, or have held an office in the department of the Judge Advocate-General or in the armed forces.

Iran
A notary public in Iran is a trained lawyer who must pass special examinations before being able to open an office and start work. The Persian term for a notary is سردفتر, meaning "head of the office", and a notary's assistant is called دفتریار. Both must hold a bachelor's degree in law or a master's degree in civil law.

Ireland
There is archival evidence showing that public notaries, acting pursuant to papal and imperial authority, practised in Ireland in the 13th century, and it is reasonable to assume that notaries functioned there before that time. In Ireland, public notaries were at various times appointed by the Archbishop of Canterbury and the Archbishop of Armagh. The position remained so until the Reformation. After the Reformation, persons appointed to the office of public notary either in Great Britain or Ireland received the faculty by royal authority, and appointments under faculty from the Pope and the emperor ceased. In 1871, under the Matrimonial Causes and Marriage Law (Ireland) Amendment Act 1870, the jurisdiction previously exercised by the Archbishop of Armagh in the appointment of notaries was vested in and became exercisable by the Lord Chancellor of Ireland. In 1920, the power to appoint notaries public was transferred to the Lord Lieutenant of Ireland. The position in Ireland changed once again in 1924 following the establishment of the Irish Free State: under the Courts of Justice Act, 1924, jurisdiction over notaries public was transferred to the Chief Justice of the Irish Free State. In 1961, under the Courts (Supplemental Provisions) Act of that year, the power to appoint notaries public became exercisable by the Chief Justice. This remains the position in Ireland, where notaries are appointed on petition to the Supreme Court, after passing prescribed examinations. The governing body is the Faculty of Notaries Public in Ireland. The vast majority of notaries in Ireland are also solicitors. A non-solicitor who was successful in the examinations set by the governing body applied to the Chief Justice to be appointed a notary public; the Chief Justice heard the adjourned application on 3 March 2009 and appointed the non-solicitor as a notary on 18 July 2011.

Unless excluded under dominion or colonial law, the Master of the Faculties formerly had authority to appoint notaries public in a dominion or colony. The admission of notaries in the Commonwealth was governed specifically by the Public Notaries Act 1833 (UK).
The provisions of the Public Notaries Acts 1801-43 requiring a notary to be a solicitor did not apply overseas, nor did a notary need a practising certificate as a solicitor, or one from the Court of Faculties. The usual procedure is that the applicant lodges with the Court of Faculties a memorial, counter-signed by local merchants, shipping companies, bankers and other persons of substance, which shows the local need for a notary and the fitness of the applicant. The applicant also lodges a certificate of admission as a solicitor, and a fee accompanies the application. The applicant, with the support of two other notaries public who vouch that the applicant is well skilled in the affairs of notarial concern, petitions the Master of the Faculties. The chief consideration in the approval of an application is whether there is sufficient need in the district, regarding the convenience of bankers, ship-owners and merchants. The local society of notaries must be satisfied that a need exists for an additional notary in the area served by the applicant. Priority is given, as a matter of practice, to an applicant within the same firm, as a replacement in the case of the death of a notary, or where a practising notary is reducing his or her workload because of age or infirmity. The Master of the Faculties continues to appoint notaries overseas in the exercise of the general authorities granted by s 3 of the Ecclesiastical Licenses Act 1533 (Eng). In these cases he is guided by local considerations of public convenience.

Until 1973 a separate group of lawyers, known as proctors, existed to carry out litigation. A proctor was not a practitioner in a court of law; these were also known as notaries. However, since 1973 legal practitioners have been classed solely as attorneys-at-law, combining the former advocates and proctors. This new position of attorney-at-law brought with it automatic appointment as a notary public when the practitioner took the oaths as an attorney-at-law, thus becoming legally qualified for litigation. In general, notarial practice is carried out by qualified lawyers.

England and Wales
After the passage of the 1533 Act, which was a direct result of the Reformation in England, all notary appointments were issued directly through the Court of Faculties. The Court of Faculties is attached to the office of the Archbishop of Canterbury. In England and Wales there are several classes of notaries. English notaries, who like solicitors, barristers, legal executives and licensed conveyancers are also commissioners for oaths, acquire the same powers as solicitors and other law practitioners once they are licensed or commissioned notaries, with the exception of the right to represent others before the courts (unless also members of the bar or admitted as solicitors). In practice almost all English notaries, and all Scottish ones, are also solicitors, but typically do not perform such services. Commissioners of oaths are able to undertake the bulk of routine domestic attestation work within the UK, and many documents, including signatures for normal property transactions, do not need professional attestation of signature at all, a lay witness being sufficient. In practice the need for notaries in purely English legal matters is very small; for example, they are not involved in normal property transactions. Since a great many solicitors also perform the function of commissioners for oaths and can witness routine declarations etc.
(all are qualified to do so, but not all offer the service), most work performed by notaries relates in some way to international matters and documents needing to be used abroad, and many of the small number of English notaries have strong foreign-language skills and often a foreign legal qualification. The work of notaries and solicitors in England is separate, although most notaries are solicitors. The Notaries Society gives the number of notaries in England and Wales as "about 1,000", all but seventy of whom are solicitors.

There are also scrivener notaries, who get their name from the Scriveners' Company; until 1999, when they lost this monopoly, they were the only notaries permitted to practise in the City of London. They did not have to qualify first as solicitors, but they had knowledge of foreign laws and languages. Currently, to qualify as a notary public in England and Wales it is necessary to have earned a law degree or qualified as a solicitor or barrister in the past five years, and then to take a two-year distance-learning course styled the Postgraduate Diploma in Notarial Practice. At the same time, any applicant must also gain practical experience. The few who go on to become scrivener notaries require further study of two foreign languages and foreign law and a two-year mentorship under an active scrivener notary. The other notaries in England are either ecclesiastical notaries, whose functions are limited to the affairs of the Church of England, or other qualified persons who are not trained as solicitors or barristers but satisfy the Master of the Faculties of the Archbishop of Canterbury that they possess an adequate understanding of the law. Both of the latter two categories are required to pass examinations set by the Master of Faculties. The regulation of notaries was modernized in the 1990s as a result of section 57 of the Courts and Legal Services Act 1990.

Notarial services generally include:
- attesting the signature and execution of documents
- authenticating the execution of documents
- authenticating the contents of documents
- administration of oaths and declarations
- drawing up or noting (and extending) protests of happenings to ships, crews and cargoes
- presenting bills of exchange for acceptance and payment, noting and protesting bills in cases of dishonour and preparing acts of honour
- attending upon the drawing up of bonds
- drawing mercantile documents, deeds, sales or purchases of property, and wills in English and (via translation) in foreign languages for use in Britain, the Commonwealth and other foreign countries
- providing documents to deal with the administration of the estates of people who are abroad, or who own property abroad
- authenticating personal documents and information for immigration or emigration purposes, or to apply to marry, divorce, adopt children or work abroad
- verification of translations from foreign languages to English and vice versa
- taking evidence in England and Wales as a Commissioner for Oaths for foreign courts
- provision of notarial copies
- preparing and witnessing powers of attorney, corporate records, and contracts for use in Britain or overseas
- authenticating company and business documents and transactions
- international Internet domain name transfers

Scotland
Notaries public have existed in Scotland since the 13th century and developed as a distinct element of the Scottish legal profession. Those who wish to practice as a notary must petition the Court of Session.
This petition is usually presented at the same time as a petition to practice as a solicitor, but can sometimes be earlier or later. However, to qualify, a notary must hold a current practising certificate from the Law Society of Scotland, a requirement introduced in 2007, before which all Scottish solicitors were automatically notaries. Whilst notaries in Scotland are always solicitors, the profession remains separate in that there are additional rules and regulations governing notaries, and it is possible to be a solicitor but not a notary. Since the additional practising certificate became required in 2007, most, but not all, solicitors in Scotland are notaries - a significant difference from the English profession. They are also separate from notaries in other jurisdictions of the United Kingdom. The profession is administered by the Council of the Law Society of Scotland under the Law Reform (Miscellaneous Provisions) (Scotland) Act 1990. In Scotland, the duties and services provided by the notary are similar to those in England and Wales, although notaries are needed for some declarations in divorce matters for which they are not needed in England. Their role declined following the Law Agents (Scotland) Amendment Act 1896, which stipulated that only enrolled law agents could become notaries, and the Conveyancing (Scotland) Act 1924, which extended notarial execution to law agents.

The primary functions of a Scottish notary are:
- oaths, affidavits, and affirmations
- affidavits in undefended divorces and for matrimonial homes
- maritime protests
- execution or certification for foreign jurisdictions, e.g., estates, court actions, powers of attorney, etc.
- notarial execution for the blind or illiterate
- entry of a person to overseas territories
- completion of the documentation required for the registration of a company in certain foreign jurisdictions
- drawing for repayment of Bonds of Debenture

United States
In the United States, a notary public is a person appointed by a state government, e.g., the governor, lieutenant governor, or state secretary, or in some cases the state legislature, whose primary role is to serve the public as an impartial witness when important documents are signed. Since the notary is a state officer, a notary's duties may vary widely from state to state, and in most cases a notary is barred from acting outside his or her home state unless they also have a commission there. In 32 states the main requirements are to fill out a form and pay a fee; many states have restrictions concerning notaries with criminal histories, but the requirements vary from state to state. Notaries in 18 states and the District of Columbia are required to take a course, pass an exam, or both; the education or exam requirements in Delaware and Kansas apply only to notaries who will perform electronic notarizations. A notary is almost always permitted to notarize a document anywhere in the state where their commission is issued. Some states simply issue a commission "at large", meaning no indication is made as to the county from which the person's commission was issued, but some states require the notary to include the county of issue of their commission as part of the jurat or, where seals are required, to indicate the county of issue of their commission on the seal. Merely because a state requires indicating the county where the commission was issued does not necessarily mean that the notary is restricted to notarizing documents in that county, although some states may impose this as a requirement.
Some states (Montana, Wyoming, and North Dakota, among others) allow a notary who is commissioned in a bordering state to also act as a notary in the state, if the other state allows the same. Thus someone who was commissioned in Montana could notarize documents in Wyoming and North Dakota, and a notary commissioned in Wyoming could notarize documents in Montana, but a notary from Wyoming could not notarize documents from North Dakota (or the inverse) unless they had a commission from North Dakota or from a state bordering North Dakota that also allowed North Dakota notaries to practice in that state.

Notaries in the United States are much less closely regulated than notaries in most other common-law countries, typically because U.S. notaries have little legal authority. In the United States, a lay notary may not offer legal advice or prepare documents (except in Louisiana and Puerto Rico) and in most cases cannot recommend how a person should sign a document or what type of notarization is necessary. There are some exceptions; for example, Florida notaries may take affidavits, draft inventories of safe deposit boxes, draft protests for payment of dishonored checks and promissory notes, and solemnize marriages. In most states, a notary can also certify or attest a copy or facsimile. The most common notarial acts in the United States are the taking of acknowledgements and oaths. Many professions may require a person to double as a notary public; for example, US court reporters are often notaries, as this enables them to swear in witnesses (deponents) when they are taking depositions, and secretaries, bankers, and some lawyers are commonly notaries public. Despite their limited role, some American notaries may also perform a number of far-ranging acts not generally found anywhere else. Depending on the jurisdiction, they may: take depositions, certify any and all petitions (ME), witness third-party absentee ballots (ME), provide no-impediment marriage licenses, solemnize civil marriages (ME, FL, SC), witness the opening of a safe deposit box or safe and take an official inventory of its contents, take a renunciation of dower or inheritance (SC), and so on.

Acknowledgment
"An acknowledgment is a formal [oral] declaration before an authorized public officer. It is made by a person executing [signing] an instrument who states that it was his [or her] free act and deed." That is, the person signed it without undue influence and for the purposes detailed in it. A certificate of acknowledgment is a written statement signed (and in some jurisdictions, sealed) by the notary or other authorized official that serves to prove that the acknowledgment occurred. The form of the certificate varies from jurisdiction to jurisdiction, but will be similar to the following:

Before me, the undersigned authority, on this ______ day of ___________, 20__, personally appeared _________________________, to me well known to be the person who executed the foregoing instrument, and he/she acknowledged before me that he/she executed the same as his/her voluntary act and deed.

Oath, affirmation, and jurat
A jurat is the official written statement by a notary public that he or she has administered and witnessed an oath or affirmation for an oath of office, or on an affidavit - that is, that a person has sworn to or affirmed the truth of information contained in a document, under penalty of perjury, whether that document is a lengthy deposition or a simple statement on an application form.
The simplest forms of jurat and of the oath or affirmation administered by a notary are:
- Jurat: "Sworn (or affirmed) to before me this _______ day of ____________, 20__."
- Oath: "Do you solemnly swear that the contents of this affidavit subscribed by you are correct and true?"
- Affirmation (for those opposed to swearing oaths): "Do you solemnly, sincerely, and truly declare and affirm that the statements made by you are true and correct?"

In the U.S., notarial acts normally include what is called a venue or caption, that is, an official listing of the place where a notarization occurred, usually in the form of the state and county, with the abbreviation "ss." (for Latin scilicet, "to wit"), normally referred to as a "subscript", often in these forms:

State of .......  )
                  ) ss:
County of ....... )

State of ________
County of _______, to-wit:

The venue is usually set forth at the beginning of the instrument or at the top of the notary's certificate. If at the head of the document, it is usually referred to as a caption. In times gone by, the notary would indicate the street address at which the ceremony was performed, and this practice, though unusual today, is occasionally encountered.

California
The California Secretary of State, Notary Public & Special Filings Section, is responsible for appointing and commissioning qualified persons as notaries public for four-year terms. Prior to sitting for the notary exam, one must complete a mandatory six-hour course of study. This required course of study is conducted either in an online, home-study, or in-person format via an approved notary education vendor. Both prospective notaries and current notaries seeking reappointment must undergo an "expanded" F.B.I. and California Department of Justice background check. Various statutes, rules, and regulations govern notaries public. California law sets maximum, but not minimum, fees for services related to notarial acts (e.g., per signature: acknowledgment $10, jurat $10, certified power of attorney $10, et cetera). A fingerprint (typically of the right thumb) may be required in the notary journal depending on the transaction in question (e.g., a deed, quitclaim deed, or deed of trust affecting real property, a power of attorney document, et cetera). Documents with blank spaces cannot be notarized (a further anti-fraud measure). California explicitly prohibits notaries public from using a literal foreign-language translation of their title. The use of a notary seal is required.

Colorado
Notarial acts performed in Colorado are governed under the Notaries Public Act, 12-55-101, et seq. Pursuant to the Act, notaries are appointed by the Secretary of State for a term not to exceed four years. Notaries may apply for appointment or reappointment online at the Secretary of State's website. A notary may apply for reappointment 90 days before his or her commission expires. Beginning in early 2010, all new notaries will be required to take a training course and pass an examination to ensure minimal competence with the Notaries Public Act. A course of instruction approved by the Secretary of State may be administered by approved vendors and shall bear an emblem with a certification number assigned by the Secretary of State's office. An approved course of instruction covers relevant provisions of the Colorado Notaries Public Act, the Model Notary Act, and widely accepted best practices. In addition to courses offered by approved vendors, the Secretary of State offers free certification courses at the Secretary of State's office.
Sign-up information for the free course is available on the Secretary of State's notary public training page. A third party seeking to verify the status of a Colorado notary may do so through the Secretary of State's website. Constituents seeking an apostille or certificate of magistracy are requested to complete the appropriate form before sending in their documents or presenting them at the Secretary of State's office.

Florida
Florida notaries public are appointed by the Governor to serve a four-year term. New applicants and commissioned notaries public must be bona fide residents of the State of Florida, and first-time applicants must complete a mandatory three-hour education course administered by an approved educator. Florida state law also requires that a notary public post a bond in the amount of $7,500.00. A bond is required in order to compensate an individual harmed as a result of a breach of duty by the notary. Applications are submitted and processed through an authorized bonding agency. Florida is one of three states (Maine and South Carolina are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony). The Department of State appoints civil law notaries, also called "Florida International Notaries", who must be Florida attorneys who have practiced law for five or more years. Applicants must attend a seminar and pass an exam administered by the Department of State or any private vendor approved by the department. Such civil law notaries are appointed for life and may perform all of the acts of a notary public in addition to preparing authentic acts.

Illinois
Notaries public in Illinois are appointed by the Secretary of State for a four-year term. Also, residents of a state bordering Illinois (Iowa, Indiana, Kentucky, Missouri, Wisconsin) who work or have a place of business in Illinois can be appointed for a one-year term. Notaries must be United States citizens (though the requirement that a notary public be a United States citizen is unconstitutional; see Bernal v. Fainter) or aliens lawfully admitted for permanent residence; be able to read and write the English language; be residents of (or employed within) the State of Illinois for at least 30 days; be at least 18 years old; not have been convicted of a felony; and not have had a notary commission revoked or suspended during the past 10 years. An applicant for the notary public commission must also post a $5,000 bond, usually with an insurance company, and pay an application fee of $10. The application is usually accompanied by an oath of office. If the Secretary of State's office approves the application, the Secretary of State then sends the commission to the clerk of the county where the applicant resides. If the applicant records the commission with the county clerk, he or she then receives the commission. Illinois law prohibits notaries from using the literal Spanish translation of their title and requires them to use a rubber stamp seal for their notarizations. The notary public can then perform his or her duties anywhere in the state, as long as the notary resides (or works or does business) in the county where he or she was appointed.

Louisiana
Louisiana notaries public are commissioned by the Governor. They are the only notaries to be appointed for life. The Louisiana notary public is a civil law notary with broad powers, as authorized by law, usually reserved for the American-style combination "barrister/solicitor" lawyers and other legally authorized practitioners in other states.
A commissioned notary in Louisiana is a civil law notary that can perform/prepare many civil-law notarial acts usually associated with attorneys and other legally authorized practitioners in other states, except representing another person or entity before a court of law for a fee (unless they are also admitted to the bar). Notaries are not allowed to give "legal" advice, but they are allowed to give "notarial" advice - i.e., explain or recommend what documents are needed or required to perform a certain act - and to do all things necessary or incidental to the performance of their civil-law notarial duties. They can prepare any document a civil law notary can prepare (including inventories, appraisements, partitions, wills, protests, matrimonial contracts, conveyances and, generally, all contracts and instruments in writing) and, if ordered or requested to by a judge, prepare certain notarial legal documents, in accordance with law, to be returned and filed with that court of law.

Maine
Maine notaries public are appointed by the Secretary of State to serve a seven-year term. Maine is one of three states (Florida and South Carolina are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony).

Maryland
Maryland notaries public are appointed by the governor on the recommendation of the secretary of state to serve a four-year term. New applicants and commissioned notaries public must be bona fide residents of the State of Maryland or work in the state. An application must be approved by a state senator before it is submitted to the secretary of state. The official document of appointment is imprinted with the signatures of the governor and the secretary of state as well as the Great Seal of Maryland. Before exercising the duties of a notary public, an appointee must appear before the clerk of one of Maryland's 24 circuit courts to take an oath of office. A bond is not required. A notary is required to keep a log of all notarial acts, indicating the name of the person, their address, the type of document being notarized, the type of ID used to authenticate them (or that they are known personally) by the notary, and the person's signature. The notary's log is the only document for which a notary may write his or her own certificate.

Minnesota
Minnesota notaries public are commissioned by the Governor with the advice and consent of the Senate for a five-year term. All commissions expire on 31 January of the fifth year following the year of issue. Citizens and resident aliens over the age of 18 years apply to the Secretary of State for appointment and reappointment. Residents of adjoining counties in adjoining states may also apply for a notary commission in Minnesota. Notaries public have the power to administer all oaths required or authorized to be administered in the state; take and certify all depositions to be used in any of the courts of the state; take and certify all acknowledgments of deeds, mortgages, liens, powers of attorney and other instruments in writing or electronic records; and receive, make out and record notarial protests. The Secretary of State's website provides more information about the duties, requirements and appointments of notaries public.

Montana
Montana notaries public are appointed by the Secretary of State and serve a four-year term. A Montana notary public has jurisdiction throughout the states of Montana, North Dakota, and Wyoming.
These states permit notaries from neighboring states to act in the state in the same manner as one from that state under reciprocity, i.e., as long as that state permits notaries from neighboring states to act in its state. [Montana Code 1-5-605]

The Secretary of State is charged with the responsibility of appointing notaries by the provisions of Chapter 240 of the Nevada Revised Statutes. Nevada notaries public who are not also practicing attorneys are prohibited by law from using "notario," "notario publico," or any non-English term to describe their services. (2005 Changes to NRS 240) Nevada notary duties: administering oaths or affirmations; taking acknowledgments; making use of a subscribing witness; certifying copies; and executing jurats or taking a verification upon oath or affirmation. The State of Nevada Notary Division page provides more information about duties, requirements, appointments, and classes.

In New Jersey, notaries are commissioned by the State Treasurer for a period of five years. Notaries must also be sworn in by the clerk of the county in which they reside. One can become a notary in the state of New Jersey if he or she: (1) is over the age of 18; (2) is a resident of New Jersey OR is regularly employed in New Jersey and lives in an adjoining state; (3) has never been convicted of a crime under the laws of any state or the United States, for an offense involving dishonesty, or a crime of the first or second degree, unless the person has met the requirements of the Rehabilitated Convicted Offenders Act (NJSA 2A:168-1). Notary applications must be endorsed by a state legislator. Notaries in the state of New Jersey serve as impartial witnesses to the signing of documents, attest to the signature on the document, and may also administer oaths and affirmations. Seals are not required; many people prefer them, and as a result most notaries have seals in addition to stamps. Notaries may administer oaths and affirmations to public officials and officers of various organizations. They may also administer oaths and affirmations in order to execute jurats for affidavits/verifications, and to swear in witnesses. Notaries are prohibited from pre-dating actions; lending notary equipment to someone else (stamps, seals, journals, etc.); preparing legal documents or giving legal advice; and appearing as a representative of another person in a legal proceeding. Notaries should also refrain from notarizing documents in which they have a personal interest. By statute, New Jersey attorneys may administer oaths and affirmations.

New York notaries are empowered to administer oaths and affirmations (including oaths of office), to take affidavits and depositions, to receive and certify acknowledgments or proof of deeds, mortgages and powers of attorney and other instruments in writing; to demand acceptance or payment of foreign and inland bills of exchange, promissory notes and obligations in writing, and to protest these (that is, certify them) for non-acceptance or non-payment. They are not empowered to marry couples, their notarization of a will is insufficient to give the will legal force, and they are strictly forbidden to certify "true copies" of documents. Every county clerk's office in New York must have a notary public available to serve the public free of charge. Admitted attorneys are automatically eligible to be notaries in the State of New York, but must make an application through the proper channels and pay a fee. New York notaries initially must pass a test and then renew their status every four years.
Oregon notaries public are appointed by the Governor and commissioned by the Secretary of State to serve a four-year term. Oregon notaries are empowered to administer oaths, jurats and affirmations (including oaths of office), to take affidavits and depositions, to receive and certify acknowledgments or proof of deeds, mortgages and powers of attorney and other instruments in writing; to demand acceptance or payment of foreign and inland bills of exchange, promissory notes and obligations in writing, and to protest these (that is, certify them) for non-acceptance or non-payment. They are also empowered to certify "true copies" of most documents. Every court clerk in Oregon is also empowered to act as a notary public, although they are not required to keep a journal. Oregon formerly required that impression seals be used, but now they are optional; the seal must be in black ink. Beginning in 2001, all Oregon notaries were required to pass an open-book examination to receive their commission. Beginning in 2006, new notary applicants were also required to complete a free three-hour online or live in-person instructional seminar; however, this requirement is waived for notaries who are renewing their commissions, as long as the commission is renewed before its expiration date. Oregon law specifically prohibits the use of the term "notario publico" by a notary in advertising his or her services, but translation of the title into other languages is not restricted.

A notary in the Commonwealth of Pennsylvania is empowered to perform seven distinct official acts: take affidavits, verifications, acknowledgments, and depositions; certify copies of documents; administer oaths and affirmations; and protest dishonored negotiable instruments. A notary is strictly prohibited from giving legal advice or drafting legal documents such as contracts, mortgages, leases, wills, powers of attorney, liens or bonds. Pennsylvania is one of the few states with a successful Electronic Notarization Initiative. For more information, visit the Secretary of the Commonwealth's website. Note that as of 9 January 2011, Pennsylvania is accepting new applicants for this program.

South Carolina notaries public are appointed by the Governor to serve a ten-year term. All applicants must first have that application endorsed by a state legislator before submitting their application to the Secretary of State. South Carolina is one of three states (Florida and Maine are the others) where a notary public can solemnize the rites of matrimony (perform a marriage ceremony).

Utah notaries public are appointed by the Lieutenant Governor to serve a four-year term. Utah formerly required that impression seals be used, but now they are optional. The seal must be in purple ink.

A Virginia notary must either be a resident of Virginia or work in Virginia, and is authorized to acknowledge signatures, take oaths, and certify copies of non-government documents which are not otherwise available; e.g., a notary cannot certify a copy of a birth or death certificate, since a certified copy of the document can be obtained from the issuing agency. Changes to the law effective 1 July 2008 impose certain new requirements: while seals are still not required, if they are used they must be photographically reproducible, and the notary's registration number must appear on any document notarized. Changes to the law effective the same date also permit notarization of electronic signatures.
On July 1, 2012, Virginia became the first state to authorize a signer to be in a remote location and have a document notarized electronically by an approved Virginia electronic notary using audio-visual conference technology, by passing the bills SB 827 and HB 2318.

In Washington State, any resident, or resident of an adjacent state employed in Washington, may apply to become a notary public. Applicants must obtain a $10,000 surety bond and present proof to the Department of Licensing. A notary public is appointed for a term of four years.

Wyoming notaries public are appointed by the Secretary of State and serve a four-year term. A Wyoming notary public has jurisdiction throughout the states of Wyoming and Montana. These states permit notaries from neighboring states to act in the state in the same manner as one from that state under reciprocity, i.e., as long as that state permits notaries from neighboring states to act in its state.

A Maryland requirement that a notary declare his belief in God in order to obtain a commission, as required by the Maryland Constitution, was found by the United States Supreme Court in Torcaso v. Watkins, 367 U.S. 488 (1961), to be unconstitutional. Historically, some states required that a notary be a citizen of the United States. However, the U.S. Supreme Court, in the case of Bernal v. Fainter, 467 U.S. 216 (1984), declared that to be impermissible.

In the U.S., there are reports of notaries (or people claiming to be notaries) having taken advantage of the differing roles of notaries in common law and civil law jurisdictions to engage in the unauthorized practice of law. The victims of such scams are typically illegal immigrants from civil law countries who need assistance with, for example, their immigration papers and want to avoid hiring an attorney. Confusion often results from the mistaken premise that a notary public in the United States serves the same function as a Notario Publico in Spanish-speaking countries (which are civil law countries; see below). Prosecutions in such cases are difficult, as the victims are often deported and thus unavailable to testify.

Certain members of the United States Armed Forces are given the powers of a notary under federal law (10 U.S.C. section 1044). Some military members have authority to certify documents or administer oaths, without being given all notarial powers. In addition to the powers granted by the federal government, some states have enacted laws granting notarial powers to commissioned officers.

Civil Law jurisdictions

The role of notaries in civil law countries is much greater than in common law countries. Civilian notaries are full-time lawyers and holders of a public office who routinely undertake non-contentious transactional work done in common law countries by attorneys/solicitors, as well as, in some countries, the work of government registries, title offices, and public recorders. The qualifications imposed by civil law countries are much greater, generally requiring an undergraduate law degree, a graduate degree in notarial law and practice, three or more years of practical training ("articles") under an established notary, and passage of a national examination to be admitted to practice. Typically, notaries work in private practice and are fee earners, but a small minority of countries have salaried public service (or "government") notaries (e.g., Baden-Württemberg in Germany, certain cantons of Switzerland).
Civil law notaries have jurisdiction over strictly non-contentious domestic civil-private law in the areas of property law, family law, agency, wills and succession, and company formation. The extent to which a country's notarial profession monopolizes these areas can vary greatly. At one extreme is France (and French-derived systems), where notaries statutorily hold a monopoly over their reserved areas of practice; at the other is Austria, where there is no discernible monopoly whatsoever and notaries are in direct competition with attorneys/solicitors. In the few United States jurisdictions where trained notaries are allowed (such as Louisiana and Puerto Rico), the practice of these legal practitioners is limited to legal advice on purely non-contentious matters that fall within the purview of a notary's reserved areas of practice.

Upon the death of President Warren G. Harding in 1923, Calvin Coolidge was sworn in as President by his father, John Calvin Coolidge, Sr., a Vermont notary public. However, as there was some controversy as to whether a state notary public had the authority to administer the presidential oath of office, Coolidge took the oath again upon returning to Washington.

- Articles about common notarial certificates (varies by jurisdiction)
- Peace Commissioner
- Justice of the Peace
- Medallion signature guarantee

- "Notaries Public", Montgomery County, Alabama Probate Judge. Retrieved 20 January 2009.
- "History of the NNA". Retrieved 9 July 2006.
- Notary. (2008). Kent, England: Warners Law LLP. Retrieved 22 January 2009.
- Chapter 1 of Brooke's Notary (13th edition, Stevens, London, 2010).
- "AN APPLICATION BY MARILYN REYES BOS TO BE A PUBLIC NOTARY No. SCCIV-02-1688 SASC 320 (12 September 2003)". Australasian Legal Information Institute, a joint facility of UTS and UNSW Faculties of Law. Retrieved 21 May 2011.
- The Society of Notaries Public of BC. (2011). Becoming a Notary.
- Notaries and Commissioners Act
- Nova Scotia Commissioners of Oaths | Justice | Government of Nova Scotia
- A general overview of the notarial profession in Quebec, taken from the website of the Chambre des Notaires du Quebec.
- The main page for the Chambre des Notaires du Quebec.
- THE NOTARIES RULES, 1956
- The Hindu Business Line: Notes on the notary
- The Notaries Society (England & Wales)
- The difference between a Notary and a Solicitor?
- Law Society of Scotland
- David A. Brand & Michael P. Clancy, The Modern Notary Public in Scotland: Guidance for Intrant Notaries (2009), The Law Society of Scotland.
- Issues and Trends in State Notary Regulation. (2011). National Association of Secretaries of State. pp. 6, 17–18.
- Piombino, Alfred E. (1996). Notary Public Handbook: A Guide for Vermont. n.p.: East Coast Press. p. 91.
- California Government Code §8200.
- California Secretary of State. (n.d.). Notary Public Check List. Viewed 9 January 2008.
- California Government Code §8201.1.
- California Government Code §8211.
- Notary Public Disciplinary Guidelines. (2001). California Secretary of State. p. 25.
- Colorado Secretary of State - Verify a Notary
- Florida Department of State. (n.d.). Marriage ceremony. Viewed 3 December 2006.
- Illinois Secretary of State. (2010). Notary Public Handbook. pp. 4-5.
- Illinois Secretary of State. (2010). Notary Public Handbook. pp. 5-6.
- Louisiana Notary Association
- Maine Department of the Secretary of State. (n.d.). Notary Public Handbook. p. 8. Viewed 3 December 2006.
- South Carolina Office of the Secretary of State. (2005). Duties of a South Carolina Notary Public
- A Handbook for Virginia Notaries Public. (2009). Richmond, Virginia: Office of the Secretary of the Commonwealth.
- "CHAPTER 834". Retrieved 2011-04-06.
- "Frequently Asked Questions About Becoming a Virginia Electronic Notary".
- "Revised Code of Washington Chapter 42.44"
- "Notarial Services". U.S. Army. 10 April 1997. Retrieved 4 June 2009.
- Short Guide for Vermont Notaries Public. (2011). Vermont Secretary of State. p. i.
Chewa, also known as Nyanja, is a language of the Bantu language family. The gender prefix chi- is used for languages, so the language is also known as Chichewa and Chinyanja (spelled Cinyanja in Zambia), and locally as Nyasa in Mozambique.

Chewa is the national language of Malawi. It is also one of the seven official African languages of Zambia, where it is spoken mostly in the Eastern Province. It is also spoken in Mozambique, especially in the provinces of Tete and Niassa, as well as in Zimbabwe where, according to some estimates, it ranks as the third-most widely used local language, after Shona and Northern Ndebele. It was one of the 55 languages featured on the Voyager Golden Record.

An urban variety of Nyanja, sometimes called Town Nyanja, is the lingua franca of the Zambian capital Lusaka and is widely spoken as a second language throughout Zambia. This is a distinctive Nyanja dialect with some features of Nsenga, although the language also incorporates large numbers of English-derived words, as well as showing influence from other Zambian languages such as Bemba. Town Nyanja has no official status, and the presence of large numbers of loanwords and colloquial expressions has given rise to the misconception that it is an unstructured mixture of languages or a form of slang. The fact that the standard Nyanja used in schools differs dramatically from the variety actually spoken in Lusaka has been identified as a barrier to the acquisition of literacy among Zambian children. iSchool.zm, which develops online educational content in Zambian languages, has begun making 'Lusaka Nyanja' available as a separate language of instruction after finding that schoolchildren in Lusaka do not understand standard Nyanja.

Chinyanja has its origin in the Eastern Province of Zambia, from the 15th century to the 18th century. The language remained dominant despite the breakup of the Maravi Empire and the Nguni invasions, and was adopted by Christian missionaries at the beginning of the colonial period. In Zambia, Chewa is spoken by other peoples like the Ngoni and the Kunda, so a more neutral name, Chinyanja, "(language) of the lake" (referring to Lake Malawi), is used instead of Chewa.

The first grammar, A grammar of the Chinyanja language as spoken at Lake Nyasa with Chinyanja–English and English–Chinyanja vocabulary, was written by Alexander Riddel in 1880, and partial translations of the Bible were made at the end of the 19th century. Further early grammars and vocabularies include A vocabulary of English–Chinyanja and Chinyanja–English: as spoken at Likoma, Lake Nyasa, by M. E. Woodward (1895), and A grammar of Chinyanja, a language spoken in British Central Africa, on and near the shores of Lake Nyasa, by George Henry (1891). The whole Bible was translated by William Percival Johnson and published as Buku Lopatulika ndilo Mau a Mulungu in 1912.

A strong historical link between the Nyanja, Bemba, and Yao peoples and the Shona Empire, whose earlier origins can be traced to Mashonaland, remains linguistically evident today. The ancient Shonas who temporarily dwelt in Malambo, a place in the DRC, eventually shifted into northern Zambia, and then south and east into the highlands of Malawi.

| English | Chewa (Malawi) | Town Nyanja (Lusaka) |
| How are you? | | Nili bwino / Nili mushe |
| What's your name? | Dzina lanu ndani? | Zina yanu ndimwe bandani? |
| My name is... | Dzina langa ndine... | Zina yanga ndine... |
| How many children do you have? | Muli ndi ana angati? | Muli na bana bangati? |
| I have two children | Ndili ndi ana awiri | Nili na bana babili |
| How much is it? | | |
| See you tomorrow | | |

- ^ Nationalencyklopedin, "Världens 100 största språk 2007" (The World's 100 Largest Languages in 2007).
- ^ Jouni Filip Maho, 2009. New Updated Guthrie List Online.
- ^ cf. Kiswahili for the Swahili language.
- ^ Williams, E (1998). Investigating bilingual literacy: Evidence from Malawi and Zambia (Education Research Paper No. 24). Department for International Development.
- ^ Woodward, M. E. 1895.
- ^ Henry, George. 1891.
- ^ The Umca in Malawi, p. 126, James Tengatenga, 2010: "Two important pieces of work have been accomplished during these later years. First, the completion by Archdeacon Johnson of the Bible in Chinyanja, and secondly, the completed Chinyanja prayer book in 1908."
- Paas, Steven, 2012. 3rd edition. Dictionary / Mtanthauziramawu. English – Chichewa / Chinyanja // Chichewa / Chinyanja – English. VTR Publications. ISBN 978-3-941750-87-6.
- Mchombo, Sam, 2004. The Syntax of Chichewa. Cambridge Syntax Guides.
- Hetherwick, Alexander (1907). A Practical Manual of the Nyanja Language. Society for Promoting Christian Knowledge. Retrieved 25 August 2012.
- Gray, Andrew; Lubasi, Brighton; Bwalya, Phallen (2013). Town Nyanja: a learner's guide to Zambia's emerging national language.
- Henry, George, 1904. A grammar of Chinyanja, a language spoken in British Central Africa, on and near the shores of Lake Nyasa.
- Laws, Robert (1894). An English–Nyanja dictionary of the Nyanja language spoken in British Central Africa. J. Thin. Retrieved 25 August 2012.
- Rebman, John; Church Missionary Society (1877). Dictionary of the Kiniassa language. Gregg. Retrieved 25 August 2012.
- Riddel, Alexander (1880). A Grammar of the Chinyanja Language as Spoken at Lake Nyassa: With Chinyanja–English and English–Chinyanja Vocabularies. J. Maclaren & Son. Retrieved 25 August 2012.
- Woodward, M. E., 1895. A vocabulary of English–Chinyanja and Chinyanja–English as spoken at Likoma, Lake Nyasa. Society for Promoting Christian Knowledge.
- Missionários da Companhia de Jesus, 1963. Dicionário Cinyanja–Português. Junta de Investigaçôes do Ultramar.
In software development and product management, a user story is one or more sentences in the everyday or business language of the end user or user of a system that captures what a user does or needs to do as part of his or her job function. User stories are used with agile software development methodologies as the basis for defining the functions a business system must provide, and to facilitate requirements management. A user story captures the 'who', 'what' and 'why' of a requirement in a simple, concise way, often limited in detail by what can be hand-written on a small paper notecard.

User stories are written by or for the business user as that user's primary way to influence the functionality of the system being developed. User stories may also be written by developers to express non-functional requirements (security, performance, quality, etc.), though primarily it is the task of a product manager to ensure user stories are captured.

User stories are a quick way of handling customer requirements without having to create formalized requirement documents and without performing administrative tasks related to maintaining them. The intention of the user story is to be able to respond faster and with less overhead to rapidly changing real-world requirements.

A user story remains an informal statement of the requirement as long as corresponding acceptance-testing procedures are lacking. Before a user story is implemented, an appropriate acceptance procedure must be written by the customer to ensure, by testing or otherwise, that the goals of the user story have been fulfilled. Some formalization finally happens when the developer accepts the user story and the acceptance procedure as a work-specific order.

Creating user stories

When the time comes for creating user stories, one of the developers (or the product owner in Scrum) gets together with a customer representative. The customer has the responsibility for formulating the user stories. The developer may use a series of questions to get the customer going, such as asking about the desirability of some particular functionality, but must take care not to dominate the idea-creation process. As the customer conceives the user stories, they are written down on a note card (e.g. 3x5 inches or 8x13 cm) with a name and a description which the customer has formulated. If the developer and customer find a user story deficient in some way (too large, complicated, imprecise), it is rewritten until it is satisfactory - often using the INVEST guidelines from the Scrum project-management framework. However, Extreme Programming (XP) emphasizes that user stories are not to be definite once they have been written down. Requirements tend to change during the development period, which the process handles by not carving them in stone.

Common templates include:

"As a <role>, I want <goal/desire> so that <benefit>"

"As a <role>, I want <goal/desire>"

Chris Matts suggested that "hunting the value" was the first step in successfully delivering software, and proposed this alternative as part of Feature Injection:

"In order to <receive benefit> as a <role>, I want <goal/desire>"

Another template based on the Five Ws specifies:

"As <who> <when> <where>, I <what> because <why>."
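Because these templates are so regular, a story card can be represented as a simple data record. The following is a minimal illustrative sketch in Python; the UserStory class and its fields are invented here for illustration and are not part of any standard or tool:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """A story card following the common 'As a ..., I want ..., so that ...' template."""
    role: str
    goal: str
    benefit: str = ""  # optional, per the shorter template variant

    def render(self) -> str:
        story = f"As a {self.role}, I want {self.goal}"
        if self.benefit:
            story += f" so that {self.benefit}"
        return story + "."

card = UserStory(role="non-administrative user",
                 goal="to modify my own schedules but not the schedules of other users")
print(card.render())
# -> As a non-administrative user, I want to modify my own schedules
#    but not the schedules of other users.
```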
The <what> portion of the user story should use either "need" or "want" to differentiate between stories that must be fulfilled for proper software operation and stories that improve the operation but are not critical for correct behavior.

As a user, I want to search for my customers by their first and last names.

As a non-administrative user, I want to modify my own schedules but not the schedules of other users.

As a mobile application tester, I want to test my test cases and report results to my management.

Starting Application: The application begins by bringing up the last document the user was working with.

As a user closing the application, I want to be prompted to save if I have made any change in my data since the last save.

Closing Application: Upon closing the application, the user is prompted to save (when anything has changed in the data since the last save). As a user closing the application, I want to be prompted to save anything that has changed since the last save so that I can preserve useful work and discard erroneous work.

The consultant will enter expenses on an expense form. The consultant will enter items on the form like expense type, description, amount, and any comments regarding the expense. At any time the consultant can do any of the following: (1) When the consultant has finished entering the expense, the consultant will "Submit". If the expense is under fifty (<50), the expense will go directly to the system for processing. (2) In the event the consultant has not finished entering the expense, the consultant may want to "Save for later". The entered data should then be displayed on a list (queue) for the consultant with the status of "Incomplete". (3) In the event the consultant decides to clear the data and close the form, the consultant will "Cancel and exit". The entered data will not be saved anywhere.

As a central part of many agile development methodologies, such as in XP's planning game, user stories define what has to be built in the software project. User stories are prioritized by the customer to indicate which are most important for the system and will be broken down into tasks and estimated by the developers.

When user stories are about to be implemented, the developers should have the opportunity to talk to the customer about them. The short stories may be difficult to interpret, may require some background knowledge, or the requirements may have changed since the story was written.

Every user story must at some point have one or more acceptance tests attached, allowing the developer to test when the user story is done and also allowing the customer to validate it (a small example sketch follows the list below). Without a precise formulation of the requirements, prolonged nonconstructive arguments may arise when the product is to be delivered.

XP and other agile methodologies favor face-to-face communication over comprehensive documentation, and quick adaptation to change instead of fixation on the problem. User stories achieve this by:

- Being very short. They represent small chunks of business value that can be implemented in a period of days to weeks.
- Allowing the developer and the client representative to discuss requirements throughout the project lifetime.
- Needing very little maintenance.
- Only being considered at the time of use.
- Maintaining close customer contact.
- Allowing projects to be broken into small increments.
- Being suited to projects where the requirements are volatile or poorly understood. Iterations of discovery drive the refinement process.
- Making it easier to estimate development effort.
- Requiring close customer contact throughout the project so that the most valued parts of the software get implemented.
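As an illustration of attaching acceptance tests to a story, here is a minimal, hypothetical sketch of automated checks for the "prompt to save on close" story above. The Editor class and its methods are invented stand-ins for the application under test, not a real API:

```python
# Hypothetical acceptance tests (pytest style) for the story:
# "As a user closing the application, I want to be prompted to save
#  if I have made any change in my data since the last save."

class Editor:
    """Toy stand-in for the application under test."""
    def __init__(self):
        self.dirty = False     # unsaved changes?
        self.prompted = False  # was the user prompted on close?

    def edit(self, text):
        self.dirty = True      # any change marks the document dirty

    def save(self):
        self.dirty = False

    def close(self):
        # The behavior the story demands: prompt only when unsaved changes exist.
        self.prompted = self.dirty


def test_close_prompts_when_unsaved_changes_exist():
    app = Editor()
    app.edit("draft")
    app.close()
    assert app.prompted        # changed since last save -> must prompt


def test_close_does_not_prompt_after_save():
    app = Editor()
    app.edit("draft")
    app.save()
    app.close()
    assert not app.prompted    # nothing changed since last save
```

When all attached tests pass, the developer knows the story is done, and the customer has a concrete, repeatable way to validate it.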
Story maps

A story map is a graphical, two-dimensional product backlog. At the top of the map are big user stories, which can sometimes be considered "epics", as Mike Cohn describes them, and at other times correspond to "themes" or "activities". These grouping units are created by following the user's workflow, or "the order you'd explain the behavior of the system". Vertically, below the epics, the actual story cards are allocated and ordered by priority. The first horizontal row is a "walking skeleton", and what lies below it represents increasing sophistication. In this way it becomes possible to describe even big systems without losing the big picture.

Some of the limitations of user stories in agile methodologies:

- They can be difficult to scale to large projects.
- They are regarded as conversation starters.

User stories and use cases

While both user stories and use cases serve to capture specific user requirements in terms of interactions between the user and the system, there are major differences between them.

| User Stories | Use Cases |

See also
- Acceptance testing
- Extreme Programming
- Use case
- Kanban board
- Agile software development
- INVEST mnemonic

- Daniel H. Steinberg and Daniel W. Palmer: Extreme Software Engineering, Pearson Education, Inc., ISBN 0-13-047381-2.
- Mike Cohn, User Stories Applied, 2004, Addison Wesley, ISBN 0-321-20568-5.
- Mike Cohn: Agile Estimating and Planning, 2006, Prentice Hall, ISBN 0-13-147941-5.
- Davies, Rachel. "Non-Functional Requirements: Do User Stories Really Help?". Retrieved 12 May 2011.
- Patton, Jeff. "The new user story backlog is a map". Retrieved 4 March 2013.
- Cockburn, Alistair. "Walking Skeleton". Retrieved 4 March 2013.
- "Story mapping". Agile Alliance. Retrieved 4 March 2013.
- Advantages of User Stories for Requirements
Family: Hirundinidae, Swallows

Description: ADULT Has bluish black cap and white-lined bluish black back. Note the pale collar, reddish orange cheeks, and dark throat; forehead is white in most birds. Rump is buffy and square-ended tail is dark. Underparts are mostly pale with darker spots on undertail coverts. JUVENILE Duller than adult, with unmarked back; lacks reddish elements of facial plumage and has paler rump. Throat is dark (cf. juvenile Cave).

Dimensions: Length: 5-6" (13-15 cm)

Habitat: Common summer visitor (mainly Apr-Sep) to a wide range of habitats. Winters in South America.

Observation Tips: Easy to find.

Range: Texas, Rocky Mountains, Southwest, Southeast, Alaska, California, Eastern Canada, Mid-Atlantic, Florida, Great Lakes, Northwest, Western Canada, Plains, New England

Voice: Utters various soft twittering notes.

Discussion: Compact swallow with broad-based, triangular, and relatively short wings. Pale orange-buff rump, obvious in flight, is diagnostic across much of range, but beware of confusion with Cave Swallow within that species' limited North American range. Nests colonially, making mud nests on cliffs and manmade structures. Catches flying insects on the wing. Sexes are similar.
The Price of European Immigration
FrontPage Magazine
28 June 2012

In his 2008 book Et Delt Folk ("A Nation Divided"), the Danish historian and writer Morten Uhrskov Jensen carefully went through publicly available sources. He demonstrated that the opening up of his country for mass immigration was arranged by just part of the population, sometimes in the face of considerable popular opposition. Roughly speaking, those representing the political and media establishment and the upper classes were in favor of open borders, whereas those from the lower classes were often opposed. This divide is viewed by those from the upper segments of society as caused mainly by racism, prejudice, ignorance and xenophobia. Since the educated classes enjoyed a virtual hegemony over public debate, they were able to define all opposition as hate and intolerance, exemplified by people such as Pia Kjærsgaard of the Danish People's Party.

The well-to-do themselves rarely lived in areas with many immigrants and could afford to move, at least for a while, if that was needed. They focused on the abstract and allegedly humanitarian aspects of mass migration. Immigrants are simply referred to as "new countrymen," who as if by magic always seem to enrich the natives with their presence. In Denmark, multiculturalists have successfully managed to establish the neologism nydansker or "new Danes," a vibrant new breed of people currently displacing the tired and boring "old Danes." For poorer people, immigration was a concrete issue, as immigrants moved into their neighborhoods and went to school with their children. To put it bluntly, for those with money, globalization initially meant that they could travel on holidays to exotic lands and treat the world as their playground. For those who were less well off, it meant that the entire world suddenly moved into their street and took over their children's local playground.

When the Titanic, during her maiden voyage across the Atlantic Ocean, struck an iceberg just before midnight on 14 April 1912, the first people who could see the water pouring in were the third-class passengers who happened to be situated closest to the waterline. Meanwhile, the richest passengers at the top were drinking fine cognac long after the ship had started sinking. They didn't realize what was going on for quite some time, because they were further removed from the physical problem. The poor passengers still unfortunately suffered the highest fatality rates, because the wealthy benefitted from having privileged access to the lifeboats. We see the same phenomenon on display today, on a much larger scale. Having Islamophobia in Europe today is just as rational as having icebergophobia on board the Titanic in 1912.

Uhrskov Jensen in 2012 published another book, Indvandringens Pris ("The Price of Immigration"), about how much money non-European mass immigration costs his native Denmark. His conclusion is that this cost is great in terms of welfare payments and rising crime combined with declining efficiency and technological innovation. He shows through carefully researched statistics that only certain Asian immigrants are able to keep up with northern Europeans in the educational system. A few skilled immigrants from India or elsewhere can compete, but mainly those from East Asia: Japanese, Koreans, Chinese, and to some extent Vietnamese. All other non-Western immigrants show lower levels of skill and competence than Europeans, many of them a lot lower.
It should be mentioned here that these numbers correlate quite well with average IQ, on which a few other Asian groups, primarily East Asians, can compete with Europeans. Other ethnic groups cannot do so. Although it has become taboo to say this in the modern Western world, it is a well-documented fact that IQ correlates well with economic level, for individuals as well as for nations. The scholar Charles Murray has written much about this. Former professor Helmuth Nyborg at Aarhus University in Denmark has conducted controversial research on the subject of the genetic inheritance of intelligence. His conclusion is that today's mass immigration of non-Europeans will lead to an overall marked decline in the average intelligence of the population, and by extension a significant decline in social and economic competence, scientific progress, as well as technological innovation.

For decades Westerners have been told that immigration from less developed Third World countries is "good for the economy" and will "pay for future pensions." Morten Uhrskov Jensen proves conclusively that this claim is fundamentally wrong, not just regarding Denmark or Scandinavia but for other Western countries, too. Certain private companies may enjoy short-term benefits by having access to cheap labor and borderless export markets. Socialist parties can cynically import a reliable voter base of backward peoples who overwhelmingly vote for left-wing parties so they can receive generous welfare payments from the high tax payments extracted from the majority population, essentially forcing the white natives to fund their own colonization by foreign peoples. For the country as a whole, however, non-European mass immigration will in the long run turn out to be an unmitigated social and economic disaster.

The direct and indirect costs of today's immigration policies through rising crime, increased corruption and higher welfare costs, plus declining competitiveness, innovation and genetic intelligence, add escalating costs to countries already in trouble due to rising deficits and mushrooming debt. A Danish think tank has estimated that the net cost of immigration is as much as 50 billion kroner every year, and those were cautious estimates. A study from Denmark found that every second immigrant from the Third World – especially from Muslim countries – lacked the qualifications for even the most menial jobs on the labor market.

An ever-growing group of non-Western immigrants in Norway is dependent on welfare. This was the conclusion of a study by Tyra Ekhaugen of the Frisch Centre for Economic Research. Ekhaugen's research contradicted the common assertion that the labor market depends increasingly on immigrants. The study indicated the reverse.

I have previously written about the costs of mass immigration several times, for instance in the essays When Danes Pay Danegeld: The End of the Scandinavian Model or What Does Muslim Immigration Cost Europe? Yet Erling Lae, an openly gay politician for the Conservative Party and then the head of the Oslo city government, warned that the city desperately needs more immigrants and that there would be "complete chaos" without them. In 2005, Trygve G. Nordby, who has worked for the Socialist Left Party, as the director of the Norwegian Directorate of Immigration (UDI), claimed that the country needed more unskilled immigrants and should actively seek them out.
It later emerged that UDI under Nordby's rule had virtually run its own private immigration policy, in violation of national law, in order to give Iraqi immigrants the right to settle in Norway.

Journalist Halvor Tjønn from the newspaper Aftenposten, one of the few genuinely critical journalists in Norway, who later published a fairly realistic biography of Muhammed, in 2006 cited a report from NHO, the Confederation of Norwegian Enterprise. NHO warned that the current immigration policies constitute a serious threat to the country's economy. Norway is one of the world's largest exporters of oil and natural gas due to its offshore resources in the North Sea and elsewhere. Yet according to NHO, there is a risk that much of the profit Norway earns from selling oil could be spent on paying welfare for its rapidly growing immigrant population.

These warnings went unheeded by political leaders, yet the problem hasn't gone away. In 2012, the business daily Dagens Næringsliv reported that researcher Erling Holmøy from Statistics Norway, together with senior advisor Birger Strøm, had studied how immigration affects government budgets. They concluded that in the long run it would prove to be very costly, stating that mass immigration bears certain similarities to a pyramid scheme.

Author Morten Uhrskov Jensen states that the basic trends are identical in Sweden, France, Germany and the USA. The only reasonable conclusion to be drawn from this is, in his view, to stop all non-Western mass immigration. Yet the Western political elites continue to promote such mass immigration, in spite of mounting evidence that this is greatly harmful to their own countries. This dangerous stubbornness could be due to ideological blindness, or to the fact that the political elites see their positions, prestige and personal privileges as tied to maintaining the status quo. In the end, the historian Uhrskov Jensen fears that only a massive traumatic event or a major shock to the system can change the direction in which the Western world is currently headed and reestablish reasonable and sensible immigration policies that are in line with the long-term interests of the European majority population.
"To turn the tables" is a figure of speech meaning "to put someone else in the predicament that we have been occupying, or a into a similar one." The term arose during the early 1600s, and seems to have been a reference to a card or board game in which a player, when at a disadvantage, might reverse the position of the board and thereby shift the disadvantage fo his opponent. A similar possibility holds that the original sense of the expression might be the same as we now mean by "duplicate", as in the game duplicate bridge, wherein after a series of hands of cards had been played, the table was turned and the same series of hands was replayed, each player holding the hand previously held by an opponent. There's another interesting theory that proposes that the expression is derived from an ancient fad in Roman men of purchasing costly tables. When a wife was chided for an expensive purchase of her own, then, she would "turn the tables" by reminding him of his extravagance.
An ancient proverb says that the gloom is heaviest immediately before the dawn. Indeed, in the history of Bulgaria, the year 1876 was seemingly one of the gloomiest eras, filled with bloodshed, suffering, and horrors. Why? "When the fruit is brought forth," the (Heavenly) Farmer "immediately...putteth in the sickle, because the harvest is come" (St. Mark 4:29).

A Russian newspaper thus wrote the following, with regard to this fateful year: "Recently, in neighboring Bulgaria, a pogrom has been underway against the Christians, which—in the words of one of our Hierarchs—has taken us back to the times of the ancient Christian martyrs. Hundreds of Bulgarian towns and villages are in their death throes and have been drowned in blood. Thousands of men, tens of thousands of old people and women, maidens and children, have been slaughtered, burned alive, or taken into captivity as slaves. Many of the enslaved were forcefully converted to Islam, though not a few preferred death to Islam. In the monasteries and convents, monks and nuns have been cut to pieces; on the roads innocent children are murdered only for having crossed themselves as Orthodox Christians; virgins are raped and burned alive at the stake; unborn babies are cut out of their mothers' bellies with the sword; and infants are slashed in two or impaled on the yataghan; those whose Bulgarian Faith has remained ineradicable are uprooted from amongst the living."

From amongst the unknown Martyrs for Faith and kin in 1876, a lustrous constellation shines over the land of Bulgaria even to this day: that of Batak, a name both dear and unforgettable for every Bulgarian Christian soul! The duration of the Batak massacre was but several days. On the night of May 1, 1876 (Old Style), Batak shone forth like a new sun from the conflagration of the Bashibazouks' vengeance, illuminating henceforth and for all ages, by its martyrdom, our Christian history.

The Batak Golgotha began from the lower end of the village—from the Martyrs in Bogdan's house. Disarmed by means of deception, the citizens of Batak, lively at the outset, now become Christ's lambs, doomed to slaughter. Only those children who immediately agreed to accept Islam, upon being asked, were spared their lives. The torturers took even the last shirt or chemise from the Martyrs' backs, as though to let their souls fly toward the heavens unburdened of all earthly weight. And, by God's Grace, moments before their demise, heavenly peace descended into the souls of these sufferers (who until then had been weeping and screaming), by their firm decision to be faithful to Christ unto death. One by one, they went to the chopping block in silence. Some pressed their necks tightly to the block, so that the blow might more definitely separate their souls from the flesh. A few mothers pushed their own children forward to be slain before they themselves were killed, so as to be assured that their children would not be taken into Moslem households and lose their Faith, together with their souls. When attempts were made to ravish them moments before their deaths, the maidens of Batak resisted like lionesses, so as to preserve their virginal purity to the last breath. Thus, they were slashed into pieces. At one side of the chopping block rose mountains of martyred bodies, swimming in pools of blood; and separately, on the other side, lesser mountains, consisting of the martyrs' heads, with their eyes half-open, as if looking up towards Heaven itself.
A special Child Well-Being Index (CWI) report from Duke University tracks trends over time in middle and high school students' exposure to peer-to-peer violence in schools. Key findings include:

- From 1991 to 2009, more teens in middle and high schools were threatened than actually injured; these trends merged in 2010, so that threats and injuries now occur at the same level.
- Trends in the numbers of middle and high school students exposed to violence began to increase in 2002 and 2003, peaked between 2007 and 2008, and began to decrease around 2008 and 2009. Violence in schools had also increased in the early 1990s, prior to the more recent peaks.
- The annual numbers of teens injured without a weapon showed the greatest fluctuation. Injury without a weapon increased dramatically from 2003 to 2007, flattened out in 2008-2009, and decreased slightly in 2009-2010.

Visit our website to read more about the report.

New Book on the CWI!

The Well-Being of America's Children: Developing and Improving the Child and Youth Well-Being Index, edited by Kenneth C. Land, has been released by Springer. This is the first book to address the development and refinement of the CWI to understand how the well-being of America's children can be measured and improved.
We don't think about this very much, but as I was looking at some dates related to my philatelic pursuits, it struck me that a lot of records probably wouldn't have been dated on a Sunday. But how do we know what day of the week any particular date fell on in history?

UNIX systems include a utility called "date" in almost all distributions. For those of you who don't have UNIX or Linux systems handy (the great majority of genealogists I know), take a look at a site currently called The return of Calendar. Using a cgi script, the site calls the date function on the web server with year numbers dating as far back as 1582. The output is a full year's calendar in the Gregorian calendar (the calendar that most of the Western world uses today).
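If you would rather compute a weekday yourself, here is a small illustrative sketch in Python (my own example, not from the site above). Python's datetime module uses the proleptic Gregorian calendar, so dates recorded under the Julian calendar before a region's switch-over need to be converted first:

```python
from datetime import date

# What day of the week was July 4, 1776?
# (Proleptic Gregorian calendar; Julian-calendar records made before a
# region adopted the Gregorian calendar must be converted first.)
d = date(1776, 7, 4)
print(d.strftime("%A"))  # prints "Thursday"
```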
How strong is the strong force?

I bet you think you asked a simple question. The simple answer is that the strength depends on the range over which it is acting. At short distances the strong force is weak and at long distances it is strong. That is completely different from the other three forces and arises because the force's transmitters, called gluons, are massless and carry strong-force charge. I hope that you are still interested in the more complicated answer given below, in which I try to explain how this can be so.

The strong force attraction between two protons has a complicated shape which depends on the distance between the protons. The strong force between two protons is partially offset by the repelling electromagnetic forces. The strong force binds the protons with about 25 MeV of energy. The electromagnetic forces repel them with slightly less. The result is that about 1 MeV of energy would be required to split the two protons apart. In the rest of this reply I discuss the fundamental forces in more detail so you can get an idea why the strong force is different from the others.

The four forces of nature are the strong force, the electromagnetic force, the weak force, and the gravitational force. We study the first three (and experience the last) at Fermilab. We are most familiar with gravity and second-most familiar with the electromagnetic force in our daily routine. So I will start by comparing the strength of those two and then show how they compare to the weak and strong forces.

First of all, the strength of a force depends on the distance over which it is acting. For gravity, the force exerted by one object on another drops according to the square of the distance between the two objects. The equation for the force exerted by gravity is:

F(gravity) = -GMm/(r^2)

where G is a small constant, M and m are the masses of the two objects, and r is the distance between them. The minus sign merely indicates the force is attractive. We say the "range" of the gravitational force is "unlimited" because it can be exerted over an arbitrarily large distance. It just gets smaller the further the two objects are from each other.

The electromagnetic force has a similar formula. The repulsive force between two electrons is:

F(EM) = Cee/(r^2)

where C is a big constant, and e (typed in once for each of the two charges) is the charge of the electron. Notice the strength of the force drops with the distance between the charges in a way identical to gravity. Also, if we were talking about an electron and an anti-electron (which has the opposite charge), then there would be a minus sign indicating the force between opposite charges is attractive.

We can compare the strength of the gravitational force to the electromagnetic force on two electrons by taking the ratio between the two forces. The distance-squared cancels out and we are left with:

F(gravity)/F(EM) = Gmm/Cee

I intentionally dropped the minus sign; I will simply remember that the gravitational force between the electrons is attractive and the electromagnetic force between the two electrons is repulsive. Anyway, when I plug in the values for G, m, C, and e, the ratio is 2.4x10^(-43). In words that is pronounced two-point-four times ten to the minus forty-three. That is a very small number. In other words, the gravitational force between two electrons is feeble compared to the electromagnetic force. The reason that you feel the force of gravity, even though it is so weak, is that every atom in the Earth is attracting every one of your atoms and there are a lot of atoms in both you and the Earth.
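As a quick numerical check of that 2.4x10^(-43) figure, here is a short Python sketch; the constant values below are standard textbook numbers supplied by me, not taken from the original answer:

```python
# Ratio of the gravitational to the Coulomb (EM) force for two electrons:
# F(gravity)/F(EM) = G*m*m / (C*e*e), independent of the separation r.
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
C = 8.988e9        # Coulomb constant, N m^2 / C^2
m = 9.109e-31      # electron mass, kg
e = 1.602e-19      # electron charge, C

ratio = (G * m**2) / (C * e**2)
print(f"{ratio:.2e}")   # prints 2.40e-43, matching the value in the text
```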
The reason you aren't buffeted around by electromagnetic forces is that you have almost the same number of positive charges as negative ones, so you are (essentially) electrically neutral. The weak force is misnamed. It's thought to be just as strong as the EM force but, unlike the EM force, it's a short-ranged force. In fact, the range is only about 1/100 the size of an atomic nucleus. The weak force is outside the realm of our everyday experience. We study it at Fermilab by using the accelerator to produce the particles which transmit the force. These are real particles called the W-boson and the Z-boson. Because they are very massive, we need a high-energy accelerator to produce them. The large mass of the W-boson and the Z-boson is also the reason the force has a short range. Incidentally, the particle which carries the EM force is called the photon (yes, light). Because photons are massless, the EM force has a long range as I described above. The weak force and the EM force have been found to be linked at high-energy or, equivalently, short range. They both can be described by one set of equations which we call the "electro-weak" theory. This was discovered in 1967-1971 by Steven Weinberg, Sheldon Glashow, and Abdus Salam. They got the Nobel Prize in physics for unifying those forces. Finally I am ready to talk about the strong force. This is way out of the experience we get in everyday life (not that it doesn't have everyday life consequences), so I will be a little more long-winded in describing it. Remember that a proton or neutron is composed of three quarks? These quarks have strong charge and are bound together by the strong force. Unlike the case of the EM force, where there is one electric charge and one anti-charge (plus and minus charges) there are three strong force charges and three anti-charges. We call the strong force charges "red", "blue", and "yellow" and the anti-charges are called "anti-red" and so forth. The particles which transmit the force are called gluons. Gluons are massless, like the photon. But unlike the photon, which is electrically neutral, the gluons carry strong charge and a different strong anti-charge. A gluon could be "red-anti-blue", for example, and there are eight kinds of gluons. We call the three charges "colors" even though they have nothing to do with how we see. Because the gluon is massless, at first you might think the range of the strong force is infinite, like the EM force. But if you study the behavior of the strong force, you find that the three quarks in a proton or neutron behave almost as if they were bouncing around freely in a relaxed, elastic spherical container. None of the quarks can escape the container because when the quark reaches the boundary of the proton or neutron, the force begins to act and gets stronger and stronger the further away the that quark gets from the others. That is very different from the other forces which get weaker at longer distances and it occurs because the gluons have the color and anti-color charge. The strong force also acts between protons and neutrons in an atomic nucleus much in the same way that simple chemicals are held together by the electric force. A nucleus such as helium, which has two (positively EM-charged) protons, is stable because the strong force overcomes the electromagnetic forces. The strong force binds the two protons with about 25-35 MeV of energy. The electromagnetic forces try to push the protons apart. 
The net result is that approximately 1 million electron-volts of energy are needed to separate the two protons. In contrast, an electron is bound to a proton in a hydrogen atom by only a few electron-volts. By now you know enough to consider the size of the nucleus in comparison to the size of an atom to judge if this is truly a fair comparison! The strong force is, indeed, strong.

We think that if we could study the electroweak and strong forces at high enough energy we would find out they were linked together somehow, like electricity and magnetism are to form EM, and like EM and the weak force are to form the electro-weak force. Such a theory would be called a grand-unified theory. And we also think that it may be possible to include gravity with the other three. Such a theory would be called a super-grand-unified theory, and there is a candidate for that called "superstrings".

So you asked a simple question: "How strong is the strong force?" The answer is that it depends on the range. At short distances it is weak and at long distances it is strong. That effect is completely different from the other three forces and arises because the force's transmitters, called gluons, are massless and carry a strong charge and a different strong anti-charge.

If you want to learn more about particle physics and the work we do at Fermilab, the book "The God Particle" by Leon Lederman and Dick Teresi gives a very good and readable explanation.
You hold in your hands the definitive version of Writings of Halfard. It is one of the most famous books in history, and with good reason, being a chronicle of a period of time in the Fourth Fused Universe, 1419-1819. This was, of course, the time of Puff. This version of the book holds all materials ever incorporated into it. This means it will be quite a bit thicker than most university editions, which are non-definitive.

Halfard was Puff's Elfish secretary. He lived from about 40,000,000 BC to 2487 AD. The circumstances in which he wrote this are unclear, but it may be assumed that most of it was written from his notes, and the rest was gradually fitted in between his writing and the posthumous publication in 2493 AD. Halfard was an interesting person, in that he wrote such an important book but is barely mentioned in other writings, or even in this one. We shall, sadly, know very little about him, but what we do know is sufficient for now. Since Halfard was an Elf, we do fortunately have that odd Elfish style which is a sure sign of anyone's origin. This is mainly how we know certain things were added after the original writing.

If Halfard is interesting, Puff is more so. He was born in the dark depths of Russia, and his family emigrated to England in the Yaga-Gnomic War of the 800's. St. George then promptly killed his family. This leads to an interesting thought. Puff is known for being a great lover of freedom and equality of peoples. But he intensely hated knights, and, during his government, launched wars for their destruction. These contradictions are interesting, but I should not deign to address everything in the introduction. Good reading.

John Kivvers, Editor.

All material in this book, except that written by myself, was originally written in a foreign language. I have attempted to translate it as literally as possible without making it unintelligible. If a line has been left in its original language, that is because it is a foreign language to the author, and was intentionally written in that language. Any of these will be translated in the appendix.
USGS Multimedia Gallery Title: Deep-Sea Cold Water Coral Description: Fish like this Atlantic Roughy (Hoplostethus occidentalis) congregate near deep-sea corals (background is Lophelia pertusa coral). Usage: This image is public domain/of free use unless otherwise stated. Please refer to the USGS Copyright section for how to credit the photo.
fwe2-CC-MAIN-2013-20-43898000
To successfully manage our valued resources, the following efforts are being undertaken. Encouraging a healthy natural community, which increases both plant and wildlife diversity and strengthens the communal ecosystem of each, is the goal of our forestry management efforts; professional forestry management is critical in managing the division's natural resources. In partnership with the Wildlife Resources Division and USDA Wildlife Services, State Parks & Historic Sites has been managing deer populations since 2003. Our primary method of deer reduction is to enlist the use of hunters during managed quota hunts. In coordination with forestry management and deer population control, prescribed fire is a major tool in both habitat health and diversification of species. In 2006, through an arrangement with the Vegetative Management Services Section of Georgia Power, the Division arranged a contract that provides access to arborists, tree crews, emergency equipment and certified operators 24 hours a day in any location within the state. This service is funded through timber revenue. Two significant restoration grants funded by the National Fish and Wildlife Foundation include the following. The first project was designed to attract birds, especially Sandhill Cranes, to overnight in a protected habitat area on their traditional migrating routes; it created a 15-acre seasonally inundated impoundment with gated water controls that allow park staff to control water levels at times of flooding. The second project began in 2005. Five sites were identified; slash or plantation-type pine stands were removed and replaced by planting containerized longleaf pine seedlings and wiregrass to restore a traditional longleaf-wiregrass community. USDA Wildlife Services provides many wildlife management services through an annual contractual agreement. Wildlife Services has worked with our Division as a critical team member in our Red Top Mountain deer reduction program. Additional services provided include the lethal control of beavers, whose impounded waterways and streams can cause flooding or endanger structures. Many sites require the removal of feral cats, diseased raccoons and pigeons. Annually, Wildlife Services re-locates 150+ Canada Geese from our swimming beaches, day-use areas and golf courses. The re-location eliminates fecal contamination issues for swimmers, picnickers, boaters and golfers. In 2007 Wildlife Services partnered with us to remove wildlife predators that were endangering newly hatched gopher tortoises at Reed Bingham State Park. This work proved very successful in increasing our survival rates to the highest levels on record.
fwe2-CC-MAIN-2013-20-43905000
No one seemed less well-cast for the role of reformer, in an age of reform, than Abraham Lincoln. To begin with, he was a stranger, emotionally and intellectually, to evangelical Christianity, the great engine of reform in the nineteenth century. Raised in a household of uncompromisingly Calvinistic Baptists who abhorred slavery, the young Lincoln rejected the authority of any religion and never joined any religious congregation. His stepmother, Sarah Bush Lincoln, remembered that her stepson "had no particular religion," and when pressed on the subject, Lincoln himself would only say that "when he did good he felt good, when he did bad he felt bad." That, said Lincoln, "is my religion." Lacking that impetus, Lincoln had little interest in the network of reform movements spun off by evangelical revivals. He was a teetotaler, but largely on grounds of health rather than moral purity, since drink, to him, tasted "unpleasant and always leaves me flabby and undone." He shunned all of the temperance societies of his day, except for the Washington Temperance Society, and even then, he espoused the Washingtonian movement only because of its secularized strategy of persuasion rather than condemnation. "The warfare heretofore waged against the demon of Intemperance, has . . . been erroneous," Lincoln said to a gathering of the local Washingtonians in 1842. "Too much denunciation against dram sellers and dram-drinkers was indulged in," and precious few flies were attracted by the vinegar of damnation, compared to the Washingtonians' preference for the sugar of persuasion. No would-be reformer should demand such a moral volte-face from the sinner; this would be to "expect a reversal of human nature," and Lincoln had none of the reformers' enthusiastic confidence that people could be upbraided into acts of disinterested benevolence. "What an ignorance of human nature does it exhibit, to ask or expect a whole community to rise up and labor for the temporal happiness of others," Lincoln warned. In the same way, Lincoln opposed slavery, going on record against it for the first time in 1837 when he joined with one other member of the Illinois state legislature in criticizing "the institution of slavery" as "both injustice and bad policy," and twenty-seven years later, he would still be insisting that "I am naturally anti-slavery. If slavery is not wrong, nothing is wrong. I cannot remember when I did not so think, and feel." But he joined no anti-slavery society, and he condemned as reckless the abolitionists' demands for an immediate elimination of slave-holding. His 1837 protest against slavery was followed immediately by the balancing concession that "the promulgation of abolition doctrines tends rather to increase than to abate its evils." As late as 1862, he told Horace Greeley that the most effective way to end slavery was through a stage-by-stage buy-out by the federal government. Ending slavery should have "three main features – gradual – compensation – and [the] vote of the people," and should be urged "persuasively, and not menacingly, upon the South." There was, deep in the grain of Lincoln's temperament, a prudence that resisted the demand that conversion and enlightenment be embraced now, totally, without any reckoning of the cost.
He chided "Free Soil men" in 1848 for "declaring that they would 'do their duty and leave the consequences to God,'" since this proclamation of the relentless principle of fiat justitia ruat coelum ("do justice though the heavens fall") merely gave an excuse for taking a course that they were not able to maintain by a fair and full argument. To make this declaration did not show what their duty was. If it did we should have no use for judgment, we might as well be made without intellect, and when divine or human law does not clearly point out what is our duty, we have no means of finding out what it is but by using our most intelligent judgment of the consequences. Much as he might cheer on temperance and emancipation, Lincoln was too much a "fatalist," too much a believer that human behavior was guided by selfishness and self-interest, to be confident that the key to the new Jerusalem lay within Americans' grasp, if only they would put forth the will to seize it. Abolitionists feared that view as the real enemy to their cause. The Brahmin abolitionist Wendell Phillips complained angrily against "these men" who "are ever parading their wish to draw a line between themselves and us, because they must be permitted to wait, - to trust more to reason than feeling,—to indulge a generous charity,—to rely on the sure influences of simple truth, uttered in love, &c., &c." It was the duty of convinced abolitionists, wrote Arthur Tappan (the wealthy bankroller of the American & Foreign Anti-Slavery Society) in 1832, to "inculcate everywhere, the great fundamental principle of immediate abolition," to "insist principally on the sin of slavery," and "reprobate the idea of compensation to slave holders, because it implies the right of slavery. . . . The duty of whites in regard to this cruel prejudice is not to indulge it, but to repent and overcome it." Let "the woful, blood-stained facts" about slavery "be spread out" and "let the tale of a slave's wrongs enter the ear," declared Elizur Wright a year later, and converts to the gospel of abolition would "rise up" to overthrow the idol of slavery—and, as Lincoln feared, "trust God for the consequences." No wonder, then, that so many of the abolitionist faithful found Lincoln unexciting. "I do not believe in the anti-slavery of Abraham Lincoln," wrote the black Illinois abolitionist H. Ford Douglass, "because he is on the side of this Slave Power . . . that has possession of the Federal Government." Wendell Phillips dismissed Lincoln as "not an Abolitionist, hardly an antislavery man," and tolerable only to the extent that he "consents to represent an antislavery idea." And yet, it is the name of Abraham Lincoln that appears at the bottom of the most sweeping act of reform in the American nineteenth century, the Emancipation Proclamation; and it was Abraham Lincoln who, as president, strong-armed a reluctant Congress into adopting a Thirteenth Amendment to the federal Constitution banning slavery completely. "There are four millions of people in this country who now regard Abraham Lincoln as their deliverer from bondage," declared Massachusetts Congressman George Boutwell, four days after Lincoln's death, "and whose prosperity, through all the coming centuries, will render tribute of praise to his name and memory." But the contrast between the reality of a chilly and uncommitted Lincoln on one hand and the image of the Great Emancipator on the other continues to pose far more difficulties for placing Lincoln in the line of reformers than Boutwell anticipated.
The principal difficulty in understanding Lincoln’s place as a reformer may lie in how easy it is to miss his enthusiasm for a kind of reform that later generations have not often classified as a reform at all, but which was the real engine behind Lincoln’s anti-slavery beliefs. When the American republic emerged from its colonial cocoon as an independent nation, its economic structure remained very much as British imperial planners had designed it—overwhelmingly agricultural, chronically dependent on imported manufactures, with poor internal transportation, and very little in the way of banks and investment capital to fund economic growth. When the United States went to war a second time with Great Britain in 1812, its feeble economic infrastructure virtually collapsed under British pressure. As early as the 1790s, Alexander Hamilton, the first secretary of the Treasury, argued for the promotion of manufacturing and banking by the federal government as the surest road to security from the great empires all around America’s frontiers. But Hamilton encountered strenuous resistance from Thomas Jefferson, the first secretary of state, who argued that an agricultural economy was precisely what promoted civic virtue and discouraged a different kind of peril to liberty within, from concentrations of too much economic power in too few hands. In a republic where farmers made up 90 percent of the population, Jefferson’s arguments had long innings. However, the debacle of the War of 1812 convinced many Americans that economic backwardness spelled an imperiled future for American liberty and independence. By the mid-1830s, American politics had become polarized into two well-organized political parties, Democrats (the heirs of Jefferson and acolytes of Andrew Jackson) and Whigs (led by Henry Clay). The Whigs promoted a three-point program of banking (to generate the capital needed for creating new manufacturing enterprises), tariffs (to protect the new manufactures from foreign competition) and “internal improvements” (transportation projects, funded by the government, to connect the rural hinterlands with manufacturers, and connect farmers with markets instead of merely subsisting on their own produce). And from his first moments of political awakening, it was the Whigs to whom Lincoln gravitated. When the Whig Party foundered in the mid-1850s, Lincoln attached himself to the new Republican Party because of its anti-slavery stance; but he was also attracted by how the Republicans embraced the Whigs’ economic policies. What the Republican agenda offered Lincoln was an entirely different species of reform—the transformation of the self. People like Lincoln, born in backwoods poverty, could climb the economic ladders of opportunity offered by markets and economic development to make for themselves landscapes entirely different from the isolated drudgery of the hinterlands. “We stand at once the wonder and admiration of the whole world,” Lincoln said in 1856. And why? “This cause is that every man can make himself.” Liberty, for Lincoln, was economic liberty, and the genius of the American republic was the allowance it made for everyone to re-make themselves. And no better example of that re-making existed than Lincoln himself. “There is no permanent class of hired laborers amongst us,” Lincoln insisted in 1859: Twenty-five years ago, I was a hired laborer. The hired laborer of yesterday, labors on his own account to-day; and will hire others to labor for him to-morrow. 
Advancement — improvement in condition — is the order of things in a society of equals. Lincoln was less afraid than his Democratic peers that a society in which some people could transform themselves into the prosperous and wealthy meant that others would be transformed downward into poverty. “I don’t believe in a law to prevent a man from getting rich,” he told an audience of workingmen in 1860, “We do not propose any war upon capital.” What he wished instead was to give “the humblest man an equal chance to get rich with everybody else.” And he did not mind aiding “the humblest man” through the three-fold government-sponsored mechanism of “internal improvements,” banking, and tariffs. Government could—and should—“do for a community of people, whatever they need to have done, but cannot do, at all, or cannot, so well do, for themselves — in their separate, and individual capacities.” But that responsibility pointed government in the direction of economic enablement, not (as the Jacksonians wanted) economic restraint, and if after all the efforts of enablement had been expended, some people still remained mired in poverty, Lincoln saw no virtue in shedding tears over the failures. “If any continue through life in the condition of the hired laborer, it is not the fault of the system,” he told the Wisconsin State Agricultural Fair in 1859, “but because of either a dependent nature which prefers it, or improvidence, folly, or singular misfortune.” Some of you will be successful, and such will need but little philosophy to take them home in cheerful spirits; others will be disappointed, and will be in a less happy mood. To such, let it be said, “Lay it not too much to heart.” Let them adopt the maxim, “Better luck next time;” and then, by renewed exertion, make that better luck for themselves. It was exactly the unpredictable spiraling of “better luck” in markets and manufacturing that appalled Jacksonian Democrats and fueled their resistance to banks, tariffs, and highways. And among slaveholders in the American South, the Whig agenda generated fears that a government big enough to build roads, levy tariffs, and charter banks would also turn out to be a government big enough to emancipate their slaves. And so a fateful alliance was struck between Jackson’s Democrats and the plantation oligarchy of the South that persisted all the way to the doorstep of the Civil War. But it was also Lincoln’s advocacy of a market-driven society that lay at the root of his hostility to slavery, for if slavery was anything, it was a loathsome determination on the part of the slaveholder to deny at least one class of human beings—namely, black slaves—all hope of self-transformation, or even to deny that African Americans had even the capacity to improve themselves. 
Like most white Americans of his day, Lincoln took the superior "intellectual endowment" and "physical difference" of white people for granted; unlike many white Americans, however, he also insisted that "there is no reason in the world why the negro is not entitled to all the natural rights enumerated in the Declaration of Independence," and that especially included the right to economic self-improvement—"to eat the bread, without the leave of anybody else, which his own hand earns." What slavery symbolized to Lincoln was stasis—a society in which people were assigned a status, and in which government existed to preserve that status and prevent any disruption of it, using either the carrot of subsidy (for poorer whites who did not own slaves) or the stick of force (to suppress slave restlessness and restrain the possibility of black-white alliances). Slavery was the badge of a society that looked with suspicion upon self-transformation, as well as the labor that made it possible. Slavery "betokened not only the possession of wealth but indicated the gentleman of leisure who was above and scorned labour." Slaveholding was "highly seductive to the thoughtless and giddy headed young men" of America because it taught them that work, enterprise, and money-making were "vulgar and ungentlemanly." It represented a receding from the high promise of the American republic into "a British aristocratic" pattern. And when the Southern states attempted to secede from the Union in order to insulate slavery from federal tampering, he made very clear what he thought the stakes in the ensuing Civil War were about, at the highest level: On the side of the Union, it is a struggle for maintaining in the world, that form, and substance of government, whose leading object is, to elevate the condition of men — to lift artificial weights from all shoulders — to clear the paths of laudable pursuit for all — to afford all, an unfettered start, and a fair chance, in the race of life. Lincoln, unlike Wendell Phillips, saw slavery as an economic problem more than a racial one. But on either count, he found it difficult to prescribe a means for ending it. Slavery was legal in fifteen states in 1860, when Lincoln was nominated for the presidency, and in each case, its legalization was a matter of state statute, rather than federal law. "According to our political system, as a matter of civil administration, the general government had no lawful power to effect emancipation in any State," he acknowledged. Back in 1856, when he was prominent only in Illinois political circles, he had admitted that "If all earthly power were given me, I should not know what to do, as to the existing institution." He clung to the hope that "systems of gradual emancipation might be adopted," especially if slavery was prevented from expanding into the western territories. But legalizing slavery in the territories was precisely what had been sanctioned by the Kansas-Nebraska Act in 1854 (the event that galvanized Lincoln politically) and been protected by the Supreme Court's decision in Dred Scott v. Sandford in 1857. Even after his election as president, Lincoln understood all too clearly that he had no civil authority to interfere with slavery; and if the South was successful in its fight to tear away from the Union, slavery would be even further beyond the reach of the United States government to deal with.
Lincoln’s impulse, in 1861, was to implement his federally funded buy-out plan in Delaware (one of the four slave states that remained loyal to the Union), as a way of showing how state legislatures could back their way painlessly out of slavery. But the Delaware legislature failed to act, and in the spring of 1862, the other loyal slave states—Kentucky, Maryland, and Missouri—rejected Lincoln’s proposal with contempt. By the summer of 1862, Lincoln’s mind revolved to a different, but much more constitutionally controversial strategy—a “war powers” proclamation of emancipation, issued on the strength of his constitutional designation as Commander in Chief, and based on the premise that freeing the South’s slaves would constitute a legitimate blow to the Southern ability to carry on the war. No such proclamation had ever been issued by an American president—no such “war powers” had even been defined judicially—but by that time Lincoln “had about come to the conclusion that we must free the slaves or be ourselves subdued.” On July 22, 1862, Lincoln laid before his Cabinet a preliminary draft of an Emancipation Proclamation, declaring that “all persons held as slaves within any state, or designated part of a state, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free.” His secretary of state, William H. Seward, urged him to sit on the proclamation until a Union military victory could bolster its credibility. But when such a victory came at Antietam on September 17, 1862, Lincoln waited only until he had confirmation of the event before re-assembling his Cabinet and issuing the proclamation as military law, to become effective on January 1, 1863. On the other hand, invoking the “war powers” as a justification limited Lincoln to freeing slaves only “within any state, or designated part of a state, the people whereof shall then be in rebellion against the United States,” and so he was forced to exempt Kentucky, Maryland, Delaware, and Missouri from its application (they were not, after all, at war with the United States) plus a number of zones within the South occupied by federal forces that were already under the civil jurisdiction of “reconstruction” governments. As he explained to his impatient abolitionist secretary of the Treasury, Salmon Chase, he could not extend the Proclamation further than “any state, or designated part of a state” actually in rebellion without undermining the legal rationale of using his “war powers.” And that would leave the whole emancipation project liable to interference from the same Supreme Court that had given the nation the Dred Scott decision. “The exemptions were made because the military necessity did not apply to the exempted localities,” Lincoln explained. If, as Commander in Chief, he tried to emancipate slaves outside the war zones, he would have no more justification for doing so than saying, “I think the measure politically expedient, and morally right.” This would surrender “all footing upon constitution or law” and plunge him into “the boundless field of absolutism.” Abolitionists might not worry about the consequences of absolutism, but he did. At the same time, though, as many slaves as he could free, would be free forever. He assured one inquirer in July 1863, that “I think [the Proclamation] is valid in law, and will be so held by the courts,” but even if not, “I think I shall not retract or repudiate it. 
Those who shall have tasted actual freedom, I believe, can never be slaves, or quasi-slaves again." And he frankly warned Congress in his annual message at the end of 1864 that any move which required him to step back from the Proclamation would result in his resignation. "If the people should, by whatever mode or means, make it an Executive duty to re-enslave such persons, another, and not I, must be their instrument to perform it." Finally, in January 1865, he was able to obtain from Congress what he described as the "king's cure for all the evils" of slavery—an amendment to the Constitution, not merely emancipating slaves, but abolishing the entire legal institution of slavery throughout the nation. Perhaps it was only an afterthought on Lincoln's part to have included in the Emancipation Proclamation a recommendation to the newly freed slaves that their next step as free men and women should be into the openness of the markets, that "in all cases when allowed, they labor faithfully for reasonable wages." Even in what he termed "an act of Justice," Lincoln still saw the ultimate realization of freedom in economic terms. This raises the potent question for modern Americans about what constitutes reform itself, and whether the emergence of the United States as a great world market-power in the 150 years since Lincoln's day should be considered a reform, or the object of reform—whether the operation of market-driven forces carries within it more hope of ameliorating injustice than political ones. These divergent strategies formed the substance of the great debate between W. E. B. Du Bois and Booker T. Washington over civil rights in the early twentieth century; it formed the core of the argument over the New Deal and the Great Society; and it continues to agitate voices all along our political spectrum. At least we know where Lincoln placed himself in this debate, and where he believed the ultimate reformation of American life would always lie. William Henry Herndon interview with Sarah Bush Lincoln (September 8, 1865) and Dillard C. Donnohue interview with Jesse Weik (February 13, 1887), in Herndon's Informants: Letters, Interviews and Statements About Abraham Lincoln, eds. Douglas L. Wilson and Rodney O. Davis (Urbana: University of Illinois Press, 1998), 107, 602; Herndon, in Jesse Weik, The Real Lincoln: A Portrait (Boston: Houghton Mifflin, 1922), 110; Lincoln, "Temperance Address" (February 22, 1842), in Collected Works of Abraham Lincoln, ed. Roy F. Basler et al. (New Brunswick, NJ: Rutgers University Press, 1953), 1:271-272, 274. Lincoln, "Protest in Illinois Legislature on Slavery" (March 3, 1837) and "To Albert G. Hodges" (April 4, 1864), in Collected Works, 1:75, 7:281. Lincoln, "To Horace Greeley" (March 24, 1862), in Collected Works, 5:169. Lincoln, "Speech at Worcester, Massachusetts" (September 12, 1848), in Collected Works, 2:3-4. Phillips, "Philosophy of the Abolition Movement" (January 27, 1853), in Speeches, Lectures, and Letters (Boston: J. Redpath, 1863), 100; Tappan, "Particular Instructions," to Theodore Dwight Weld, in Letters of Theodore Dwight Weld, Angelina Grimke Weld, and Sarah Grimke, 1822-1844, eds. G. Barnes and D.L. Dumond (New York: D. Appleton-Century, 1934), 1:124-128; Elizur Wright, The Sin of Slavery and its Remedy; Containing Some Reflections on the Moral Influence of African Colonization (New York: Elizur Wright, 1833), 9, 39. H. Ford Douglass, in James M.
McPherson, The Negro's Civil War: How American Negroes Felt and Acted during the War for the Union (New York: Pantheon Books, 1965), 7; Phillips, "Lincoln's Election" (November 7, 1860), in Speeches, Lectures, and Letters, 294. Lincoln, "Speech at Kalamazoo, Michigan" (August 27, 1856) and "Fragment on Free Labor" (September 17, 1859), in Collected Works, 2:364 and 3:462. Lincoln, "Fragment on Government" (July 1, 1854), "Address before the Wisconsin State Agricultural Society, Milwaukee, Wisconsin" (September 30, 1859), and "Speech at New Haven, Connecticut" (March 5, 1860), in Collected Works, 2:220, 3:479, 481, and 4:24. Lincoln, "First Debate with Stephen A. Douglas at Ottawa, Illinois" (August 21, 1858), in Collected Works, 3:16. Joseph Gillespie to W.H. Herndon (January 31, 1866), in Herndon's Informants, 183; Lincoln, "Fragment on Free Labor" (September 17, 1859) and "Message to Congress in Special Session" (July 4, 1861), in Collected Works, 3:462 and 438. Lincoln, "Speech at Peoria, Illinois" (October 16, 1854) and "Annual Message to Congress" (December 8, 1863), in Collected Works, 2:255-256 and 7:49. Gideon Welles, diary entry for July 13, 1862, in The Diary of Gideon Welles, ed. John T. Morse (Boston: Houghton Mifflin, 1911), 1:70; Lincoln, "Preliminary Emancipation Proclamation" (September 22, 1862) and "To Salmon P. Chase" (September 3, 1863), in Collected Works, 5:434, 6:428-429. "To Stephen A. Hurlbut" (July 31, 1863), "Annual Message to Congress" (December 8, 1864), and "Response to a Serenade" (February 1, 1865), in Collected Works, 6:358 and 8:152, 254. Lincoln, "Emancipation Proclamation" (January 1, 1863), in Collected Works, 6:30. Allen C. Guelzo is the Henry R. Luce Professor of the Civil War Era and director of the Civil War Era Studies Program at Gettysburg College. He received the Lincoln Prize in 2000 for Abraham Lincoln: Redeemer President (1999) and in 2005 for Lincoln's Emancipation Proclamation: The End of Slavery in America (2004).
fwe2-CC-MAIN-2013-20-43912000
In 1900 Sperry Glacier had an area of 3.39 km2. By 1938 it had diminished to 1.58 km2, and by 1946 it was only 1.34 km2 in area. The estimated loss in volume between 1938 and 1946 amounted to a 23 meter reduction in the level of the surface of the lower half of the glacier during that period. Recession proceeded at an annual rate of 15.3 m between 1938 and 1945; 11.9 m from 1945 to 1947; 10.5 m from 1947 to 1948; and 12.9 m from 1948 to 1949 (Dyson, 1950). Recession of Sperry Glacier continued from about 1950 to 1970 and was accompanied by loss of volume in the lower part of the glacier. Sperry Glacier has been examined in reconnaissance (Johnson, 1958, 1960, 1964). Comparison of longitudinal and transverse profiles shows that since 1947 the upper part of the glacier has increased in volume during some years and remained constant during others, whereas the lower part has decreased in volume. Throughout this time span slow terminal recession has been continuous. Surface ice velocities on Sperry Glacier average about 3 m/year. Sperry Glacier retreated at a slower rate of 5 m/a from 1950 to 1979 (Carrara and McGimsey, 1981). The retreat ranged from 3-5 m/a over the 1979-1993 period (Key, Fagre and Menicke, 2002). In 1993, 0.87 square kilometers remained. This glacier still has crevasses and is not merely stagnant and melting away. A comparison of imagery from 1991 (top; orange line for terminus), 2003 (middle; green line) and 2005 (bottom; blue line) indicates the marginal changes during this 14-year interval. These images are all from Google Earth using the historic imagery function. Marginal recession averages 95 meters in this period, ranging from 20-200 meters. The glacier was 1200 meters long in 1990, so this is close to a 10% loss in length. The current rate of retreat is slightly higher than the 3-5 m/a average for the 1979-1993 period. The image from 1991 is from Aug. 25th; the glacier still has 70% of its area covered with snow from the previous winter. This proportion is called the accumulation area ratio, and in general it must be above 60% at the end of the summer for the glacier not to lose mass. In 2003 the accumulation area ratio is about 30%, and this is on Sept. 25th at the end of the melt season. In 2005 the accumulation area ratio is 30% at most. In both years, such limited snowcover would lead to a significantly negative mass balance, that is, volume loss. The thinning in the upper portion of the glacier appears limited. There is no evident change in the upper margin of the glacier. The crevassing, which is indicative of movement, has also not decreased much, suggesting limited changes in the dynamics of the upper glacier. The comparatively slow changes in the accumulation zone suggest a glacier that still has a consistent accumulation zone and is not likely to melt away rapidly, within the next 30 years, given the current climate. However, the glacier is showing no signs that it is approaching equilibrium or that it can survive the current climate indefinitely. This is in contrast to nearby Harrison Glacier, which is receding quite slowly. There are new outcrops appearing at points A and B in the 2005 image, indicating that thinning and retreat are continuing. Annual layers are evident at point C in the 2005 image. Crevassing in the same area at point D is evident in each image. The USGS and the NPS have made Sperry Glacier a focus of field study beginning in 2005. The long-term record of glacier area and glacier retreat makes it a good candidate. To date no mass balance data has been completed or reported.
This data is essential to understanding future terminus and volume responses. This project has been particularly good at acquiring historic images to compare with current ones, such as a 1913 photograph paired with a 2008 repeat. Bob Sihler captured the lack of snow remaining on Sperry Glacier in 2009, with a month still left in the melt season.
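The rates quoted above follow from simple arithmetic; the short sketch below (Python, written for this summary rather than taken from any of the cited studies) reproduces the 1991-2005 figures:

    # Recompute the retreat figures quoted above (illustrative only).
    recession_m = 95.0       # average marginal recession, 1991-2005 (meters)
    years = 2005 - 1991      # the 14-year comparison interval
    length_1990_m = 1200.0   # glacier length in 1990 (meters)

    rate_m_per_yr = recession_m / years             # about 6.8 m/a
    frac_length_loss = recession_m / length_1990_m  # about 0.08

    print(f"Mean retreat rate: {rate_m_per_yr:.1f} m/a")
    print(f"Length loss: {frac_length_loss:.0%} of the 1990 length")

The roughly 6.8 m/a result is what makes the current rate "slightly higher" than the 3-5 m/a average for 1979-1993, and the roughly 8% length loss is the basis of the "close to a 10% loss in length" statement.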
fwe2-CC-MAIN-2013-20-43914000
World War Two Fashion: The Impact of War on 1940s Fashion in the USA, by Tia Craig. Click World War Two Influence on 1940s Fashion to read the full article from the beginning or to download the free ebook. 4. 1940s Silhouette and Style Changes The spring of 1942 was when the War Production Board and the Civilian Production Administration "issued a series of rules for the garment industry that were identified by a number preceded by the letter L, for Limitation Order." Women were able to adjust by using smaller amounts of fabric and different dyes. The reduction in fabric changed the overall silhouette so that clothing became more practical, as women needed more versatility from their wardrobes. There were very strict guidelines on fashion, such as "a woman's skirt could be no wider than 198cm around. Sleeves could measure no more than 36cm around. Belts had to be less than 5cm wide. Ruffles, pleats, and extra pockets were banned. For women, trim, knee-length skirts replaced long gowns." (Lindop). Furthermore, "A reduction in the number of fashion colors, especially for wool, was required to conserve chemicals needed for wartime use." (Walford). As a result, dyes were so scarce that blacks, browns, and whites replaced the brightly colored attire. However, "The Textile Color Association of the United States released a palette for fall of 1942 that included a number of shades with patriotic names such as 'Victory Gold', 'Gallant Blue', and 'Patriot Green'." (Walford). These restrictions caused new styles to emerge, such as the military look, which "included short jackets, narrow skirts, wide shoulders, pantsuits, low-heeled shoes, berets, and peaked caps." (Lindop). This style was primarily worn by women who served in the war as nurses and in other military services. Another style that appeared was called utility clothing, which was the more standard look for women. It included "squared shoulders, narrow hips, and skirts that ended just below the knee. Tailored suits were the dominant form of utility fashion." (The University of Vermont). For the women working in the factories there was a dress code, but that will be explained in further detail later on. An additional note on the style changes: pants became more popular during this time. Many women still wore skirts or dresses, but pants had become an article of clothing with a sense of practicality. copyright Tia Craig. Next Chapter – 5. 1940s Clothing Rationing and the Black Market
fwe2-CC-MAIN-2013-20-43915000
Hepatitis is a contagious disease that is preventable. Basic preventive principles include avoiding contact with other people's blood or bodily fluids and practicing good sanitation. In addition, vaccines are available to prevent some types of hepatitis. They are given to people at high risk of contracting the disease. Avoid Contact With Blood and Bodily Fluids Infected blood and bodily fluids can spread hepatitis. To avoid contact: - Do not inject illicit drugs, especially with shared needles. Seek help to stop using drugs. - Do not have sex with partners who have hepatitis or other sexually transmitted diseases. - Practice safe sex using latex condoms or abstain from sex. - Limit your number of sexual partners. A mutually monogamous relationship is best. - Avoid sharing personal hygiene products (eg, toothbrushes, razors). - Avoid handling items that may be contaminated with hepatitis-infected blood. - Donate your own blood before elective surgery so it can be used if you need a blood transfusion. - Avoid getting a tattoo or a body piercing. If you do get one, make sure the artist or piercer properly sterilizes the equipment. You might get infected if the tools have someone else's blood on them. - If you are a healthcare professional, always follow routine barrier precautions, handle needles and other sharp instruments safely, and dispose of them properly. - Wear gloves when touching or cleaning up bodily fluids on personal items, such as tampons, sanitary pads, and diapers. - Cover open cuts or wounds. - Use only sterile needles for drug injections, blood draws, ear piercing, and tattooing. - If you are pregnant, have a blood test for hepatitis B. Infants born to mothers with hepatitis B should be treated within 12 hours after birth. - If travelling to countries where the risk of hepatitis is higher, follow proper precautions, such as: only drinking bottled water, not using ice cubes, and avoiding certain foods, like shellfish, unpasteurized milk products, and fresh fruits and vegetables. Practice Good Sanitation Good sanitation can prevent the transmission of some forms of hepatitis. - Wash your hands with soap and water after using the bathroom or changing a diaper. - Wash your hands with soap and water before eating or preparing food. - Carefully clean all household utensils after use. Get a Vaccine, If Recommended Vaccines are available for some types of hepatitis and are given to people at high risk of contracting the disease. Get Immune Globulin (IG) Injection, If Recommended IG, available for hepatitis A and B, is an injection that contains antibodies, which help provide protection. This shot is usually given: - Before exposure to the virus, or - As soon as possible after exposure to the virus - Reviewer: Daus Mahnke, MD - Review Date: 03/2013 - - Update Date: 00/31/2013 -
fwe2-CC-MAIN-2013-20-43922000
Some arrhythmias may occur without any symptoms. Others may cause noticeable symptoms, such as: - Dizziness or a sensation of lightheadedness - Shortness of breath - Chest pain - Sensation of your heart fluttering (palpitations) - Sensation of a missed or extra heartbeat Fainting, dizziness, lightheadedness, weakness, fatigue, and shortness of breath all mean that your brain or your muscles are not getting enough blood because your heart isn't pumping effectively. Chest pain means that the heart itself is not getting enough blood. This is called angina. Some people report an unusual feeling of their "heart beating," especially if it is beating abnormally. With none of the other symptoms, this may be harmless, or it may be a warning of a potential problem. - Reviewer: Michael J. Fucci, DO - Review Date: 09/2012 - - Update Date: 00/91/2012 -
fwe2-CC-MAIN-2013-20-43925000
Last week, in one of the most densely populated places on Earth, 150 people addressed a topic usually left to pasture: the future of agriculture. At one point, Melina Shannon-DiPietro, Director of the Yale Sustainable Food Project, asked the audience, "How many of you have worked in a garden in the past month?" Over three-quarters of us raised our hands. Shannon-DiPietro also said that Yale University currently offers 30 classes related to food and agriculture. As recently as 2003, that number was zero. What is happening here, and why now? The group in the NYC room was attending Agriculture 2.0: The Conference for Innovators & Investors, hosted by NewSeed Advisors and SPIN-Farming. There is much to explore, and for now, a few facts have percolated to the top: - Many estimate the world's population will grow to 9 billion by 2050. As a result, the Asset Management arm of Deutsche Bank foresees a 50% increase in global caloric demand. We aren't ready. - Every year, as the Mississippi River flows into the Gulf of Mexico, the agricultural run-off creates a vast "dead zone" in the water. The dead zone can get as large as the state of Mississippi, and nothing survives in it. - Tod Murphy of the Farmers Diner said that because Americans consume food carted from thousands of miles away, our meals account for about 19% of the country's fossil fuel usage. Robert Fireman of Sky Vegetables framed the issue another way: every year, the average American effectively consumes 350 gallons of oil with his meals. We have never had issues like these before in human history. But innovators live in the solution. We have also never been as connected and accessible to one another as we are now. Thus, in the spirit of the pioneers at Agriculture 2.0, we have never had opportunities like this before. More to come.
fwe2-CC-MAIN-2013-20-43930000
List of Chinese Inventions China has always prided itself on its ancient and groundbreaking discoveries. In fact, some of the world's most important inventions were made by the Chinese, and their inventions have shaped our history. A cursory look at the list of Chinese inventions will give you a sense of how critical these discoveries were when it came to building civilizations. Joseph Needham, a British scholar, recognized the importance of these discoveries. He studied these inventions extensively, even listing four of them as the greatest inventions of ancient China: the compass, gunpowder, paper, and printing. The list of Chinese inventions includes familiar tools and some lesser-known materials. A lot of these are still used today; many more are considered direct ancestors of modern-day tools and methods. Let's take a look at some of these inventions, shall we? First up on our list of Chinese inventions is the compass. Without a doubt, this tool greatly expedited and eased how our ancestors navigated the globe. To this day, the compass is still considered an important navigation tool, and little has changed in how compasses are manufactured. Next up on our list is gunpowder. It was first discovered in China around 1000 A.D., about 300 years before the first recorded use of gunpowder in Europe. Unlike their European brethren, the Chinese never really pursued the use of explosives as a weapon. This was a tragic irony for the Chinese, since with the aid of gunpowder the Europeans went on to win their wars against the Chinese. Another important material included in the list of Chinese inventions is paper. It was first invented somewhere around 105 A.D., and we can all thank the Chinese for this wonderful and infinitely important discovery. Printing was also first invented by the Chinese, including both movable type and block printing. Europeans seem to have learned about block printing from the Chinese playing cards which were introduced to Europe. Tea lovers should be thankful to the Chinese, since tea drinking was first practiced in China. Two other forms of beverage – this time alcoholic ones – also originated in China: brandy and whiskey. Distillation was discovered in China during the seventh century A.D., well before its twelfth-century discovery in the West. These are but a few of the items included in the list of Chinese inventions. Some of them have been of significant import to us; others, well, we could do without. One thing is for certain: those ancient Chinese inventors made a lot of impact on our history.
fwe2-CC-MAIN-2013-20-43933000
The Wandering is Over Haggadah - The Four Children The Four Children As we tell the story, we think about it from all angles. Our tradition speaks of four different types of children who might react differently to the Passover seder. It is our job to make our story accessible to all the members of our community, so we think about how we might best reach each type of child: What does the wise child say? The wise child asks, What are the testimonies and laws which God commanded you? You must teach this child the rules of observing the holiday of Passover. What does the wicked child say? The wicked child asks, What does this service mean to you? To you and not to himself! Because he takes himself out of the community and misses the point, set this child’s teeth on edge and say to him: “It is because of what God did for me in taking me out of Egypt.” Me, not him. Had that child been there, he would have been left behind. What does the simple child say? The simple child asks, What is this? To this child, answer plainly: “With a strong hand God took us out of Egypt, where we were slaves.” What about the child who doesn’t know how to ask a question? Help this child ask. Start telling the story: “It is because of what God did for me in taking me out of Egypt.” Do you see yourself in any of these children? At times we all approach different situations like each of these children. How do we relate to each of them?
fwe2-CC-MAIN-2013-20-43935000
Create organic farm, aquaponics, vermiculture, larvae entrapment projects - Provide aquaponic systems for families of four. Collect unwanted 55 gallon drums and wooden pallets that would normally end up in landfills and have Kahuku High and Intermediate School students transform them into a perpetual food source that will provide local families and communities with fresh, organic vegetables, fruit, herbs and fish in their own backyards, powered by a small photovoltaic system. - Provide scholarship money for students building the systems instead of paying an hourly wage (using the student payment model of Ma'o Farms). Partner with experts on aquaponics, sustainability systems, recycling, etc., and use them as mentors for students so the students can enter college and obtain a meaningful career that will support themselves and their families. - If requested, provide aquaponic systems for church properties, social service providers (Salvation Army), schools, etc. Our latest initiative, Innovative Education, made up of filmmakers, scientists, Kupuna and agricultural teachers (Dr. Don Sand, Dr. Clyde Tamaru, Dr. Kai Fox, Dr. Kendra Martin, Christian Wilson and Ben Shaffer), is an additional branch of KEAC that is writing curriculum and delivering educational programs that engage and inspire, creating learning experiences around student-relevant subjects such as digital media, current youth issues, sustainability, aquaponics, organic farming, participatory learning and life skills. The "Film Club" is our after-school program and currently has 27 active members. Our new Kahuku Sustainability Club, named Halau Haloa, currently has 20 members. COMMUNITY NEED BEING ADDRESSED The students living in the Ko'olauloa District (Ka'a'awa, Punalu'u, Hau'ula, Laie, Kahuku and Sunset Beach) have unmet needs for educational experiences that are relevant to several rapidly growing modern industries such as film, digital media and sustainability. These are fields that will provide jobs and advanced education for those students who are allowed to learn these skill sets early. The traditional school system is slower in responding with such relevant programs, which we are helping to bring now, today. The Innovative Education branch is dedicated to helping students receive programs in these 21st century industries. The programs are developed and delivered in creative, participatory methods that are based on living projects and self-directed learning. It is believed that the students not only learn advanced knowledge in these relevant industries but also develop the actual skill sets needed to become valued employees. The students learn business-world and college-world "people skills," project management skills, leadership, teamwork, time management and problem solving. Each target student group can be inspired to accelerate its educational productivity. The "at risk" students find that there are reasons for learning traditional subjects in school, and the advanced students are allowed to enhance their gifts and dreams. WHAT WE DO Funds and support would help increase the capacity of our after-school programs, including the Kahuku Sustainability and Film clubs, and help build living sustainability projects. We will continue to develop new advanced curriculum and film the student projects as a "students teaching students" educational video series. A portion of the funds would be used to hold contests around renewable energy issues that would inspire students to learn using a camera as a fun learning tool.
OUR INTENDED PARTICIPANTS The goal is to bring these educational experiences to at least 100 students per year in the sustainability programs and 100 students in the film-digital media programs. The students will be selected from those attending KHIS, while 100 more students from Kahuku Elementary School will be offered program experiences. OUTCOMES WE EXPECT The outcomes would be measured in terms of the number of students in the programs, learning projects, and learning contests. We will continue to build a platform in Kahuku that will become a major pipeline for innovative learning programs, mentors, internships, and courses in 21st century career skills and knowledge. Benefits to students will include a decrease in dropouts, improvement in grades, and more gifted students entering the competitive smart-technology fields that add to the quality of life in the islands. HOW WE WILL MEASURE THE EXPECTED OUTCOMES The success of our program can be measured by the number of students taking the sustainability and renewable energy classes and the digital media courses. We will also measure the number of students that have received certifications and have participated in our learning experience projects, contests, and field trips. Motivation of the students in our programs can be measured by attendance, improvement in grades, and placement in colleges. The accumulated certifications, interactions with mentors, new letters of recommendation, and sustainability and digital media contests won will go far to improve the chances of students being accepted into colleges and receiving scholarships. HOW FUNDS ARE SPENT The aquaponic projects will require the purchase of all materials to build the working model, including pumps, foundation, scaffolding, sun screens, tubs, soil, starter fish and plant seed. A portion of the funds would be used to develop more curriculum in both digital media and agriculture. Student incentives would include prizes for the contests and token fees for mini-internships. Because of Hawaii's ideal location and temperature, its residents can become masters of sustainability and stop being dependent on the grid, where they pay the most expensive electricity, gas and food prices in the nation. LOCAL FOOD PRODUCTION Use the simplicity and affordability of aquaponics and vermiculture to encourage the growing of organic food in the backyards or on the apartment lanais of Hawaii residents. SOCIAL TRANSFORMATION IN THE ABOVE AREAS We need to reduce our dependency on fossil fuels by growing our own food where we live, thus reducing the need to: - ship from the mainland, from the mainland to warehouses, and from warehouses to grocery stores - drive to grocery stores and gas stations We need to grow our own produce so that: - we know the source of the food - we can be assured the food is not genetically modified or contaminated with unsafe fertilizers, herbicides and pesticides (organically grown and harvested) - we are prepared for a disaster when gas, food and money are not available - we set an example for our children and grandchildren that it is possible to live off the food we grow ourselves Demonstrate innovation and local leadership Our organization will consist of a partnership of forward-thinking individuals who see the value of helping the youth of our communities take up aquaponics and sustainability as a way of life. They will also recognize the value of the educational process they will be engaged in along the way.
Have the potential for growth and success due to our involvement It will be relatively easy to scale for growth, since aquaponics uses very little space and few resources. Every family in Hawaii should have access to inexpensive, fresh organic produce. Every student in Hawaii deserves the chance to learn about aquaponics and fulfill the DOE rubrics at the same time. Stem from ideas and inspiration that are born in Hawai'i to meet the needs of Hawai'i Ancient Hawaiians were masters of sustainability. We need to re-create how Hawaiians made the most remote islands sustain over a million people without modern technology. - Employ scalable technologies and models that are applicable globally
fwe2-CC-MAIN-2013-20-43936000
Cloud computing is either a revolutionary IT management tool or a nebulous puff of marketing hype, depending on whom you ask. For now, we're thinking it's puffery—but intriguing developments are under way. A Cloudy Concept Rather than house your own IT servers or rent the maximum processing and storage capacity you'll ever need, why not pay only for what you use, when you use it? That's the basic idea behind cloud computing—and it's an alluring possibility for many reasons, not least the desire to contain costs and reduce energy consumption. But it turns out that much of the appeal is based on a murky understanding of the concept. According to research by Gartner group vice president Mark McDonald, the percentage of CIOs interested in cloud computing has grown considerably, from 5% in 2009 to 37% earlier this year. And the bigger the company, the more likely management is to say that cloud computing is a top-five IT priority. (Chart: Interest in cloud computing.) But three out of four respondents who profess interest in cloud computing report little to no interest in three of the key technologies it entails: server virtualization, service-oriented architecture, and software as a service. Further, nearly half the respondents equate cloud computing with virtualization alone, which shows that many executives have an incomplete view of it. Cloud computing has rapidly risen to what McDonald calls "the peak of inflated expectations." And where is it headed next? The "trough of disillusionment," he says. That's because few people seem able to agree on what cloud computing is, never mind how on earth it should work. The National Institute of Standards and Technology (NIST) IT laboratory's definition, version 15, is more than 760 words long and includes five characteristics, three service models, four deployment models, and a disclaimer saying, in essence, that the definition will change again soon. Is the Cloud Greener? Despite all the confusion about cloud computing, the IT laboratory at NIST lays out some figures that make a compelling environmental case for it. According to one NIST presentation, the number of servers in traditional data centers in the U.S. doubled from 2001 to 2006. Power consumption per server quadrupled in the same time period, even though servers typically operate at only 15% of capacity.
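The pay-for-what-you-use pitch is easy to make concrete with a toy break-even calculation. The sketch below (Python) is illustrative only; the prices are hypothetical placeholders, not figures from Gartner or NIST, though the 15% utilization number echoes the NIST presentation cited above:

    # Toy break-even between owning a server and renting on demand.
    # All prices below are hypothetical, for illustration only.
    owned_cost_per_month = 400.0  # amortized hardware, power, space ($)
    cloud_cost_per_hour = 1.00    # on-demand rate for a comparable instance ($)
    hours_per_month = 730

    # Utilization below which renting on demand beats owning outright:
    breakeven = owned_cost_per_month / (cloud_cost_per_hour * hours_per_month)
    print(f"Renting is cheaper below {breakeven:.0%} utilization")

    # At the 15% typical utilization NIST cites, on-demand capacity would
    # cost about 0.15 * 730 * $1.00 = $110/month against $400 to own.

With these made-up numbers, renting wins below roughly 55% utilization, which is why chronically underused servers are the cloud's best sales argument.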
Despite our best efforts to replicate hormones, slow the ageing process and map the human brain, our bodies are smarter than we'll likely ever understand. But not everything is so complex. When we're feeling tired, stressed or rundown, our bodies let us know with some very basic signals. Ignore them at your peril, writes Rosalind Scutt. In a perfect world we'd live long, healthy lives, free from illness and disease. Although developments in medical science are bringing us closer to that point, there is still much we do not understand about our own physiology: how it works, and why it breaks down. While we're waiting for research to unlock the secrets to infinitely good health, it's reassuring to know that very often, before our bodies do malfunction, we're likely to see some obvious warning signs associated with a weakening immune system. Some common physical symptoms include sweating, headaches, cold sores, thrush and other skin inflammation such as eczema, while emotional symptoms include feelings of irritability, anxiety, aggression or fatigue. These indicators should serve as a serious warning that we require additional rest and care, but many of us are inclined to take a cold and flu tablet and soldier on. Valiant though this may seem, a growing body of research suggests that this approach may jeopardise our health in the long term, with potentially fatal consequences. Earlier this year a study titled Chronic stress, glucocorticoid receptor resistance, inflammation, and disease risk found that stress affects the body's ability to protect against illness by directly impacting the immune system. In particular, the study, which was published in the Proceedings of the National Academy of Sciences, found that cortisol (a hormone produced during times of stress) temporarily suppresses the immune system and reduces the body's natural inflammatory response to viruses and bacteria. "The immune system's ability to regulate inflammation predicts who will develop a cold, but more importantly it provides an explanation of how stress can promote disease," said lead researcher Sheldon Cohen of Carnegie Mellon University in Pittsburgh. "When under stress, cells of the immune system are unable to respond to hormonal control, and consequently, produce levels of inflammation that promote disease. Because inflammation plays a role in many diseases such as cardiovascular, asthma and autoimmune disorders, this model suggests why stress impacts them as well." Dr Mark Smyth of the Peter MacCallum Cancer Centre in Melbourne, Australia's only public hospital solely dedicated to cancer treatment, research and education, agrees. "Proper immune function is now appreciated as another important factor in preventing development of some cancers," he said. Understanding more about how stress can impact our immune system may help us learn to listen to our bodies and recognise when they are telling us to slow down. And while maintaining a healthy immune system can help to prevent an individual from contracting disease, it is also hoped that immunotherapy can one day be used to treat and manage existing disease. "We may one day be able to use immunotherapy to artificially induce equilibrium and convert cancer into a chronic, but controllable disease," Smyth said. So, next time you feel the itch of a recurring rash, the twitch of a cold sore, or general malaise associated with ongoing fatigue, stop and listen to your body.
A course of antibiotics may solve your problem in the short term, but your body is really telling you it needs some urgent nurturing attention, and if Cohen's findings are correct, that attention could be all that stands between you and a life of blissful longevity.
(SOURCE: Washington University School of Medicine in St. Louis, news release, Aug. 30, 2012) TUESDAY, Sept. 4 (HealthDay News) -- At least seven antibiotic-resistance genes have recently passed between soil bacteria and bacteria that cause human disease, according to a new study. Further research is needed to determine how widespread this sharing is, and to what extent it could make disease-causing bacteria harder to control, said the researchers at Washington University School of Medicine in St. Louis. "It is commonplace for antibiotics to make their way into the environment. Our results suggest that this may enhance drug resistance in soil bacteria in ways that could one day be shared with bacteria that cause human disease," first author and graduate student Kevin Forsberg said in a university news release. For this study, the researchers analyzed the DNA of bacteria in soil samples collected at various locations in the United States. The findings were published recently in the journal Science. The researchers said it's important to find the answers to many questions, such as: Did the genes pass from soil bacteria to human pathogens or vice versa? Are the genes just the tip of a vast reservoir of shared resistance? Did some combination of luck and a new technique for studying genes across entire bacterial communities lead to the discovery of the shared resistance genes? While humans only mix their genes when they have children, bacteria regularly exchange genes throughout their lifecycles. That means that when a strain of bacteria develops resistance to antibiotics, it can share this ability not only with its descendants but also with other bacteria, the researchers explained. "I suspect the soil is not a teeming reservoir of resistance genes. But if factory farms or medical clinics continue to release antibiotics into the environment, it may enrich that reservoir, potentially making resistance genes more accessible to infectious bacteria," study senior author Gautam Dantas, an assistant professor of pathology and immunology, said in the news release. The U.S. Food and Drug Administration has more about antibiotic resistance.
American Clean Energy And Security Act Of 2009

H.R. 2454 would make a number of changes in energy and environmental policies largely aimed at reducing emissions of gases that contribute to global warming. The bill would limit or cap the quantity of certain greenhouse gases (GHGs) emitted from facilities that generate electricity and from other industrial activities over the 2012-2050 period. The Environmental Protection Agency (EPA) would establish two separate regulatory initiatives known as cap-and-trade programs—one covering emissions of most types of GHGs and one covering hydrofluorocarbons (HFCs). EPA would issue allowances to emit those gases under the cap-and-trade programs. Some of those allowances would be auctioned by the federal government, and the remainder would be distributed at no charge.
Introduction to Web Conferencing

What is Web Conferencing?

While Skype provides audio and video conferencing and a chat tool, web conferencing software provides a broader set of synchronous communication tools. Some of these additional features include:
- Multi-point audio and video - You can have several people in different locations using cameras and microphones.
- Desktop sharing - You can give live demonstrations of software, go over an assignment in real-time with a student, or even control a participant's application remotely.
- Whiteboard - Participants in multiple locations can work together on the same whiteboard.
- Import presentation - You can import a PowerPoint file right into the tool, which is useful for giving presentations to remote audiences.
- Classroom management - You can control the privileges of participants in a session.
- Ability to record sessions - You can record part or all of a session for later reference.

There are several web conferencing tools available on the market, including Elluminate, Centra, Interwise, and WebEx. At NMU we have licenses for a product called Adobe Acrobat Connect Professional.

When is Web Conferencing Useful?

Educational uses of web conferencing range from one-on-one meetings with students to full class sessions. Some specific applications where it can be an appropriate tool include:
- Blended online courses where some aspects are conducted asynchronously on EduCat (e.g., assignments, assessments) and others, such as student presentations, are conducted live via web conferencing.
- Bringing in a guest speaker.
- Virtual office hours with students in an asynchronous online class.
- Live distance learning courses that utilize a high level of interaction among students.
- Live distance learning courses where remote students don't have access to ITV tools.

The "7 things you should know about Virtual Meetings" handout from ELI outlines some specific applications and scenarios for web conferencing use.

Who can Use Acrobat Connect? (Licensing)

Most commercial software used at NMU has either a site license, meaning anyone on campus can use it (examples include Microsoft Office and WebCT), or individual licenses, meaning that NMU must have a license assigned to anyone who uses it (examples include Camtasia and Adobe Photoshop). Acrobat Connect is a little different. NMU has a limited number of "named host" licenses for Acrobat Connect, which are only assigned to faculty. Named hosts have the ability to create virtual meetings and invite other participants to them. Participants (such as students) in meetings do not get a license. There can be up to 100 people in any meeting session. The named host can create as many meetings as he or she wants, but only one can be in session at a given time. Adobe Connect is web-based and cross-platform; it runs through a web browser and the Flash plug-in, which are both standard on NMU laptops. A free add-in is needed to perform some functions, but to install it you just need to click "yes" when prompted. It will then install in just a few seconds - no trip to the Help Desk needed.

Signing up for a Free Trial of Acrobat Connect

Because each named host account costs NMU a licensing fee, we only assign them to faculty members who have definite plans to use web conferencing. Adobe offers a free, 30-day trial of Acrobat Connect Pro that allows faculty to "test drive" the software before requesting a named host account. For the purposes of the workshop, each of you will sign up for the trial.
If, after becoming familiar with Acrobat Connect, you decide that you want to use it with your classes in the fall, contact the CITE to request a named host account.

Here is the URL for signing up for the free trial:

Follow the steps on-screen. Your account will be active within a few minutes. A representative from Adobe may contact you within a few days of your registering to see if you have questions or if you want to buy a license. Just explain to them that your university has a limited number of licenses and that you are evaluating the software before deciding whether to request one. Adobe has a nice set of tutorials, documentation (some of which is provided in your handouts), and tips in their Acrobat Connect Pro Resource Center, at http://www.adobe.com/resources/acrobatconnect/

Please be aware that in addition to Acrobat Connect Pro information, there are some references to companion products (e.g., Adobe Presenter, audio teleconferencing) to which NMU does not subscribe. You can also reach the Resource Center through the Help menu of any Acrobat Connect meeting.
How are HBSLs Used?

Concentrations of contaminants in water are compared to human-health benchmarks in screening-level assessments to provide an initial perspective on the potential relevance of detected contaminants to human health and to help prioritize further investigations. Two human-health benchmarks are used in USGS screening-level assessments: U.S. Environmental Protection Agency's (USEPA) Maximum Contaminant Levels (MCLs) and USGS's Health-Based Screening Levels (HBSLs). Concentrations of regulated contaminants (those with MCLs) are compared to their MCLs and concentrations of unregulated contaminants (those without MCLs) are compared to their HBSLs, when available. See "Guidance on Use of Benchmarks in Screening-Level Assessments" and SIR 2007-5106 for more information. Comparisons of measured contaminant concentrations in water to MCLs and HBSLs are useful for local, State, and Federal water-resource managers and others charged with protecting and managing drinking-water resources. For example, these comparisons can indicate when measured concentrations may be of potential human-health concern and can provide an early indication of when contaminant concentrations in ambient water resources may warrant further study or monitoring.
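The two-tier rule just described (compare a regulated contaminant to its MCL, and an unregulated one to its HBSL when one exists) is simple enough to sketch in code. The following Python sketch is purely illustrative and is not USGS software: the contaminant names, the benchmark values, and the one-tenth screening threshold are assumptions made for the example.

```python
# Minimal sketch of a screening-level benchmark comparison.
# All values below are illustrative placeholders, not authoritative MCLs/HBSLs.

MCLS_UG_PER_L = {"atrazine": 3.0}        # regulated contaminants -> MCL (ug/L)
HBSLS_UG_PER_L = {"metolachlor": 700.0}  # unregulated contaminants -> HBSL (ug/L)

def screen(contaminant: str, concentration_ug_per_l: float) -> str:
    """Compare a measured concentration to its human-health benchmark.

    Regulated contaminants (those with an MCL) are compared to the MCL;
    unregulated contaminants are compared to their HBSL, when available.
    """
    if contaminant in MCLS_UG_PER_L:
        benchmark, kind = MCLS_UG_PER_L[contaminant], "MCL"
    elif contaminant in HBSLS_UG_PER_L:
        benchmark, kind = HBSLS_UG_PER_L[contaminant], "HBSL"
    else:
        return f"{contaminant}: no benchmark available"
    ratio = concentration_ug_per_l / benchmark
    # A concentration above one-tenth of the benchmark is flagged here as
    # possibly warranting further study; the threshold is an assumption.
    note = "may warrant further study" if ratio > 0.1 else "below screening level"
    return f"{contaminant}: {ratio:.2f} of its {kind} ({note})"

print(screen("atrazine", 0.9))      # regulated: compared to the MCL
print(screen("metolachlor", 12.0))  # unregulated: compared to the HBSL
print(screen("carbaryl", 1.0))      # no benchmark in this toy table
```

In a real assessment the benchmark tables would come from published USEPA and USGS sources rather than hard-coded dictionaries; the structure of the comparison, however, follows the text above.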
The Environmental Interpretation Centre is located on the shores of the Faja da Caldeira de Santo Cristo on São Jorge Island in the Azores archipelago of Portugal. The islands are very remote, located in the middle of the Atlantic, and not particularly well known, but they are quite beautiful. The government is trying to increase tourism and awareness of its parks, and the Centre is one of its projects to aid that effort. The Centre will help provide information on the area, serve as an education center on the local environment, support architectural heritage and serve as a center to further study the local flora and fauna. Design work began on the Environmental Interpretation Centre in 2007, with construction starting in 2009. The original property features a one-story stone building, which was used as the foundation for the reconstruction. This building was faithfully restored using the original plans, with a second story added and an additional building constructed behind it. The main building serves as the education center and the building behind is a temporary apartment for visiting researchers. Thick stone walls act as thermal mass to slow the transfer of heat and keep the interior comfortable. Deep window wells provide natural light but minimize heat gain. In the future a campsite will be created and the Interpretation Centre will serve as its headquarters. Images ©FG+SG – Fotografia de Arquitectura
Although Spiller's operation of anterolateral chordotomy has often been performed since 1911 for the control of intractable pain, it may still be said to be a much neglected procedure, the great scope of which has not been recognized by the medical profession. Historically, Van Gehuchten in 1893 first expressed the definite opinion that fibers conveying pain and temperature sensation passed up the cord in Gowers' tract, although Gowers himself had suggested this in 1879. No actual proof was afforded until Spiller's fortunate observation of a patient at the Philadelphia General Hospital in August, 1904. This patient showed an almost complete loss of the sense of pain and temperature in the legs, with preservation of tactile sensibility. He was under observation for some months and died in January, 1905. The necropsy revealed a solitary tubercle involving the right tract of Gowers at the extreme lower end of the thoracic cord.
Materials: Nothing! Optional: cards of the letters you want to practice.

Students are divided into teams. The target sounds are written on the blackboard as letters. The first student from each group stands up in front of the blackboard with a fan. The teacher says a word which uses a target sound, and the students have to hit the letter which makes that sound. The first student to hit the correct letter is the winner. Have the students stand in lines to save time while taking turns.

Vowel sound variation
This variation of the game distinguishes between different vowel sounds. Instead of using letters, use picture cards to represent the particular sounds. The teacher will say a word, and the students have to hit the picture with the same vowel sound.

Another variation concentrates on the 'y' sound. Write the kanji for 'year' (年, ねん, nen) and 'ear' (耳, みみ, mimi) on the blackboard. Say the word 'year' or 'ear', and students have to hit the correct kanji.

A further variation concentrates on only two words. Write two similar words on the blackboard, such as 'right' and 'light', and the students have to hit the correct word. See Pronunciation problems for a comprehensive list.

In a similar fashion to how Karuta is played, students in a group have the cards in front of them and try to be the first to hit them. This variation enables a higher level of student involvement.

This lesson plan was taken from the sixth edition of Team Taught Pizza.
In preparation for the fifth centenary of the Reformation in 2017, the Vatican and the Lutheran World Federation are preparing a joint document on the course of events in the early sixteenth century. The Humboldt University in Berlin is also building up to the centenary with lectures and discussions. I was honored to take part in a disputatio with Professor Notger Slencka, a foremost connoisseur of Luther’s work, under the auspices of the Romano Guardini Foundation, on May 7, 2012, in which I took the side of Erasmus as we re-argued the famous discussion on free will between the Humanist and the Reformer. The epoch of the Renaissance and the Reformation sought to overcome scholastic metaphysics by a return to the sources, a purification of language, and a new encounter with the realities of human experience and biblical revelation. Overcoming metaphysics in theology means protecting the biblical language and the biblical phenomena against the insidious falsification brought about by metaphysical habits of thought. The late scholasticism that both Luther and Erasmus resisted lives on today in the USA among analytical philosophers of religion who believe that it is the sole business of theology to puzzle over metaphysical riddles such as the apparent incompatibility between divine simplicity and the multiplicity of divine attributes and actions, or between divine omnipotence and the existence of evil, or between divine foreknowledge and predestination and the reality of free will. These philosophers think that modern theology has lost its intellectual grip through its strategy of avoiding these hard problems. Thanks to the Reformation and to historical biblical scholarship, theology today is more richly based in Scripture than was the case in the Middle Ages. Before getting caught up in metaphysical conundrums, theologians test them against the biblical vision of God and the world. They discern that the God of analytical neoscholastic philosophy is very remote from the God of Abraham, Isaac, and Jacob, the living, saving God presented in church preaching. The analytical debate about God is a factory for producing refined philosophical concepts and arguments, but its value for Christian theology remains slight. In his Ratio verae theologiae (1519), Erasmus taught that Scripture contains all Christian doctrine and dogmas. He influenced his fellow humanist and Luther’s comrade in arms, Philip Melanchthon, who in his Loci communes (1521) sought to draw all the essential theological truths from an exegesis of the Epistle to the Romans. Erasmus felt he stood on solid ground, then, when he challenged Luther’s denial of free will exclusively on the basis of biblical texts, in his Diatribe de Libero Arbitrio (Discussion on the freedom of the will) in 1524. Luther’s powerful riposte, in De Servo Arbitrio (On the bondage of the will) in 1525, showed that he understood the return to the Bible in a far more radical sense than Erasmus did. A truly biblical view of God and humanity, he showed, would overthrow not only the scholastics but also the quiet humanistic reasonableness of Erasmus. Luther finds Erasmus to be radically defective as a theologian, due to a lack of existential authenticity, and an evasive and diluting attitude to the claim of the biblical word. Both Luther and Erasmus thought that they had left scholastic metaphysics behind, but both of them reached back to scholastic distinctions in the course of their discussion.
Paradoxically, Luther, the scorner of philosophy, uses starkly metaphysical arguments to bolster his biblical case. His deep sense of human weakness and divine power leads him to adopt a primitive metaphysical determinism that both reflects and reinforces a deeply problematic aspect of his thinking. Defenders of De Servo Arbitrio try to play down this metaphysics and the extremism it reflects, focusing instead on Luther’s witness to biblical realities. Luther himself saw this text, along with the Commentary on Galatians (1531/1535), as his most important work, so the stakes are high. Luther was certainly a great witness to the Gospel and offered the Church a priceless treasure in his doctrine of Justification. But there is also a dark and unwholesome side to his thought, and it is on display in this classic text. I shall formulate three criticisms of his metaphysical utterances: they override the dignity of human freedom; they imply a fatalistic, deterministic view of reality; they project a monstrous image of God.

The Dignity of Human Freedom

Erasmus is usually seen as a great loser in the history of theology, yet today his tolerance and humanism seem a blessed oasis amid the violence and fanaticism of the sixteenth century. Luther’s reaction to Erasmus’s mild and modest intervention augured ill: ‘I will kill the Satan with my feather, as I killed Münzer, whose blood lies at my throat.’ Erasmus was doomed to lose, for the age of humanistic reason was ceding to one of sectarian absolutism. His very long reply to De Servo Arbitrio, the Hyperaspistes or ‘Shield,’ has received shamefully little attention. Some claim that Luther detected in Erasmus a champion of what was to be the great heresy of the modern world, a proud emphasis on human autonomy. In reality there is nothing revolutionary about Erasmus’s recognition of ‘a power of human willing, through which man can turn to that which leads to eternal salvation or can turn away from it’ (Ausgewählte Werke, Darmstadt = AW IV, 37). This is little more than a quotation from Origen, who in the third century defended free will against a Gnostic determinism based on inborn ‘evil natures’ (De Principiis III, 1, 18). For revolutionary modernity in this period, one should look rather to Pico della Mirandola, who sees humans as created without fixed identity, called to be self-shapers and self-surpassers, an ideal that recurs in Fichte and Sartre. Schleiermacher is a follower of Luther when he objects to Fichte that there is no unconditional sense of freedom, but that freedom is always subordinate to a sense of unconditional dependence (on God). Luther might score valid points against the modern absolutization of freedom-from at the expense of freedom-for. Nonetheless, what Luther saw as heresy is the orthodoxy of the modern world. ‘Man is born free,’ wrote Rousseau in 1762 (The Social Contract), ‘and everywhere he is in chains’—chains he can break. Who remembers Archbishop Christophe de Beaumont and Cardinal Gerdil, defenders of Original Sin against Rousseau? Who would wish to return to Bossuet’s view that ‘all men are born subjects’? We have embraced the credo that ‘all men are created equal,’ with rights to ‘Liberty and the pursuit of Happiness’ (Declaration of Independence, 1776). The inviolability of human freedom is a central theme of Christian preaching today. There is no stepping back to a pre-modern mentality of subservience. How did Luther understand the words: ‘For freedom Christ has set us free’ (Gal 5:1)?
‘It is freedom from the Law, sins, death, from the power of the devil, the wrath of God, the last judgment,’ and all other freedoms are but droplets in comparison with ‘the majesty of theological freedom’ (Weimarer Ausgabe = WA 40/II, 3). This freedom is negatively defined, in relation to fear that is overcome, and in strict distinction from ‘freedom of the flesh’ and ‘political freedom.’ In the De Servo Arbitrio we do not hear much even about this negative freedom. Luther had found warmer tones in On the Freedom of a Christian (1520), but in the year of the Peasants’ Revolt, in which he had played a grisly role, he was no longer so freedom-friendly. An enslaved will, moved by grace as if it were a puppet, would be a travesty of evangelical freedom, ‘the glorious freedom of the children of God’ (Rom 8:21). Erasmus celebrates instead the freed will of the redeemed. He holds that human dignity and also the dignity of the Holy Spirit require that grace acts only through awakening and empowerment of human freedom. He follows Origen, who saw the multiplicity of human characters and the corresponding multiplicity of divine handlings of the human soul. ‘As souls are innumerable,’ wrote Origen, ‘so are the mores, decisions, movements, drives, desires of each one’ (De Principiis III, 1, 14). He uses Greek philosophical words, êthê, protheseis, kinêmata, hormai, epiboulai, giving them a pluralistic twist. This pluralism of experience is not irreconcilable with the conviction that everything depends on God’s grace, or that justification consists only in believing acceptance of the merits of Christ. Luther, too, seems to respect the dignity of human freedom when he writes: ‘through his spirit we are made slaves and captives (which however is royal freedom), so that we want and gladly do what he wants’ (WA 18, 635). He suggests that just as the sinner is not free to break with sin, so too, as long as the Spirit and grace last, the saint is not free to turn away from God; both are enthralled by their own willing dedication. Unfortunately, there follows immediately the image of the draught animal (iumentum), whom either God or Satan rides, which again undermines human dignity, and also implies a Manichean equilibrium between God and Satan. According to Erasmus, Origen teaches that ‘whether we turn to salvation or turn away lies in our hand’ (Hyperaspistes II, Opera Omnia, Louvain, X, 1501D). In Origen’s own words: ‘there is placed in us the power to give ourselves either to a praiseworthy or a culpable life’ (De Principiis III, 1, 1). This does suggest an underestimation of the bondage of the will and the power of grace. Elsewhere Origen is anxious to tie election to merits, accrued in past lives, which led St Jerome to characterize him as the father of Pelagianism. But the common Lutheran perception of Erasmus as (at least) a Semi-Pelagian is unfair. Erasmus can cite Augustine with empathy, and he distances himself from his friend St John Fisher, who claimed that man could contribute to his salvation ‘from merely natural powers’ (1480A). He holds that the fallen will was ‘so degraded that it could not recall itself by its own resources to a better course, but having lost liberty was forced to serve sin, to which it voluntarily bound itself’ (AW IV, 40). He explains: ‘I ascribed nothing to free will except that it responds to the grace that knocks, cooperates with the grace that operates, and that it can turn away from both’ (1480B).
‘When I say that free will does some good, I link it with grace; as long as it obeys grace it is happily acted on and acts; when it resists, it merits to be deserted by grace, and when deserted it does only evil acts’ (AW IV, 414-16). Here he presents not a neutral, independent will, which decides sovereignly by itself whether to obey grace or its own vices, addictions, obsessions, and bad habits. The unfreedom of the will lies deeper than these, in the fundamental option by which one’s life is directed. Erasmus does not preach a will that always remains free to choose between the proud, self-centered motivation and the orientation to God’s will and his Kingdom. He sees that the will can be freed from self-bondage only through grace, though he lacks Luther’s concrete feeling for this tragic servitude and for how little we have the power of choice in our own hands. Luther was shocked that Erasmus referred to the question of the role played by free will in the process of salvation as a matter of superfluous speculation. The genre of the diatribe gave the impression that he wanted to treat the role of free will as a quaestio disputata, in which the Pelagians also were given a respectful hearing. His defense of free will sounds as if it is merely a question of correct, approved opinion, rather than a matter of ultimate concern. Luther found this detachment intolerable. This was not because the Bible had given a clear and unambiguous answer, as Luther wanted to believe, but more because Augustine and the Church had detected and denounced the Pelagian error of giving the primary role in salvation to our free will. Luther’s years as an Augustinian monk and theologian shaped his reception of Scripture. Erasmus must also claim, like Luther, that the Bible gives a clear answer, since the Holy Spirit ‘cannot fight with itself’ (AW IV, 156), and he, too, underestimates the plurality and contradictoriness of the biblical statements. Luther shows that the sinner is totally enslaved, and he gives to the righteous only a freedom that comes from outside, the freedom of passive obedience, not that of creative cooperation with divine grace. A synergy between human freedom and divine grace in the event of Justification is what he most vigilantly excludes, and perhaps there is no real contradiction on this particular issue between him and Erasmus, Trent, and modern Catholic theology. But that grace works in and through creative human freedom is the best insight of Christian humanism, which Luther, at least in De Servo Arbitrio, holds at a distance. It is true that the text does refer occasionally to cooperation between God and human freedom, but only in a muted and concessive way, emphasizing so massively the asymmetry between the divine and human element, that the latter scarcely attains any vivid profile. Only grudgingly and in subclauses does he use expressions such as ‘whereby the creature cooperates with God who operates’ (18, 753), whereas it is with great rhetorical and existential force that he declares, ‘our freedom is nothing’ (18, 720). Had Luther made an effort to build on what he and Erasmus had in common, the future of Lutheran and ecumenical theology might have been brighter. 
Erasmus notes Luther’s concessions, but likewise fails to build on them, preferring to see them as contradictions: Luther said first that free will had only the power to sin, then that ‘it is nothing at all,’ and finally, that ‘as if reborn, free will cooperates with grace in good works and with the aid of grace can do all things’ (1480). Luther uses weak metaphysical arguments to boost his case. He asks how the will can be free if neither angels nor humans can exist for a moment by their own power (18, 662), as if it were impossible for God to create and sustain free beings. Even Adam and Eve, made in the divine image, had no free will. The Fall is not a loss of free will but a consequence of its absence. Adam and Eve were unhappy that God had given them no power of free decision in regard to their relationship with him. Even the editors of the Weimar edition note that Luther’s claim that Augustine was totally on his side (WA 18, 640) comes to grief here, since Augustine denies the necessity of Adam’s sin. Luther’s thesis of the non-existence of free will is not biblical, and needs to be shored up by metaphysical arguments, which are constructed ad hoc. Erasmus saw that when Luther spoke as a scholastic, in defense of his exaggerations, he had lost the authentic biblical perspective. Later Melanchthon, who remained on good terms with Erasmus to the end, would considerably tone down this radical denial of freedom, and he met no resistance from Luther, who had perhaps realized that the metaphysical claim was not so important or so certain as he had claimed. Aquinas distinguishes between a necessitas consequentiae and a necessitas consequentis. What God knows from eternity cannot fail to happen, but in the realm of secondary causes contingency and freedom remain real. Luther dismisses this as vain words (WA 56, 382), and indeed it does seem rather feeble. He himself tends to emphasize a stark contrariety between divine power and human freedom. But in scarcely noticeable concessions he seems close to the scholastic distinction in that he denies that God’s predetermination of our acts implies any compulsion (coactio). Yet he also uses deterministic language that seems to deny human freedom altogether, even the normal freedom we enjoy in everyday affairs, which he generally upholds. Erasmus also speaks contemptuously of the scholastic distinctions: ‘it was wrong to plunge with irreligious curiosity into those recondite, not to say superfluous matters— whether God foreknows something contingently, whether our will effects something in the things that pertain to eternal salvation, or merely undergoes the action of grace’ (AW IV, 12). However he let his colleague Louis Ber persuade him to use the scholastic distinction in defending the freedom of Judas, who contingently, freely betrays Christ, though the act is necessary in view of divine foreknowledge. Luther sees this as a concession to his own view: ‘They are compelled to concede that all things are done by necessity, with the necessitas consequentiae (as they say), but not with the necessitas consequentis. Thus they elude the violence of this question’ (WA 18, 616). He himself denies that Judas suffers any necessitas coactionis, and affirms rather a necessitas immutabilitatis, a necessitas infallibilitatis ad tempus, which does not impinge on Judas’s freedom (720-1). Here again Erasmus might have built on a rough agreement between Luther and himself, but instead he rejects Luther’s proposal as philosophically feeble (X, 1424). 
In tit for tat style he mocks Luther as a metaphysician in several places, and pounces on his inconsistencies: ‘Judas willingly betrayed the Lord, Luther admits, though he elsewhere teaches that the human will performs nothing either in good or evil’ (1424). What turned Judas from being a faithful apostle into a traitor? Luther would answer, ‘the divinely willed withdrawal of grace.’ Erasmus sees this as ‘a kind of force,’ and insists that ‘Judas could have not taken up the will to betray, or having taken it up he could have put it down again’ (1425). This sounds self-evident, but for Luther it is blasphemy, not only because it underestimates the power of sin to bind the will and the inability of the will to free itself, but because it takes the salvation or damnation of the sinner out of God’s hands. Yet in insisting that Judas is nonetheless not forced, Luther implicitly refers to the same double register that he has denounced as eluding the violence of the question. When he reached for the weapons of metaphysics to defend grace from the claims of human autonomy, Luther thought he could use them tactically, in the service of the biblical matter, without having to bow to the rigors of classical metaphysical logic. He often imaginatively gives scholastic terminology a surprising new concrete and biblical meaning, but at the price of much inaccuracy and ambiguity. In De Servo Arbitrio his high-handed way with metaphysical terms and arguments boomerangs on him, causing a distortion of his message, which takes on the monstrous appearance of a metaphysical determinism. His true aim was not to profess a metaphysical determinism but to make grace alone the cause of salvation, excluding any contribution from human agency. This sounds like a false problem, solved long ago by Augustine, who saw grace as acting through the free acts whereby sinners are enabled to respond to it. In any case, his metaphysics led Luther into a view of freedom that has little to do with sin or grace. He argues that humans are unfree, not because of sin or the sovereignty of grace, but because God’s infallible foreknowledge entails that all things happen of necessity. The drama of sin and grace is flattened out and becomes one instance of the deterministic character of God’s rule. What led Luther into this unbiblical blind alley? Can Luther’s philosophical determinism be cleanly separated from his theological concern? ‘Everything we do, everything that happens, even when it seems to us to happen mutably and contingently, in reality happens necessarily and immutably, if one considers God’s will... To happen contingently, however, means... not that the work itself happens contingently, but rather that it happens through a contingent and mutable will, such as is not found in God’ (WA 18, 615-16). Here Luther makes the same kind of distinctions as Boethius and Aquinas, leaving free play to contingency and putting the necessity of the divine will in the background. This ultimate necessity does not seem to affect the foreground realities of freedom and choice at all. Luther could have presented the phenomenology of the enslaved will just as effectively without drawing on it at all. A short work on free will by the humanist scholar Lorenzo Valla, edited in 1518, had an influence on Luther’s deterministic thinking. Valla finds the medieval harmonization of omnipotence and free will to be shallow, and quotes Romans 9:11-21 to show that the contradiction between them is unsolvable for human thought. 
‘God lays no necessity on us, nor does he rob us of freedom of will, when he hardens the one and has mercy on the other, for he does this in great wisdom and holiness. The basis for it, however, he has as it were stored away and hidden in a treasure chamber.’ This is intended as a blow against the metaphysical complacency of Boethius and others who serenely harmonized omnipotence and free will. Humility before the unsearchable divine mystery and trust in Christ is the path that opens up when our thinking is thus left in the lurch by philosophy. Yet Valla’s own account of the abysses of predestination is more a metaphysical construction than a datum of biblical revelation. Luther praised Valla’s ‘steadfastness and sincere zeal for the Christian faith’ (WA 6, 183). Melanchthon followed Valla in his Loci communes of 1521, in which he sharpened the deterministic ideas he received from Luther, but in the last edition of the Loci he declares that Valla’s rejection of freedom and contingency comes from Stoic philosophy and has no place in the Church. Melanchthon also rejects the ‘Stoic necessity’ of the Geneva theologians, which in Calvin’s eyes meant that Melanchthon had fallen away from biblical thought back into metaphysical rationalism. Justification as a free act of divine mercy is an event that cannot be brought under a philosophical concept. For Calvin, predestination and the eternal divine decree are the seal of the gratuity of this event, but many Lutherans see theorizing about predestination as a falling back into the search for metaphysical grounds. Luther himself, from 1528 on, played down the predestinarian excesses of De Servo Arbitrio. In a sermon of 1540 he says that to think that God does not give blessedness to everyone is despairing or godless. The believer looks to Christ and finds in him assurance of divine election. A preaching that undermines this confidence in a skeptical way must be problematic. Karl Barth’s judgment is telling: the thesis on the bondage of the will is not a decision for determinism: ‘that this is not clear in Luther’s De servo arbitrio is the objection that one cannot fail to make to this famous text, and also to the conceptions of Zwingli and Calvin’ (Kirchliche Dogmatik IV/2:559).

The Hidden God

Luther constructs, behind the phenomena of biblical revelation, a hidden story going on in the wings. He distinguishes between the God who is ‘preached, revealed, presented, and revered by us’ and the God who is ‘not preached, not revealed, not presented, not revered,’ with whom we have no concern (WA 18, 683). In other texts Luther celebrates the ‘joyous exchange,’ whereby Christ takes on our sins to share with us his own righteousness. But as if this good news were merely the surface, he now stresses that we must ‘keep separate the God who stands with us in exchange and sharing, insofar as he is preached and revered, and the God who is not revered and preached, that is, God as he is in his nature and majesty’ (685). We can rise above the preached God, but ‘nothing can rise above the God who is not honored, not preached, as he is in his nature and majesty, but all is under his mighty hand.’ But the Bible does proclaim the divine nature and majesty, and nowhere suggests that there is another way of seeing them. Luther goes on to say that we should not concern ourselves with the hidden divine majesty but only with God as he robes himself in his word.
This preached God seeks to take away sin and death but the hidden God ‘neither laments death nor takes it away, but effects life, death and everything whatsoever,’ as a blind, indifferent force. The preached God wants all to be saved, but the hidden God does not intervene to save them, for inscrutable reasons of his own (WA 18, 685-6). ‘God is light, and in him there is no darkness at all’ (1 Jn 1:5). If Luther were to say that this text speaks only of the revealed God, he would be radically undercutting the integrity of biblical revelation much as the ancient Gnostics did. He would probably say, ‘I am obliged to believe that God is light, but when I think of his hidden face, I am tempted by the idea that God is darkness.’ The allegedly hidden face actually impinges forcefully on his imagination in the doctrine of predestination: ‘I myself have more than once been offended by it even unto the depth and abyss of despair’ (WA 18, 719). Only when the light of glory is given to us will we understand ‘how God damns him who is unable by his own powers to do anything other than sin and be culpable’ (785). Meanwhile we walk by faith and must trust, that despite the dark appearances, God is good. ‘This is the supreme degree of faith, to believe him to be clement, who saves so few, damns so many, to believe him to be just, who of his own will made us such as are necessarily to be damned’ (633). Can this fearful God be really hidden, if we know so much about his activity? And whence do we know it? If from Scripture, then this is the revealed God, not a hidden one. Or we are dealing with a contradiction between two faces of the biblical God. In his struggle with this contradiction, Luther is a hero of faith. But it seems that we are not obliged to imitate this particular brand of heroism. Like Pascal and Kierkegaard he seems to have created unnecessary worries and anxieties for himself. Scripture invites us to dissolve this apparent contradiction. The difference between the good and the wicked in biblical scenes of judgment is always constructed in view of divine justice, and the accent generally falls on the positive side. The hope for universal salvation, strong in contemporary Catholic thinking and also in Barthian theology, has deeper roots than the simple reflection that if few are saved the Redemption was a flop. Belief in a God who reveals himself as an event of light and of love, and who does not lie, and who wishes all men to be saved, must overcome all other images of God that are produced by our anxiety, even if they seem legitimated by the letter of Scripture. In Luther the so-called hidden God overcomes the revealed God, which one might see as a regression to primeval heathen myth. No doubt the projection of the hidden God is not merely metaphysical speculation, but has deep roots in Luther’s experience, and in the problem of evil. But Manicheanism and Gnosis also emerged from deep existential experience. Augustine faced down Manicheanism with his insistence that everything that is, insofar as it is, is good, and that in consequence evil has no real existence, being merely a deficiency of being. He seems to have lost sight of this wholesome metaphysics in his Anti-Pelagian writings. The phenomenon of God’s hiddenness, as the Bible deals with it, in the darkness of Golgotha, does not point to any other God than the one whose light shines in this darkness. 
God as judge of sinners hides his gracious face under that of his anger, in what Luther calls the opus alienum dei, as opposed to the opus proprium, the proper work of God, his saving mercy. But there is no fundamental contradiction between the two; the word of the Law and the word of the Gospel reveal the same God. The bondage of our will is not divinely determined, and texts such as the hardening of Pharaoh’s heart must be interpreted in line with this. They are about the impotence and self-imprisonment of the sinner. As Buddhism and psychoanalysis show, we are all unconsciously in the grip of the Three Poisons, attachment, aversion, and delusion. Where is God to be located in reference to these chains? Not as the one who forges them, but as the one we meet when we can break through to spiritual freedom, the one enabling that breakthrough. The Law held us bound in impotence and guilt, not with the purpose of keeping us captive forever, but with a view to the Gospel that breaks these chains. Hence it is characterized relatively mildly as a ‘pedagogue’ (Gal 3:23-5). God is neither directly nor indirectly the cause of sin, Aquinas insists. He is the cause of our election, but ‘the first cause of the lack of grace is from us’ (Summa Theologiae II-II, q. 112, a.3, ad 2). There is an asymmetry between election and damnation here, whereas Luther’s God seems to elect or condemn indifferently, with a preference for the latter. In the name of a biblical literalism, Luther overrides a basic principle of Christian ontology, namely, that a good God can never create evil or be responsible for sin. Luther may have felt that in doing so he was overcoming metaphysics in a liberating way, but in reality he instead becomes the captive of a bad metaphysics. Confusing evangelical assurance with metaphysical certitude, he accuses anyone who queries his understanding of biblical texts of being a Pelagian and a doubter of divine omnipotence. Any objection to the arbitrariness he ascribes to God is seen not as a criticism of himself but as an offense against God or an attempt to replace the active, free, sovereign God of Scripture with the cold and indifferent God of Aristotle. Luther sticks to his rigid metaphysics, to provoke and annoy the minds of those whose dislike of his doctrine is interpreted as a sign of rebellious resentment against God. The Bible presents a God who is always working for the welfare and salvation of his creatures. Luther succumbs to a bad metaphysics when he probes behind this revelation, seeking its ultimate ground in the hidden depths of the divinity, which may even contradict the revealed, gracious face of God. But when believers think of the ultimate source of revelation, they should follow the lines of the biblical word that point back to the gracious mystery of the loving Father, rather than impose models of divine ineffability and incomprehensibility drawn from Platonism, or worse, from ancient ideas of cruel and inevitable Fate. The biblical sense of gracious divine mystery may seem vague and soft to the hard-headed philosopher, but in the case of God we are always learning the basic phenomena, and are never ready to overleap them to an ambitious speculation on the workings of the divine mind. Schleiermacher’s location of God as the ‘whence’ of our existence, of whose absolute goodness it would be senseless to doubt, is ultimately saner and more biblical than the image of sinners caught in the hands of an inscrutable, unpredictable, and angry deity.
Biblical passages such as Romans 9-11, which nourished so much predestinarian brooding from Augustine on, must be interpreted in this perspective of indubitable divine goodness. Dark pages such as John 8, which suggests that some are predestined by their very nature to be children of the devil, must be put aside, as we learn to know the gracious countenance of God ever better. ‘Scripture is its own interpreter,’ Luther taught; it is also its own corrector. As always, when one reviews a bitter controversy from church history, one is left wondering if there was any value in the discussion and whether it has not become entirely meaningless today. It is depressing to think that so much ink, not to mention blood, was spilt over such arcane disputes. The best way to salvage something from that past is to focus on the most vibrant and persuasive witness offered by the disputants. Stefan Zweig has done this for Erasmus, in a monograph of 1935, where he upholds Erasmus’ tolerance and humanity over against the barbaric fanaticism of Luther. Karl Barth, known as a critic of Luther, is also his best defender, in that he quotes him two hundred times (often from De Servo Arbitrio) in the first part of the Church Dogmatics, using Luther to light up the experience of encountering the Word of God. For despite his vehemence, misstatements, and exaggerations, Luther did attest to the power of the biblical word, and did draw from it a luminous clarification of the gospel. His tragic vision of human weakness and bondage has enough truth to ensure its perpetual relevance, and his defense of the sovereignty of grace, in the spirit of Augustine, retains its power to free us from the prison of anxious Pelagian efforts at self-justification. Published in The Japan Mission Journal, June, 2012
Thank The Simple Wasp For That Complex Glass Of Wine

Originally published on Mon October 22, 2012 9:33 am

Those big scary flying insects, whose stings can be especially painful, may be the secret to the wonderful complex aroma and flavor of wine. "Wasps are indeed one of wine lovers' best friends," says Duccio Cavalieri, a professor of microbiology at the University of Florence in Italy. Cavalieri and his colleagues discovered that these hornets and wasps bite the grapes and help start the fermentation while grapes are still on the vines. They do that by spreading a yeast called Saccharomyces cerevisiae — commonly known as brewer's yeast and responsible for wine, beer and bread fermentation — in their guts. When the wasps bite into the fruit, they leave some of that yeast behind. Cavalieri says one of the reasons the discovery is so exciting for him is that it's an example of just how connected the natural world is and how humans rely on this interconnection in ways we simply cannot perceive. "It's important because it's telling to me it's crucial to look at conservation and the study of biodiversity," says Cavalieri, one of the authors who published his findings in the journal Proceedings of the National Academy of Sciences recently. "Everything is linked," he adds. Of course, Cavalieri says, winemakers can add yeast later. But wines would not taste the same without wasps. Different yeasts applied at different times have a big impact on flavors. The wasps also introduce other microorganisms to the grapes, which add flavors to the wine. "One of the most beautiful things of wine is the fact that basically it's complex; it's made of several parts and it communicates to several parts of your brain," he says, which could be lost without the wasps and hornets. Cavalieri comes by his interest in wine naturally. He's from a family of winemakers in the Chianti region of Italy. He first had the inkling of hornets' special role when he saw them piercing the skin of grapes during field research in the region 15 years ago. Insects have long helped out with wine and other crops; we just didn't know why. At least since the time of the ancient Romans, winemakers have planted flowers near their vines to lure certain insects. The researchers were able to unwrap the mystery of the insects' role by using DNA sequencing techniques to analyze the genes of the yeast, then tracing them to the guts of wasps. They even did a lab experiment to see if hornets could pass the yeast to their offspring, and they did. Other insects and birds also carry the yeast, Cavalieri says. But hornets seem to play a special role because they both harbor the yeast over winter and can pass it along to their offspring. You can imagine a vineyard might be interested in pest control — but perhaps it should be careful about which bugs it considers pests. Evolutionary biologist Anne Pringle of Harvard, who was not involved in the study, says the findings have two strong messages: Great wines need bugs and people still know almost nothing about ecology. "If you'd like to have your grapes fermented by local yeasts, which I think many vineyards do, then you have to have these insects around," she says.

STEVE INSKEEP, HOST: Before the next time you take a sip of wine, you might want to make a toast to wasps. Those big scary insects play a key role in making wine, we're told.
NPR's Elizabeth Shogren reports new research reveals their special function and suggests that preserving biodiversity might be more important than you think.

ELIZABETH SHOGREN, BYLINE: Years ago, Italian microbiologist Duccio Cavalieri was doing field research in the vineyards of the Chianti region, and he noticed something special about the relationship between grapes and wasps, particularly a type of wasp called the European hornet.

DUCCIO CAVALIERI: If you are looking at the berry and who was eating the berry, who could actually eat the berry were these big hornets.

SHOGREN: Other insects couldn't pierce grape skin. Cavalieri had an inkling that he was observing an important secret about wine. It took nearly 15 years and some sophisticated DNA sequencing to prove his hunch.

CAVALIERI: Wasps are indeed one of wine lovers' best friends.

SHOGREN: It turns out wasps have yeast in their bellies and they regurgitate it into the grapes they bite. Yeast is the stuff that turns grape juice into wine. The type of yeast a winemaker uses will affect the way it tastes. So the yeast in the wasp's gut gets passed into the wine and imbues it with the flavor of the region.

CAVALIERI: Since the times of the Romans we have realized that it was important to improve some qualities and characteristics of the wine to have flowers and insects around the vineyard. And now we really know more about it.

SHOGREN: Cavalieri comes from a family of winemakers in the Chianti region, so he's delighted to be able to unveil one of the mysteries of wine. He published his team's results in the Proceedings of the National Academy of Sciences. He says what's really important to him about his discovery is that it hints at just how interconnected the natural world is.

CAVALIERI: I think everything is linked.

SHOGREN: Those links aren't always apparent to us. Winemakers never knew that wasps were kicking off the fermentation process for them.

CAVALIERI: Yet, if we lose this, we lose complexity. And one of the most beautiful things of wine is the fact that basically it's complex, it's made of several parts and it communicates to several parts of your brain.

SHOGREN: Harvard evolutionary biologist Anne Pringle wasn't involved in the research, but she says it sends a warning to wine growers who might be inclined to use pesticides to get rid of wasps and hornets.

ANNE PRINGLE: If you'd like to have your grapes fermented by local yeasts, which I think many vineyards do, then you have to have these insects around.

SHOGREN: Pringle says there's a larger message.

PRINGLE: Personally, what this tells me is how little we know about how the world works and we're running out of time.

SHOGREN: The natural world is changing quickly because of stresses like climate change, invasive species, habitat loss and pollution. And Pringle says many species already have been lost or may soon vanish before we learn what magic they perform for Earth's ecology. Elizabeth Shogren, NPR News.

Transcript provided by NPR, Copyright National Public Radio.
Pelvic inflammatory disease (PID) is a serious infection of the female reproductive system that can develop from an untreated sexually transmitted disease (STD). In most cases, it occurs when bacteria from the STD in the vagina or cervix move into the uterus and upper genital tract. The most common organisms that lead to PID are gonorrhea and chlamydia, both of which are highly contagious STDs. Untreated PID can damage the fallopian tubes, ovaries, and uterus, which can lead to chronic pelvic pain and serious damage to the reproductive system. PID is the most common, preventable cause of infertility, and can also lead to ectopic pregnancies. The good news is that when PID causes symptoms, it can be diagnosed and treated with antibiotics. The essential part is to detect it before it leads to serious health problems. However, since symptoms can be mild, many cases of PID are unrecognized and, therefore, may be untreated if people aren't screened for STDs. So women who are sexually active should take precautions to keep from contracting STDs, and eventually PID, and be screened for STDs regularly. Signs and symptoms of PID can range from mild to severe, and can appear weeks after exposure to an STD. Sometimes, there are no symptoms at all. When symptoms of PID occur, they may include:
- abnormal vaginal discharge, possibly with an odor
- pain during urination or more frequent urination
- aching pain in the lower abdomen
- pain in the upper abdomen
- fever and chills
- nausea and vomiting
- irregular menstrual bleeding
- pain during sex

If your daughter complains of any symptoms associated with PID, she should see her doctor as soon as possible. You should be especially alert to these symptoms if she has had PID before because they may signal a repeat infection. The STDs that can lead to PID are very contagious. All sexual partners of someone who is diagnosed with chlamydia or gonorrhea should be notified and treated with antibiotics, even if they have no signs or symptoms. If PID is not treated or goes unrecognized, it can continue to spread through a girl's reproductive organs. Untreated PID may lead to long-term reproductive problems, including:
- Scarring in the ovaries, fallopian tubes, and uterus. Widespread scarring may lead to infertility (the inability to have a baby) and chronic pelvic pain. A teen girl or woman who has had PID multiple times has more of a chance of being infertile.
- Ectopic pregnancy. If someone who has had PID does get pregnant, scarring of the fallopian tubes may cause the fertilized egg to implant in one of the tubes rather than in the uterus. The fetus would then begin to develop in the tube, where there is no room for it to keep growing. This is called an ectopic pregnancy. An untreated ectopic pregnancy could cause the fallopian tube to burst suddenly, which might lead to life-threatening bleeding.
- Tubo-ovarian abscess (TOA). A TOA is a collection of bacteria, pus, and fluid that occurs in the ovary and fallopian tube. A woman with a TOA often looks sick and has a fever and pain that makes it difficult to walk. The abscess will be treated in the hospital with antibiotics, and surgery may be needed to remove it.

Because STDs can lead to PID, the best way to prevent it is to abstain from having sex (abstinence). Sexual contact with more than one partner or with someone who has more than one partner increases the risk of contracting any STD. When properly and consistently used, condoms decrease the risk of STDs.
Latex condoms provide greater protection than natural-membrane condoms. The female condom, made of polyurethane, is also considered effective against STDs. Although birth control pills offer no protection against STDs, they may provide some protection against PID by causing the body to create thicker cervical mucus, making it more difficult for bacteria to reach the upper genital tract. Douching can actually increase a female's risk of contracting STDs and developing PID because it can change the natural flora of the vagina and flush bacteria higher into the genital tract.

A teen who is being treated for PID also should be tested for other STDs, and should have time alone with the doctor to openly discuss things like sexual activity. Not all teens will be comfortable talking with parents about these issues. But it's important to encourage them to talk to a trusted adult who can provide the facts.

PID can be treated with antibiotics, which kill the bacteria that cause the disease. If damage has already occurred in the reproductive organs, antibiotics will not be able to reverse it but will stop further spread of the infection. In some cases, girls with PID do have to be hospitalized, particularly if they develop a high fever, severe nausea, and vomiting; if they need intravenous antibiotics; or if the diagnosis is uncertain.

In trying to diagnose PID, the doctor will likely ask questions about your daughter's medical history, method of birth control, and her sexual activity and that of her partner. The doctor may then perform a pelvic exam to find out if her reproductive organs are tender or swollen and to identify the location of the infection. It’s not always easy to diagnose PID. Some other conditions, like appendicitis, can cause symptoms similar to PID. During the pelvic exam, the doctor may take samples to look for the germs that cause gonorrhea and chlamydia infections. Blood tests also may be done. Other procedures may be required to determine whether the fallopian tubes are swollen or if an abscess (collection of pus) is present.

Prompt treatment of PID and follow-up care can cure the infection and prevent complications. Rest can help your daughter recover. Hot baths and heating pads applied to the lower back and abdomen can help relieve discomfort. Your daughter should finish all medicines as prescribed because the PID infection may continue even after the symptoms disappear. To prevent re-infection, her partner also should be examined and treated. It's important to abstain from sex until treatment of both partners is completed and the doctor determines that the infection is gone.

If your teen is thinking of becoming sexually active or already has started having sex, it's important to discuss it. Make sure your teen knows how STDs can be spread (during anal, oral, or vaginal sex) and that these infections often don't have symptoms, so a partner might have an STD without knowing it. It can be difficult to talk about STDs, but just as with any other medical issue, teens need this information to stay safe and healthy. Provide the facts, and let your child know where you stand.

It's also important that all teens have regular full physical exams — which can include screening for STDs. Your teen may want to see a gynecologist or a specialist in adolescent medicine to talk about sexual health issues. Community health organizations and sexual counseling centers in your local area also may be able to offer some guidance.
fwe2-CC-MAIN-2013-20-44015000
Yesterday, your son sounded like he's always sounded — like a boy. But today, you heard that first crack in his voice. He's started puberty and several things about him are changing. Along with obvious changes in physical appearance, his voice will start sounding a whole lot different. For a while, he might have difficulty controlling it and he'll make all sorts of odd noises when trying to speak.

It's the larynx (or voice box) that's causing all that noise. As the body goes through puberty, the larynx grows larger and thicker. It happens in both boys and girls, but the change is more evident in boys. Girls' voices only deepen by a couple of tones and the change is barely noticeable. Boys' voices, however, start to get significantly deeper.

The Science Behind the Squeaking

The larynx, which is located in the throat, plays the major role in creating the sound of the voice. Two muscles, or vocal cords, are stretched across the larynx and they're kind of like rubber bands. When a person speaks, air rushes from the lungs and makes the vocal cords vibrate, which in turn produces the sound of the voice. The pitch of the sound produced is controlled by how tightly the vocal cord muscles contract as the air from the lungs hits them. If you've ever plucked a small, thin rubber band, you've heard the high-pitched twang it makes when it's stretched. A thicker rubber band makes a deeper, lower-pitched twang. It's the same process with vocal cords.

Before a boy reaches puberty, his larynx is pretty small and his vocal cords are kind of small and thin. That's why his voice is higher than an adult's. But as he goes through puberty, the larynx gets bigger and the vocal cords lengthen and thicken, so his voice gets deeper. Along with the larynx, the vocal cords grow significantly longer and become thicker. In addition, the facial bones begin to grow. Cavities in the sinuses, the nose, and the back of the throat grow bigger, creating more space in the face that gives the voice more room to resonate.

As a boy's body adjusts to this changing equipment, his voice may "crack" or "break." This process lasts only a few months. Once the larynx is finished growing, your son's voice won't make those unpredictable sounds. Those croaks and squeaks in a boy's voice are just a part of this normal and natural stage of growth. As a boy gets used to these big changes, his voice can be difficult to handle and it may take a lot of effort to keep it in control. Just as he's getting used to the big changes in his body, he has to adapt to the sound of what he's saying. As puberty continues, his body adjusts to the new size of the larynx, and the croaks and squeaks begin to taper off. After that, the new, deeper voice becomes much more stable and easier to control.

Along with several other obvious changes in the way he looks, you might recognize a significant change in appearance in a boy's throat area. When his larynx grows bigger, it tilts to a different angle inside the neck and part of it sticks out at the front of the throat. This is the "Adam's apple." In girls, the larynx also grows bigger but not as much as a boy's does, which is why girls don't have prominent Adam's apples.

Everyone's timetable is different, so some boys' voices might start to change earlier and some might start a little later. A boy's voice typically begins to change between ages 11 and 14½, usually just after the major growth spurt. Some boys' voices might change gradually, whereas others' might change quickly.
If your son is concerned, stressed, or embarrassed about the sound of his voice, let him know that it's only temporary and that everyone goes through it to some extent. After a few months, he'll likely have a resonant, deep, and full voice just like an adult!
fwe2-CC-MAIN-2013-20-44016000
Lisa's son Jack had always been a handful. Even as a preschooler, he would tear through the house like a tornado, shouting, roughhousing, and climbing the furniture. No toy or activity ever held his interest for more than a few minutes and he would often dart off without warning, seemingly unaware of the dangers of a busy street or a crowded mall. It was exhausting to parent Jack, but Lisa hadn't been too concerned back then. Boys will be boys, she figured. But at age 8, he was no easier to handle. It was a struggle to get Jack to settle down long enough to complete even the simplest tasks, from chores to homework. When his teacher's comments about his inattention and disruptive behavior in class became too frequent to ignore, Lisa took Jack to the doctor, who recommended an evaluation for attention deficit hyperactivity disorder (ADHD).

ADHD is a common behavioral disorder that affects an estimated 8% to 10% of school-age children. Boys are about three times more likely than girls to be diagnosed with it, though it's not yet understood why. Kids with ADHD act without thinking, are hyperactive, and have trouble focusing. They may understand what's expected of them but have trouble following through because they can't sit still, pay attention, or attend to details.

Of course, all kids (especially younger ones) act this way at times, particularly when they're anxious or excited. But the difference with ADHD is that symptoms are present over a longer period of time and occur in different settings. They impair a child's ability to function socially, academically, and at home. The good news is that with proper treatment, kids with ADHD can learn to successfully live with and manage their symptoms.

ADHD used to be known as attention deficit disorder, or ADD. In 1994, it was renamed ADHD and broken down into three subtypes, each with its own pattern of behaviors:

1. An inattentive type, with signs that include:
- inability to pay attention to details or a tendency to make careless errors in schoolwork or other activities
- difficulty with sustained attention in tasks or play activities
- apparent listening problems
- difficulty following instructions
- problems with organization
- avoidance or dislike of tasks that require mental effort
- tendency to lose things like toys, notebooks, or homework
- forgetfulness in daily activities

2. A hyperactive-impulsive type, with signs that include:
- fidgeting or squirming
- difficulty remaining seated
- excessive running or climbing
- difficulty playing quietly
- always seeming to be "on the go"
- blurting out answers before hearing the full question
- difficulty waiting for a turn or in line
- problems with interrupting or intruding

3. A combined type, which involves a combination of the other two types and is the most common.

Although it can be challenging to raise kids with ADHD, it's important to remember they aren't "bad," "acting out," or being difficult on purpose. And they have difficulty controlling their behavior without medication or behavioral therapy.

Because there's no test that can determine the presence of ADHD, a diagnosis depends on a complete evaluation. Many children and adolescents diagnosed with ADHD are evaluated and treated by primary care doctors including pediatricians and family practitioners, but your child may also be referred to one of several different specialists (psychiatrists, psychologists, neurologists), especially when the diagnosis is in doubt or if there are other concerns, such as Tourette syndrome, a learning disability, anxiety, or depression.
To be considered for a diagnosis of ADHD:
- a child must display behaviors from one of the three subtypes before age 7
- these behaviors must be more severe than in other kids the same age
- the behaviors must last for at least 6 months
- the behaviors must occur in and negatively affect at least two areas of a child's life (such as school, home, daycare settings, or friendships)

The behaviors must also not be linked only to stress at home. Kids who have experienced a divorce, a move, an illness, a change in school, or another significant life event may suddenly begin to act out or become forgetful. To avoid a misdiagnosis, it's important to consider whether these factors played a role in the onset of symptoms.

First, your child's doctor may perform a physical examination and take a medical history that includes questions about any concerns and symptoms, your child's past health, your family's health, any medications your child is taking, any allergies your child may have, and other issues. The doctor may also check hearing and vision so other medical conditions can be ruled out. Because some emotional conditions, such as extreme stress, depression, and anxiety, can also look like ADHD, you'll likely be asked to fill out questionnaires to help rule them out.

You'll be asked many questions about your child's development and behaviors at home, school, and among friends. Other adults who see your child regularly (like teachers, who are often the first to notice ADHD symptoms) probably will be consulted, too. An educational evaluation, which usually includes a school psychologist, may also be done. It's important for everyone involved to be as honest and thorough as possible about your child's strengths and weaknesses.

ADHD is not caused by poor parenting, too much sugar, or vaccines. ADHD has biological origins that aren't yet clearly understood. No single cause has been identified, but researchers are exploring a number of possible genetic and environmental links. Studies have shown that many kids with ADHD have a close relative who also has the disorder. Although experts are unsure whether this is a cause of the disorder, they have found that certain areas of the brain are about 5% to 10% smaller in size and activity in kids with ADHD. Chemical changes in the brain also have been found.

Research also links smoking during pregnancy to later ADHD in a child. Other risk factors may include premature delivery, very low birth weight, and injuries to the brain at birth. Some studies have even suggested a link between excessive early television watching and future attention problems. Parents should follow the American Academy of Pediatrics' (AAP) guidelines, which say that children under 2 years old should not have any "screen time" (TV, DVDs or videotapes, computers, or video games) and that kids 2 years and older should be limited to 1 to 2 hours per day, or less, of quality television programming.

One of the difficulties in diagnosing ADHD is that it's often found in conjunction with other problems. These are called coexisting conditions, and about two thirds of kids with ADHD have one. The most common coexisting conditions are:

Oppositional Defiant Disorder (ODD) and Conduct Disorder (CD)

At least 35% of kids with ADHD also have oppositional defiant disorder, which is characterized by stubbornness, outbursts of temper, and acts of defiance and rule breaking. Conduct disorder is similar but features more severe hostility and aggression.
Kids who have conduct disorder are more likely to get in trouble with authority figures and, later, possibly with the law. Oppositional defiant disorder and conduct disorder are seen most commonly with the hyperactive and combined subtypes of ADHD.

About 18% of kids with ADHD, particularly the inattentive subtype, also experience depression. They may feel inadequate, isolated, frustrated by school failures and social problems, and have low self-esteem.

Anxiety disorders affect about 25% of kids with ADHD. Symptoms include excessive worry, fear, or panic, which can also lead to physical symptoms such as a racing heart, sweating, stomach pains, and diarrhea. Other forms of anxiety that can accompany ADHD are obsessive-compulsive disorder and Tourette syndrome, as well as motor or vocal tics (movements or sounds that are repeated over and over). A child who has symptoms of these other conditions should be evaluated by a specialist.

About half of all kids with ADHD also have a specific learning disability. The most common learning problems are with reading (dyslexia) and handwriting. Although ADHD isn't categorized as a learning disability, its interference with concentration and attention can make it even more difficult for a child to perform well in school.

If your child has ADHD and a coexisting condition, the doctor will carefully consider that when developing a treatment plan. Some treatments are better than others at addressing specific combinations of symptoms.

ADHD can't be cured, but it can be successfully managed. Your child's doctor will work with you to develop an individualized, long-term plan. The goal is to help a child learn to control his or her own behavior and to help families create an atmosphere in which this is most likely to happen. In most cases, ADHD is best treated with a combination of medication and behavior therapy. Any good treatment plan will require close follow-up and monitoring, and your doctor may make adjustments along the way. Because it's important for parents to actively participate in their child's treatment plan, parent education is also considered an important part of ADHD management.

Sometimes the symptoms of ADHD become less severe as a person grows older. Hyperactivity tends to lessen as people grow up, although the problems with organization and attention often remain. More than half of kids who have ADHD will continue to have symptoms as young adults.

Several different types of medications may be used to treat ADHD:
- Stimulants are the best-known treatments — they've been used for more than 50 years in the treatment of ADHD. Some require several doses per day, each lasting about 4 hours; some last up to 12 hours. Possible side effects include decreased appetite, stomachache, irritability, and insomnia. There's currently no evidence of long-term side effects.
- Nonstimulants represent a good alternative to stimulants, or are sometimes used along with a stimulant to treat ADHD. The first nonstimulant was approved for treating ADHD in 2003. Nonstimulants may have fewer side effects than stimulants and can last up to 24 hours.
- Antidepressants are sometimes a treatment option; however, in 2004 the U.S. Food and Drug Administration (FDA) issued a warning that these drugs may lead to a rare increased risk of suicide in children and teens. If an antidepressant is recommended for your child, be sure to discuss these risks with your doctor.

Medications can affect kids differently, and a child may respond well to one but not another.
When determining the correct treatment, the doctor might try various medications in various doses, especially if your child is being treated for ADHD along with another disorder. Research has shown that medications used to help curb impulsive behavior and attention difficulties are more effective when combined with behavioral therapy.

Behavioral therapy attempts to change behavior patterns by:
- reorganizing a child's home and school environment
- giving clear directions and commands
- setting up a system of consistent rewards for appropriate behaviors and negative consequences for inappropriate ones

Here are examples of behavioral strategies that may help a child with ADHD:
- Create a routine. Try to follow the same schedule every day, from wake-up time to bedtime. Post the schedule in a prominent place, so your child can see what's expected throughout the day and when it's time for homework, play, and chores.
- Get organized. Put schoolbags, clothing, and toys in the same place every day so your child will be less likely to lose them.
- Avoid distractions. Turn off the TV, radio, and computer games, especially when your child is doing homework.
- Limit choices. Offer a choice between two things (this outfit, meal, toy, etc., or that one) so that your child isn't overwhelmed and overstimulated.
- Change your interactions with your child. Instead of long-winded explanations and cajoling, use clear, brief directions to remind your child of responsibilities.
- Use goals and rewards. Use a chart to list goals and track positive behaviors, then reward your child's efforts. Be sure the goals are realistic (think baby steps rather than overnight success).
- Discipline effectively. Instead of yelling or spanking, use timeouts or removal of privileges as consequences for inappropriate behavior. Younger kids may simply need to be distracted or ignored until they display better behavior.
- Help your child discover a talent. All kids need to experience success to feel good about themselves. Finding out what your child does well — whether it's sports, art, or music — can boost social skills and self-esteem.

Currently, the only ADHD therapies that have been proven effective in scientific studies are medications and behavioral therapy. But your doctor may recommend additional treatments and interventions depending on your child's symptoms and needs. Some kids with ADHD, for example, may also need special educational interventions such as tutoring, occupational therapy, etc. Every child's needs are different.

A number of other alternative therapies are promoted and tried by parents, including megavitamins, body treatments, diet manipulation, allergy treatment, chiropractic treatment, attention training, visual training, and traditional one-on-one "talking" psychotherapy. However, scientific research has not found them to be effective, and most have not been studied carefully, if at all. Parents should always be wary of any therapy that promises an ADHD "cure." If you're interested in trying something new, speak with your doctor first.

Parenting a child with ADHD often brings special challenges. Kids with ADHD may not respond well to typical parenting practices. Also, because ADHD tends to run in families, parents may also have some problems with organization and consistency themselves and need active coaching to help learn these skills.
Experts recommend parent education and support groups to help family members accept the diagnosis and to teach them how to help kids organize their environment, develop problem-solving skills, and cope with frustrations. Training can also teach parents to respond appropriately to a child's most trying behaviors with calm disciplining techniques. Individual or family counseling can also be helpful.

As your child's most important advocate, you should become familiar with your child's medical, legal, and educational rights. Kids with ADHD are eligible for special services or accommodations at school under the Individuals with Disabilities Education Act (IDEA) and an anti-discrimination law known as Section 504. Keep in touch with teachers and school officials to monitor your child's progress.

In addition to using routines and a clear system of rewards, here are some other tips to share with teachers for classroom success:
- Reduce seating distractions. Lessening distractions might be as simple as seating your child near the teacher instead of near the window.
- Use a homework folder for parent-teacher communications. The teacher can include assignments and progress notes, and you can check to make sure all work is completed on time.
- Break down assignments. Keep instructions clear and brief, breaking down larger tasks into smaller, more manageable pieces.
- Give positive reinforcement. Always be on the lookout for positive behaviors. Ask the teacher to offer praise when your child stays seated, doesn't call out, or waits his or her turn instead of criticizing when he or she doesn't.
- Teach good study skills. Underlining, note taking, and reading out loud can help your child stay focused and retain information.
- Supervise. Check that your child goes to and comes from school with the correct books and materials. Sometimes kids are paired with a buddy who can help them stay on track.
- Be sensitive to self-esteem issues. Ask the teacher to provide feedback to your child in private, and avoid asking your child to perform a task in public that might be too difficult.
- Involve the school counselor or psychologist. He or she can help design behavioral programs to address specific problems in the classroom.

Helping Your Child

You're a stronger advocate for your child when you foster good partnerships with everyone involved in your child's treatment — that includes teachers, doctors, therapists, and even other family members. Take advantage of all the support and education that's available, and you'll help your child navigate toward success.
fwe2-CC-MAIN-2013-20-44017000
The percentage of overweight children in the United States is growing at an alarming rate, with 1 out of 3 kids now considered overweight or obese. Many kids are spending less time exercising and more time in front of the TV, computer, or video-game console. And today's busy families have fewer free moments to prepare nutritious, home-cooked meals. From fast food to electronics, quick and easy is the reality for many people. Preventing kids from becoming overweight means adapting the way your family eats and exercises, and how you spend time together. Helping kids lead healthy lifestyles begins with parents who lead by example.

Is Your Child Overweight?

Body mass index (BMI) uses height and weight measurements to estimate a person's body fat: it is calculated as weight in kilograms divided by the square of height in meters. But calculating and interpreting BMI on your own can be complicated, so an easier way is to use a BMI calculator (a short sketch of the arithmetic appears at the end of this section). Once your child's BMI is known, it can be plotted on a standard BMI chart. Kids ages 2 to 19 fall into one of four categories:
- underweight: BMI below the 5th percentile
- normal weight: BMI at or above the 5th percentile and below the 85th percentile
- overweight: BMI at or above the 85th percentile and below the 95th percentile
- obese: BMI at or above the 95th percentile

BMI calculations aren't used to estimate body fat in babies and young toddlers. For kids younger than 2, doctors use weight-for-length charts to determine how a baby’s weight compares with his or her length. Any child who falls at or above the 85th percentile may be considered overweight.

BMI is not a perfect measure of body fat and can be misleading in some situations. For example, a muscular person may have a high BMI without being overweight (extra muscle adds to body weight — but not fatness). Also, BMI might be difficult to interpret during puberty when kids are experiencing periods of rapid growth. It's important to remember that BMI is usually a good indicator — but is not a direct measurement — of body fat.

If you're worried that your child or teen may be overweight, make an appointment with your doctor, who can assess eating and activity habits and make suggestions on how to make positive changes. The doctor also may decide to screen for some of the medical conditions that can be associated with obesity. Depending on your child's BMI (or weight-for-length measurement), age, and health, the doctor may refer you to a registered dietitian for additional advice and, possibly, might recommend a comprehensive weight management program.

Obesity increases the risk for serious health conditions like type 2 diabetes, high blood pressure, and high cholesterol — all once considered exclusively adult diseases. Obese kids also may be prone to low self-esteem that stems from being teased, bullied, or rejected by peers.
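For readers curious about the arithmetic behind such a calculator, here is a minimal sketch in Python. It assumes metric units, and it takes the BMI-for-age percentile as an input you read off a standard CDC growth chart rather than computing it (the charts themselves aren't reproduced here); the category cutoffs are the ones listed above, and the function names are illustrative, not from any particular tool.

```python
# A minimal sketch of the arithmetic behind a BMI calculator.
# The percentile itself still has to be read off a standard CDC
# growth chart for the child's age and sex; this code only applies
# the formula and the category cutoffs quoted in the text above.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

def weight_category(percentile: float) -> str:
    """Map a BMI-for-age percentile (ages 2 to 19) to the four categories."""
    if percentile < 5:
        return "underweight"
    elif percentile < 85:
        return "normal weight"
    elif percentile < 95:
        return "overweight"
    else:
        return "obese"

# Example: a child weighing 30 kg who is 1.30 m tall.
print(round(bmi(30, 1.30), 1))  # 17.8
print(weight_category(88))      # overweight
```

The division is the easy part; what a real calculator automates is the percentile lookup, which depends on the child's age and sex.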
Kids who are unhappy with their weight may be more likely than average-weight kids to:
- develop unhealthy dieting habits and eating disorders, such as anorexia nervosa and bulimia
- be more prone to depression
- be at risk for substance abuse

Overweight and obese kids are at risk for developing medical problems that affect their present and future health and quality of life, including:
- high blood pressure, high cholesterol and abnormal blood lipid levels, insulin resistance, and type 2 diabetes
- bone and joint problems
- shortness of breath that makes exercise, sports, or any physical activity more difficult and may aggravate the symptoms or increase the chances of developing asthma
- restless or disordered sleep patterns, such as obstructive sleep apnea
- tendency to mature earlier (overweight kids may be taller and more sexually mature than their peers, raising expectations that they should act as old as they look, not as old as they are; overweight girls may have irregular menstrual cycles and fertility problems in adulthood)
- liver and gall bladder disease

Cardiovascular risk factors present in childhood (including high blood pressure, high cholesterol, and diabetes) can lead to serious medical problems like heart disease, heart failure, and stroke as adults. Preventing or treating overweight and obesity in kids may reduce the risk of developing cardiovascular disease as they get older.

A number of factors contribute to becoming overweight. Genetics, lifestyle habits, or a combination of both may be involved. In some instances, endocrine problems, genetic syndromes, and medications can be associated with excessive weight gain.

Much of what we eat is quick and easy — from fat-laden fast food to microwave and prepackaged meals. Daily schedules are so jam-packed that there's little time to prepare healthier meals or to squeeze in some exercise. Portion sizes, in the home and out, have grown greatly. Plus, now more than ever life is sedentary — kids spend more time playing with electronic devices, from computers to handheld video game systems, than actively playing outside.

Television is a major culprit. Kids younger than 6 spend an average of 2 hours a day in front of a screen, mostly watching TV, DVDs, or videos. Older kids and teens average 4.5 hours a day watching TV, DVDs, or videos. When computer use and video games are included, time spent in front of a screen increases to over 7 hours a day! Kids who watch more than 4 hours a day are more likely to be overweight compared with kids who watch 2 hours or less. Not surprisingly, TV in the bedroom is also linked to increased likelihood of being overweight. In other words, for many kids, once they get home from school, virtually all of their free time is spent in front of one screen or another.

The American Academy of Pediatrics (AAP) recommends that kids over 2 years old not spend more than 1-2 hours a day in front of a screen. The AAP also discourages any screen time for children younger than 2 years old.

Many kids don't get enough physical activity. Although physical education (PE) in schools can help kids get up and moving, more and more schools are eliminating PE programs or cutting down the time spent on fitness-building activities. One study showed that gym classes offered third-graders just 25 minutes of vigorous activity each week. Current guidelines recommend that kids over 2 years old get at least 60 minutes of moderate to vigorous physical activity on most, preferably all, days of the week.
Babies and toddlers should be active for 15 minutes every hour (a total of 3 hours for every 12 waking hours) each day.

Genetics also play a role — genes help determine body type and how your body stores and burns fat just like they help determine other traits. Genes alone, however, cannot explain the current obesity crisis. Because both genes and habits can be passed down from one generation to the next, multiple members of a family may struggle with weight. People in the same family tend to have similar eating patterns, maintain the same levels of physical activity, and adopt the same attitudes toward being overweight. Studies have shown that a child's risk of obesity greatly increases if one or both parents are overweight or obese.

The key to keeping kids of all ages at a healthy weight is taking a whole-family approach. It's the "practice what you preach" mentality. Make healthy eating and exercise a family affair. Get your kids involved by letting them help you plan and prepare healthy meals, and take them along when you go grocery shopping so they can learn how to make good food choices. And avoid falling into these common food/eating behavior traps:
- Don't reward kids for good behavior or try to stop bad behavior with sweets or treats. Come up with other solutions to modify their behavior.
- Don't maintain a clean-plate policy. Be aware of kids' hunger cues. Even babies who turn away from the bottle or breast send signals that they're full. If kids are satisfied, don't force them to continue eating. Reinforce the idea that they should only eat when they're hungry.
- Don't talk about "bad foods" or completely eliminate all sweets and favorite snacks from kids' diets. Kids may rebel and overeat these forbidden foods outside the home or sneak them in on their own.

Recommendations by Age
- Birth to age 1: In addition to its many health benefits, breastfeeding may help prevent excessive weight gain. Though the exact mechanism is not known, breastfed babies may be more able to control their own intake and follow their own internal hunger cues.
- Ages 1 to 5: Start good habits early. Help shape food preferences by offering a variety of healthy foods. Encourage kids' natural tendency to be active and help them build on developing skills.
- Ages 6 to 12: Encourage kids to be physically active every day, whether through an organized sports team or a pick-up game of soccer during recess. Keep your kids active at home, too, through everyday activities like walking and playing in the yard. Let them be more involved in making good food choices, such as packing lunch.
- Ages 13 to 18: Teens like fast food, but try to steer them toward healthier choices like grilled chicken sandwiches, salads, and smaller sizes. Teach them how to prepare healthy meals and snacks at home. Encourage teens to be active every day.

Additional recommendations for kids of all ages: Cut down on TV, computer, and video game time and discourage eating while watching the tube. Serve a variety of healthy foods and eat meals together as often as possible. Encourage kids to have at least five servings of fruits and vegetables a day, limit sugar-sweetened beverages, and eat breakfast every day.

If you eat well, exercise regularly, and incorporate healthy habits into your family's daily life, you're modeling a healthy lifestyle for your kids that will last. Talk to them about the importance of eating well and being active, but make it a family affair that will become second nature for everyone.
Most of all, let your kids know you love them — no matter what their weight — and that you want to help them be happy and healthy.
fwe2-CC-MAIN-2013-20-44019000
When you hear of plastic surgery, what do you think of? A Hollywood star trying to delay the effects of aging? People who want to change the size of their stomachs, breasts, or other body parts because they see it done so easily on TV? Those are common images of plastic surgery, but what about the 4-year-old boy who has his chin rebuilt after a dog bit him? Or the young woman who has the birthmark on her forehead lightened with a laser?

What Is Plastic Surgery?

Just because the name includes the word "plastic" doesn't mean patients who have this surgery end up with a face full of fake stuff. The name isn't taken from the synthetic substance but from the Greek word plastikos, which means to form or mold (and which gives the material plastic its name as well). Plastic surgery is a special type of surgery that can involve both a person's appearance and ability to function. Plastic surgeons strive to improve patients' appearance and self-image through both reconstructive and cosmetic procedures.

- Reconstructive procedures correct defects on the face or body. These include physical birth defects like cleft lips and palates and ear deformities, traumatic injuries like those from dog bites or burns, or the aftermath of disease treatments like rebuilding a woman's breast after surgery for breast cancer.
- Cosmetic (also called aesthetic) procedures alter a part of the body that the person is not satisfied with. Common cosmetic procedures include making the breasts larger (augmentation mammoplasty) or smaller (reduction mammoplasty), reshaping the nose (rhinoplasty), and removing pockets of fat from specific spots on the body (liposuction). Some cosmetic procedures aren't even surgical in the way that most people think of surgery — that is, cutting and stitching. For example, the use of special lasers to remove unwanted hair and sanding skin to improve severe scarring are two such treatments.

Why Do Teens Get Plastic Surgery?

Most teens don't, of course. But some do. Interestingly, the American Society of Plastic Surgeons (ASPS) reports a difference in the reasons teens give for having plastic surgery and the reasons adults do: Teens view plastic surgery as a way to fit in and look acceptable to friends and peers. Adults, on the other hand, frequently see plastic surgery as a way to stand out from the crowd. According to the ASPS, more than 300,000 people 18 years and younger had either major or minor plastic surgical procedures in 2012.

Some people turn to plastic surgery to correct a physical defect or to alter a part of the body that makes them feel uncomfortable. For example, guys with a condition called gynecomastia (excess breast tissue) that doesn't go away with time or weight loss may opt for reduction surgery. A girl or guy with a birthmark may turn to laser treatment to lessen its appearance. Other people decide they want a cosmetic change because they’re not happy about the way they look. Teens who have cosmetic procedures — such as otoplasty (surgery to pin back ears that stick out) or dermabrasion (a procedure that can help smooth or camouflage severe acne scars) — sometimes feel more comfortable with their appearance after the procedure. The most common procedures teens choose include nose reshaping, ear surgery, acne and acne scar treatment, and breast reduction.

Is Plastic Surgery the Right Choice?

Reconstructive surgery helps repair significant defects or problems. But what about having cosmetic surgery just to change your appearance? Is it a good idea for teens?
As with everything, there are right and wrong reasons to have surgery. Cosmetic surgery is unlikely to change your life. Most board-certified plastic surgeons spend a lot of time interviewing teens who want plastic surgery to decide if they are good candidates for the surgery. Doctors want to know that teens are emotionally mature enough to handle the surgery and that they're doing it for the right reasons. Many plastic surgery procedures are just that — surgery. They involve anesthesia, wound healing, and other serious risks. Doctors who perform these procedures want to know that their patients are capable of understanding and handling the stress of surgery.

Some doctors won't perform certain procedures (like rhinoplasty) on a teen until they are sure that person is old enough and has finished growing. For rhinoplasty, that means about 15 or 16 for girls and about a year older for guys. Girls who want to enlarge their breasts for cosmetic reasons usually must be at least 18 because saline implants are only approved for women 18 and older. In some cases, though, such as when there's a tremendous size difference between the breasts or one breast has failed to grow at all, a plastic surgeon may get involved earlier.

Things to Consider

Here are a few things to think about if you're considering plastic surgery:
- Almost all teens (and many adults) are self-conscious about their bodies. Almost everyone wishes there were a thing or two that could be changed. A lot of this self-consciousness goes away with time. Ask yourself if you're considering plastic surgery because you want it for yourself or whether it's to please someone else.
- A person's body continues to change through the teen years. Body parts that might appear too large or too small now can become more proportionate over time. Sometimes, for example, what seems like a big nose looks more the right size as the rest of the person's face catches up during growth.
- Getting in good shape through appropriate weight control and exercise can do great things for a person's looks without surgery. It's never a good idea to choose plastic surgery as a first option for something like weight loss that can be corrected in a nonsurgical manner. Gastric bypass or liposuction may seem like quick and easy fixes compared with sticking to a diet. Both of these procedures, however, carry far greater risks than dieting, and doctors should reserve them for extreme cases when all other options have failed.
- Some people's emotions have a really big effect on how they think they look. People who are depressed, extremely self-critical, or have a distorted view of what they really look like sometimes think that changing their looks will solve their problems. In these cases, it won't. Working out the emotional problem with the help of a trained therapist is a better bet. In fact, many doctors won't perform plastic surgery on teens who are depressed or have other mental health problems until these problems are treated first.

If you're considering plastic surgery, talk it over with your parents. If you're serious and your parents agree, the next step is meeting with a plastic surgeon to help you learn what to expect before, during, and after the procedure — as well as any possible complications or downsides to the surgery. Depending on the procedure, you may feel some pain as you recover, and temporary swelling or bruising can make you look less like yourself for a while.
Procedures and healing times vary, so you'll want to do your research into what's involved in your particular procedure and whether the surgery is reconstructive or cosmetic. It's a good idea to choose a doctor who is certified by the American Board of Plastic Surgery.

Cost will likely be a factor, too. Elective plastic surgery procedures can be expensive. Although medical insurance covers many reconstructive surgeries, the cost of cosmetic procedures almost always comes straight out of the patient's pocket. Your parents can find out what your insurance plan will and won't cover. For example, breast enlargement surgery is considered a purely cosmetic procedure and is rarely covered by insurance. But breast reduction surgery may be covered by some plans because large breasts can cause physical discomfort and even pain for many girls.

Plastic surgery isn't something to rush into. If you're thinking about plastic surgery, find out as much as you can about the specific procedure you're considering and talk it over with doctors and your parents. Once you have the facts, you can decide whether the surgery is right for you.
fwe2-CC-MAIN-2013-20-44021000
Sat January 14, 2012

The Inquisition: Alive And Well After 800 Years

When we talk of inquisition it is usually prefaced with a definite article — as in, The Inquisition. But, as Vanity Fair editor Cullen Murphy points out in his new book, God's Jury, the Inquisition wasn't a single event but rather a decentralized, centuries-long process. Murphy says the "inquisitorial impulse" is alive and well today — despite its humble origins with the Cathars in France, where it was initially designed to deal with Christian heretics.

"The temptation, I think, is to think of the Inquisition as a kind of throwback," Murphy tells Guy Raz, host of weekends on All Things Considered. "Nothing quite says 'medieval' the way the word 'inquisition' does. And my view is that you should actually adjust the lens fairly substantially."

When you look at the Inquisition, he says, what you really see is the beginning of the modern world. "There's always been persecution, there's always been hatred," Murphy says. The Inquisition, however, was such an enormous, sustained effort that it required an infrastructure to collect and retrieve information — over centuries. It was this institutionalizing of the Inquisition that revolutionized record-keeping and surveillance techniques, Murphy says.

Modern Day Parallels

If you open a modern day interrogation manual for the police force or the military and place an interrogation manual from the Spanish Inquisition by its side, Murphy says, you'd be shocked by the similarities. "There isn't a trick that is used nowadays that wasn't in use by the Inquisition. The psychology of interrogation, the ruses that people would use when you're questioning, there's nothing new under the sun when it comes to interrogation," he says.

Interrogation at Guantanamo, for example, illustrates that the spirit of the Spanish Inquisition is alive and well today, Murphy says. "The Inquisition tried to put restraints on torture. The problem was that in the moment, when people are trying to get information, those boundaries keep being pushed," he says. "People think, 'You know, one more turn of the screw will get us one more little piece of information' ... and torture creeps and creeps and creeps."

Are We In Danger?

Murphy says the key ingredients for a modern day inquisition exist today. In order for an inquisition to succeed, he says, there must be an individual or a group of people who believe they are in the right and want everyone else to toe the line. "But that moral certainty isn't enough," Murphy says. There must also be a bureaucracy and methods of surveillance to sustain the persecution. "All of those things are much more advanced right now by an order of magnitude than they were centuries ago," Murphy says. "Nowadays [surveillance] is done almost automatically — every time you hit the keyboard on your computer or every time you walk by a camera on the street."

Murphy fears what could happen if that moral certainty meets the kinds of monitoring tools that exist today. "In the wrong hands, the tools of repression are just more available and dangerous than they have been in a long time," he says.

GUY RAZ, HOST: It's WEEKENDS on ALL THINGS CONSIDERED from NPR News. I'm Guy Raz. A few years ago, writer Cullen Murphy took a long, hard look at America's place in the modern world, and then he asked a simple question: Are we Rome?
He went on to write a book with that very title, looking at the ancient world and the modern one and concluding not much has changed. Well, Murphy's back on the case. This time, he takes on the Inquisition - or, rather, the Inquisitions with an S. The book is called "God's Jury," and in it, Murphy argues that the Inquisitions that began in the 12th century were actually a harbinger of the modern world.

CULLEN MURPHY: The temptation, I think, is to think of the Inquisition as a kind of throwback. Nothing quite says medieval the way the word inquisition does. And my view is that you should actually adjust the lens fairly substantially. If you do, you begin to see that the Inquisition has a lot of characteristics that are not really medieval but in fact modern. You know, there's always been persecution, there's always been hatred, but the Inquisition is something that is institutionalized. And institutions require a kind of infrastructure. You need to be able to keep records, to, you know, amass information, and then you need to be able to find it. And the fact is that in the late medieval world, these kinds of tools are finally coming into existence once again.

RAZ: Surveillance, data collecting.

MURPHY: Surveillance would be another. Keeping tabs on what people are doing, keeping tabs on what people are thinking. So finally, these tools emerge. We see them around us in our own day all the time. We take them for granted. But it's not very often that we ask when did governments, when did other institutions begin to have these tools. And the Inquisition is a good way to begin to answer that question because it relied on them, you know, essentially.

RAZ: What's fascinating is that certain techniques were so prescribed during the Inquisition. You talked about these Inquisition manuals, and you draw comparisons between those and modern manuals for interrogation.

MURPHY: It's uncanny. There's an inquisitor named Bernard Gui. He compiled an Inquisition manual, you know, for use by other inquisitors, and it became the basis of many such manuals. And if you look at that and then you look at modern manuals for, for instance, police forces or the military, you begin to see that there isn't a trick that is used nowadays that wasn't in use by the Inquisition, you know, the psychology of interrogation, the ruses that people would use when you're questioning. There's, you know, there's nothing new under the sun when it comes to interrogation.

RAZ: My guest is Cullen Murphy. He has written a new book. It's called "God's Jury: The Inquisition and the Making of the Modern World." At one point in the book you draw a comparison between Guantanamo and the Spanish Inquisition. Can you explain that?

MURPHY: Guantanamo has been a symbol worldwide of many things, but one of them is interrogation gone wrong. And to me, it illustrates something that always happens when you try to put restrictions on a kind of behavior that is inherently problematic. The Inquisition tried to put restraints on torture. The problem was that in the moment when people are trying to get information, those boundaries keep being pushed. People think, you know, one more turn of the screw will get us one more little piece of information, and that will justify this very messy procedure that, you know, we really wish we didn't have to resort to. So that happens again and again, and torture creeps and creeps and creeps. The same thing happened at Guantanamo.
If you look at the early history, the attempts to get information from detainees, you see the same kind of creep. So that is one thing that Guantanamo illustrates where I think the parallel with the way in which the Inquisition proceeded is very close.

RAZ: Towards the end of the book, you write that not only do all the ingredients for a modern day inquisition exist today but also that they are more prevalent than ever before. How so?

MURPHY: Well, this is a real worry of mine. There's one thing that every Inquisition needs, and that is a person, people, who are possessed of an idea. They think they're in the right about something that they want everyone else to toe the line. And you see this in religion, you see this in totalitarian regimes, but that moral certainty isn't enough. You need to have something that sustains it that gives it life over time. And those things, like having a bureaucracy, having methods of surveillance, information technology, all of those things are much more advanced right now by an order of magnitude than they were centuries ago. And many of these things are, you know, more or less on cruise control. You know, we know what bureaucracies are like. They don't shrink. They expand. We know what surveillance is like. Nowadays, it's done almost automatically every time you hit the keyboard on your computer or every time you walk by a camera on the street. And so my worry is what happens when you combine that idea of moral certainty with the kinds of tools that exist nowadays? You know, it does seem to me that in the wrong hands, the tools of repression are just more available and dangerous than they have been for a long time.

RAZ: I should probably mention that you are a Catholic and a practicing Catholic. Is that fair to say?

RAZ: And as you point out, many accounts of the Inquisition have been biased, either overly critical of the church or overly defensive. And understandably, the Church has been prickly about accounts of the Inquisition, but what does the Inquisition tell us about the modern day Catholic Church?

MURPHY: Well, the Church certainly has been prickly about the Inquisition, and there's a lot to be defensive about. There's no way that you can paint the Inquisition in a lovely light. I'm a Catholic who has, you know, long had issues with his church, and one of those issues has to do with a basic mindset. And you can think of it this way: Is the Church and its teachings fundamentally about absolute certainty that brooks no discussion, or is it fundamentally about something else? Is it about humility? Does it have a place for tolerance and for doubt in a constructive sense? And these two traditions fight with each other throughout the history of the church. And for a long time, the first tradition has been in the ascendant. And I think it's time for the second tradition to emerge.

RAZ: That's Vanity Fair editor-at-large Cullen Murphy. His new book is called "God's Jury: the Inquisition and the Making of the Modern World." Cullen Murphy, thank you so much.

MURPHY: Thank you, Guy.

Transcript provided by NPR, Copyright National Public Radio.
fwe2-CC-MAIN-2013-20-44030000
John Singleton Copley’s famous painting Watson and the Shark was commissioned by Brook Watson to document a harrowing event he survived at age 14 as a sailor. While swimming alone off Havana, Cuba, in 1749, Watson was repeatedly attacked by a shark. The shark first removed some flesh from Watson’s right calf, then bit off his entire left foot at the ankle. Rescued by his shipmates, the teenager subsequently had to have his left leg amputated below the knee. In the painting Copley has depicted Watson as a romanticized, ghostly, nude figure on his back, woefully vulnerable to the more powerful beast acting on animal instinct. One seldom sees a more graphic vision of the classic theme Man versus Nature.

In Los Angeles we have made an art form of controlling our environment: manicured palm trees, diverted river water, flood channels, fire breaks. However, once in a while Angelenos are starkly reminded of natural threats in our midst—say, by this Los Angeles Times photo last October of a shark breaching in Santa Monica Bay near Gladstone’s restaurant. As a regular ocean swimmer in this area, I find such photos of immense interest. In chatting with the photographer, Randy Wright, I found he has recorded several more shark sightings there. On the Shark Research Committee website, local surfers have posted dozens of similar accounts, including some nibbles by smaller sharks. I hide from my swim buddies the fact that a sea lion pup with its head cleanly sheared off was discovered in our Redondo Beach training area.

Of course, such photos and reports generate countless theories within the swim/surf community of what provokes a shark attack, or doesn’t. For instance, “they will attack if attracted by sparkly jewelry, or by wetsuits resembling seals.” “They won’t savagely attack humans because we’re not blubbery enough.” “They’ll only strike in deep water because they spring up from below.” Almost 250 years since Copley’s painting, shark tales continue to ignite the imagination.

It seems there are only three facts everyone can agree on: 1) Sharks are an important part of the ecosystem, unfairly maligned and overhunted. 2) Swimmers must respect the ocean for what it is—the wild—and acknowledge one’s limitations, even a few yards off Gladstone’s. And 3) One’s chances of being attacked or fatally wounded are extremely slim, though aggravated by swimming alone—Watson’s mistake.

Renee Montgomery, Assistant Director of Collections Information
fwe2-CC-MAIN-2013-20-44032000
Post-traumatic stress disorder (PTSD) is an anxiety disorder that can develop after exposure to a traumatic event or ordeal in which actual physical or emotional harm occurred or was threatened. Events that can trigger PTSD include violent personal assaults, such as rape or mugging, natural or human-caused disasters, accidents, or military combat. PTSD can be extremely disabling.

Many people with PTSD repeatedly re-experience the ordeal in the form of flashback episodes, memories, nightmares, or frightening thoughts, especially when they are exposed to events or objects reminiscent of the trauma. Anniversaries of the event can also trigger symptoms. People with PTSD also experience emotional numbness and sleep disturbances, depression, anxiety, and irritability or outbursts of anger. Feelings of intense guilt are also common. Most people with PTSD try to avoid any reminders or thoughts of the ordeal. PTSD is diagnosed when symptoms last more than one month.

Co-occurring depression, alcohol abuse, substance abuse, or another anxiety disorder is not uncommon. The likelihood of treatment success is increased when these other conditions are appropriately identified and treated as well.

Reviewer: Rimas Lukas, MD
Review Date: 11/2012
Update Date: 11/26/2012
fwe2-CC-MAIN-2013-20-44034000
Stay hydrated and gradually adapt your body to high temperatures, expert says SATURDAY, June 30 (HealthDay News) -- During hot weather, people who exercise outdoors need to take steps to avoid heat injury, according to the American Council on Exercise. Staying hydrated is essential, and can be accomplished by drinking a large amount of fluids (until you're just short of feeling bloated) 30 minutes before exercising, drinking at least six ounces of fluids every 20 minutes during exercise and drinking beyond the point where you are no longer thirsty after exercise, Dr. Cedric Bryant, the council's chief science officer, said in a council news release. Water is generally the best fluid, unless your exercise session lasts longer than an hour. In that case, a sports drink may be more beneficial. Another tip from the council is to gradually adapt your body to exercising in hot weather. This usually takes 10 to 14 days and can greatly reduce your risk for heat injury. Once your body is acclimatized, you will sweat sooner, produce more sweat and lose fewer electrolytes, Bryant said. The benefits of acclimatization include a lower body core temperature, a decreased heart rate during exercise and a reduced risk of dehydration. Reducing your exercise intensity level during hot weather -- especially during the acclimatization period -- is another good idea, the council suggests. Also, don't wear rubberized sweat suits or any other clothing that is impermeable to water. This type of clothing prevents the evaporation of sweat from the skin, increasing the risk of heat injury, Bryant said. Respect the conditions. In general, you should consider forgoing exercise when the temperature is above 90 degrees Fahrenheit and the relative humidity is above 60 percent. The Texas Department of Public Safety has more about exercising safely in hot weather (http://www.txdps.state.tx.us/trainingacademy/recruiting/hotWeatherExercise.htm ). SOURCE: American Council on Exercise, news release, June 19, 2012
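For anyone who likes to see the council's rule of thumb as plain arithmetic, here is a minimal sketch in Python of the during-exercise portion of the guideline (roughly six ounces every 20 minutes). The function name and the assumption that session length is given in whole minutes are mine for illustration, not from the article.

```python
# A minimal sketch of the during-exercise fluid guideline quoted above:
# at least 6 oz of fluid for every 20 minutes of activity.
# Fluids before exercise (30 minutes prior) and after exercise are
# separate recommendations in the article and aren't modeled here.

def fluids_during_exercise(minutes: int, oz_per_interval: float = 6.0,
                           interval_min: int = 20) -> float:
    """Return the total ounces to drink during a session of the given length."""
    completed_intervals = minutes // interval_min
    return completed_intervals * oz_per_interval

# Example: a 60-minute workout calls for roughly 18 oz spread across
# the session, taken in small amounts about every 20 minutes.
print(fluids_during_exercise(60))  # 18.0
```

Treat the output as a floor rather than a target; the article's advice to keep drinking past the point of thirst after exercise still applies.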
fwe2-CC-MAIN-2013-20-44044000
- Academic Search Complete Provides full text for more than 4,000 scholarly publications covering academic areas of study including social sciences, humanities, education, computer sciences, engineering, language and linguistics, arts & literature, medical sciences, and ethnic studies. This database is updated daily.
- Accessible Archives A site devoted to primary source material in American history. Information archived is from leading historical periodicals and books, and includes eyewitness accounts of historical events, vivid descriptions of daily life, editorial observations, commerce as seen through advertisements, and genealogical records.
- An online encyclopedia that provides full-text access to articles, research updates, and dictionary terms in all areas of science and technology. Also contains biographies, weekly updates on hot topics and discoveries, a student center with resource guides, and links to related sites. Updated daily. This resource is available for the Kent, Salem, and Tuscarawas campuses only.
- AccuNet/AP Multimedia Archive Available under its new name, AP Images. Contains approximately 500,000 photos and selections of pictures from the AP image and print negative library. Pictures cover local, state, national and international subjects.
- ACM Digital Library Provides bibliographic information, abstracts, index terms, reviews, and the full text for ACM conference proceedings. ACM journals, magazines, and newsletters are also available at this site, as well as through the OhioLINK Electronic Journal Center. Note: Not available off-campus.
- African American Newspapers, 1827-1998 Part of the Readex America's Historical Newspapers collection, African American Newspapers, 1827-1998 was created from African American newspaper archives of the Wisconsin Historical Society, Kansas State Historical Society and the Library of Congress. Beginning with Freedom's Journal (NY), the first African American newspaper published in the United States, the titles in this resource include The Colored Citizen (OH), Rights of All (NY), Wisconsin Afro-American, New York Age, Virginia Journal and Alexandria Advertiser, Richmond Planet, Cleveland Gazette, The Appeal (MN) and hundreds of others from every region of the U.S.
- African Cultural Heritage Sites and Landscapes Subsection of the Aluka online digital library of scholarly resources from and about Africa. Focuses on high-quality visual, contextual, and spatial documentation of African heritage sites.
- African-American Poetry Contains the full text of nearly 3,000 poems written between 1760 and 1900.
- Provides extensive coverage of age and age-related issues. Updated 3 times per year.
- Agricola Produced by the National Agricultural Library, Agricola contains book and journal article citations in the areas of agriculture and related disciplines, including plant and animal sciences, forestry, entomology, soil and water resources, agricultural economics, agricultural engineering, agricultural farming products, alternative farming practices and food and nutrition.
- AHFS Consumer Medication Information Published by the American Society of Health-System Pharmacists, this resource includes over one thousand drug information monographs written in lay language for consumers. Includes "How To" monographs for administering different types of medications such as eye drops and inhalers. Updated monthly.
- Aldrich/ACD Library of FT NMR Spectra (Pro Version) Electronic version of the print title The Aldrich Library of 13C and 1H FT NMR Spectra, 3 vol. set (QD96.F68 P67 1993). The Electronic Library contains CNMR and HNMR spectra of 11,828 organic compounds as well as information about their physico-chemical properties. You can browse through the database, perform searches according to catalog parameters (catalog number, CAS number, formula, name, and book references), print a spectrum, and perform some basic operations with a spectrum (peak picking and integration). In addition there are features such as multi-level searching using search lists, searching according to molecular weight, chemical properties (boiling point, melting point, etc.), structures and sub-structures, spectral parameters (peaks and solvent), searching for portions of the spectrum, and substructure search. It is also possible to create and modify reports using the ACD/ChemSketch chemical graphics application.
- This database focuses on the many perspectives of complementary, holistic and integrated approaches to health care and wellness. It covers both scholarly and popular resources, with peer-reviewed materials comprising about 25%. It is possible to limit to scholarly journals only if desired.
- Aluka An online digital library of scholarly resources from and about Africa, Aluka includes a wide variety of high-quality scholarly materials ranging from archival documents, periodicals, books, reports, manuscripts, and reference works, to three-dimensional models, maps, oral histories, plant specimens, photographs, and slides. One of the site's primary objectives is to provide African scholars and students with access to scholarly materials originally from Africa.
- Amateur Athletic Foundation Sport Library Digitized content from several sport-related publications undertaken by the Amateur Athletic Foundation of Los Angeles. Indexed and searchable.
- America's News Magazines (NewsBank) With this resource, you can search the full text of 22 popular and news magazines. Date ranges vary by magazine; some go back to the early 1990s.
- America: History and Life Provides a complete bibliographic reference to the history of the United States and Canada from prehistory to the present.
- American & English Literature: Poetry, Drama and Prose Contains many works of poetry, drama, and prose, based on books and other sources originally published in print. Collections include: African-American Poetry 1700-1900; 20th Century African-American Poetry; American Poetry 1600-1900; 20th Century American Poetry; English Poetry; 20th Century English Poetry; American Drama; English Prose Drama 1280-1915; English Verse Dramas 13th-19th Centuries; Early American Fiction 1774-1850; 18th Century Fiction 1700-1800; Bibliography of American Literature; Editions and Adaptations of Shakespeare; William Butler Yeats Collection; The Bible in English.
- American Drama American dramatic literature from 1714 to the present. The database contains more than 2,000 full-text plays written by over 300 American dramatists.
- American Folklore This site contains one or more folktales from each state. The target audiences are storytellers, teachers, folklore fans and students. The site is updated regularly. The folktales were rewritten by S. E. Schlosser.
- American History in Video Online collection of video available for the study of American history, allowing students and researchers to analyze historical events, and their presentation over time, through commercial and governmental newsreels, archival footage, public affairs footage, and important documentaries.
- American National Biography Online Covers more than 18,000 people from all eras who have influenced and shaped American history and culture. Includes illustrations and links to select web sites. Also includes articles from The Oxford Companion to United States History, which gives context to the lives included in ANB Online. All articles originally included in ANB Online were on biographical subjects who died before the end of 1995. Articles on important figures who have died since 1995 are being added in quarterly updates.
- American Periodicals Series Online American Periodicals Series Online contains page images of more than 1,100 historic American magazines, journals, and newspapers. These resources illuminate the development of American culture, politics, and society across some 150 years. Articles can be searched by author, source, and words in the complete text. Updated quarterly.
- American Poetry Database Contains the full text of over 40,000 poems by more than 200 writers from the 17th century to the early 20th century.
- American Social Movements (Sharpe Reference Online) This full-text reference encyclopedia examines significant social movements in American history. Search, browse or use the topic finder to locate entries.
- Ancestry Library Edition Available on campus only. Genealogical database covering the U.S., Canada and the U.K. Includes census records, vital records, immigration records, family histories, military records, court and legal documents, directories, photos and maps.
- Annual Bibliography of English Language and Literature (ABELL) Indexes monographs, journal articles, book reviews, and more on the language, literature, bibliography, and culture of English-speaking areas of the world. Coverage includes materials published from the late 19th century to the present.
- Annual Reviews Contains critical reviews of significant primary literature in the areas of biology, biomedicine, chemistry, physics, sociology, and related disciplines. Published yearly, this is the online, full-text version of the printed Annual Review of... series. (1998 - present)
- Anthropology Plus Anthropology Plus combines Anthropological Literature (from Harvard University) with the Anthropological Index (from the Royal Anthropological Institute). Gives extensive worldwide indexing of journal articles, reports, commentaries, edited works, and obituaries in the fields of social, cultural, physical, biological, and linguistic anthropology, ethnology, archaeology, folklore, material culture, and interdisciplinary studies. Coverage is from the late 19th century to the present.
- AP Images Contains approximately 500,000 photos and selections of pictures from the AP image and print negative library. Pictures cover local, state, national and international subjects. (Formerly AccuNet/AP Multimedia Archive.)
- Art and Architecture Database Contains approximately 3,000 art and architectural images. Collections include Greek and Roman sculpture and architecture, Minoan art, artists throughout history, and selected images from several art history textbooks. Part of the Digital Media Center (OhioLINK).
- Art and Architecture from the University of Cincinnati Including works by Eisenman, Fellheimer & Wagner, Latrobe, Elizabeth Nourse, and Frank Lloyd Wright.
- Art Full Text [1984-] This offers full text plus abstracts and indexing of an international array of peer-selected publications, now with expanded coverage of Latin American, Canadian, Asian and other non-Western art, new artists, contemporary art, exhibition reviews, and feminist criticism. Full-text coverage for selected periodicals is also included, as are reproductions of works of art that appear in indexed periodicals. Also includes access to Art Index Retrospective.
- Art Index Retrospective [1929-1984] Art Index Retrospective provides searchable indexing of art journalism from international publications, reflecting coverage provided from 1929 through 1984. Cites sources published in French, Italian, German, Spanish, and Dutch, as well as English. In addition to periodicals, users will find data from select yearbooks and museum bulletins.
- ARTbibliographies Modern ARTbibliographies Modern contains abstracts of journal articles, books, exhibition catalogs, reviews and dissertations. Its scope covers Impressionism up to the late 20th century. Emphasis is on adding new and lesser-known artists and coverage of foreign-language literature. About 13,000 new entries are added each year.
- ARTFL Project Consists of 2,000 texts in the French language ranging from classic works of French literature to various kinds of non-fiction prose and technical writing.
- ArticleFirst (OCLC) Contains citations to journal articles covering the humanities, popular culture, science, technology, business and the social sciences. Updated daily.
- Arts and Humanities Citation Index This "Web of Science" database covers the journal literature of the arts and humanities, indexing 1,100 of the world's leading arts and humanities journals, and relevant items from over 6,800 major science and social science journals. Updated weekly.
- ARTstor ARTstor is a digital library of nearly one million images in the areas of art, architecture, the humanities, and social sciences, with a set of tools to view, present, and manage images for research and pedagogical purposes. Images are available for use in presentations for educational or other noncommercial uses.
- ATLA (EBSCO Access) A premier index to journal articles, book reviews, and collections of essays in all scholarly fields of religion, representing all the major religious faiths, major denominations, and numerous language groups. Major areas of coverage include: Archaeology & Antiquities, Bible, Church History, Human Culture & Society, Missions & Ecumenism, Pastoral Ministry, Philosophy & Ethics, Religious Studies, Theology, and World Religions. Updated twice per year.
- Audit Analytics Audit Analytics provides detailed research on over 20,000 public companies and more than 1,500 accounting firms in the US. KSU subscribes to two modules: Audit & Compliance and Corporate & Legal. Audit & Compliance covers current or past auditor firms, auditor changes, fees, opinions, SOX 302 disclosure controls, SOX 404 internal controls, legal cases, director and officer changes, company history, share price details, income statement and compliance difficulties. The Corporate & Legal module is an integrated collection of databases focused on actions, disclosures and correspondence by companies, advisors, regulators and investors. The module is composed of five data sets: SEC Comment Letters, bankruptcies, litigation, shareholder activism and tax footnotes.
- Audit Analytics - through WRDS Provides detailed research on over 20,000 public companies and more than 1,500 accounting firms in the US. Kent State University subscribes to two Audit Analytics products: Audit & Compliance and Corporate & Legal. Additional information on the modules is available here: Audit & Compliance Datasheet and Corporate & Legal Datasheet.
- Avery Index to Architectural Periodicals The Avery Index to Architectural Periodicals database offers a comprehensive listing of journal articles on architecture and design, including bibliographic descriptions. It contains over 600,000 entries surveying over 2,500 American and international journals, including many that are peer reviewed. Publications from professional associations and regional periodicals are also included.
- L'Année Philologique Core database for scholarship in the study of Greek and Roman civilization. This includes literature; linguistics; political, economic, and social history; attitudes and daily life; religion; cultural and artistic life; law; philosophy; science and technology; the history of classical studies; and more.
fwe2-CC-MAIN-2013-20-44046000
In memory of John Hope Franklin (1915-2009) and in honor of Black History Month, this exhibit touches on four periods crucial to understanding the history of African Americans in the United States, exploring their dimensions, in a necessarily brief manner, through the words of John Hope Franklin and the many forms of historical documentation in the collections of the Rare Book, Manuscript and Special Collections Library. Through these displays, we can reflect on our past and, at the same time, as Dr. Franklin so strongly urged us, look to the present for the means to free ourselves from injustice, fear, and hatred. "The writing of history reflects the interests, predilections, and even prejudices of a given generation. This means that at the present time there is an urgent need to re-examine our past in terms of our present outlook." (John Hope Franklin, from African American Biography, Volume 2, 1994) Exhibit curated by Paula Jeannet Mangiafico and Janie Morris, with support from the John Hope Franklin Research Center for African and African American History and Culture: http://library.duke.edu/specialcollections/franklin/ Some material on this page may be protected by copyrights not held by the Duke University Libraries; all other material is copyright 2009 by Duke University Libraries. For complete information about use and reproduction of Duke materials, please read our Use and Reproduction Policy.
fwe2-CC-MAIN-2013-20-44049000
Although Perl remains a vibrant language with a fiercely loyal following, it has undergone many changes to keep up with new technologies and applications that were not anticipated when Perl was first introduced in 1987. Through its community-based development model, Perl has kept up with changing times and remained fresh when other languages might have stagnated. Internally, however, there have remained kinks and stumbling blocks that developers have needed to sidestep, long-abandoned features that have been maintained only for backwards compatibility, misdirected phrasings that have hindered more intuitive syntax structures, and a cacophony of modules that sometimes work well together, but occasionally don't. Perl continues to have a strong following devoted to its development, but in the meantime, a group of core Perl developers have begun working on Perl 6, a complete rewrite of the Perl language. While Perl's creative philosophy and common-sense syntax are sure to remain in Perl 6, everything else in the language is being re-examined and recreated. Perl 6 Essentials provides an overview of the current state of Perl 6 for those who await its release. Written by members of the Perl 6 core development team, the book offers an explanation of the various stages of the project, with reference material for programmers who are interested in what changes are planned or who may want to contribute to the project. The book will satisfy their curiosity and show how changes in the language will make it more powerful and easier to use. Perl 6 Essentials is the first book that offers a peek into the next major version of the Perl language. This book is essential reading for anyone interested in the future of Perl.
fwe2-CC-MAIN-2013-20-44062000
By Kevin Simmons – The canopy stretches in all directions as far as the eye can see. A gentle breeze rustles the leaves to life in waves, and a shifting light peeks through to illuminate thousands of subtle shades within. The meditative swaying transfixes, one of those moments where time slows and the air is undeniably magically charged. This is what it is like 100 feet above the jungle floor at the top of the Rainforest Discovery Center's Observation Tower in Soberania National Park, just 30 minutes outside of Panama City. This treetop world is usually reserved for biologists with scientific equipment and a considerable degree of courage. But in January of 2008, the Eugene Eisenmann Avifauna Foundation introduced the Rainforest Discovery Center and a unique ecotourism and educational experience. The foundation is named to honor Panamanian ornithologist Eugene Eisenmann; its primary mission is to protect the birds of Panama and their habitat with a commitment to conservation through sustainable tourism. Situated on 50 acres within the national park at the entrance to the Pipeline Road, the Rainforest Discovery Center offers nature lovers an extensive network of guided trails, a lake for kayaking and canoeing and a visitor's center with a small gift shop and cafeteria. Additional educational exhibits are scheduled to open next year. Conceptualized by Patrick Dillon, the lead Panamanian architect for Panama's Museum of Biodiversity, the design plays with height and proportion to create a stunning example of expansiveness, ambition and restraint. The center is energy self-sufficient, with solar panels and a rainwater collection system on the roof. Much of the material used to build the structures was recycled from old houses in the Canal area. The tour begins at the visitor center, built in a small jungle clearing. Out on the breezy balcony, the rapid-fire patter of wings is mesmerizing. Hummingbirds of all shapes, sizes and colors surround the half dozen feeders perched on the recovered-wood railings. Sunlight glints off iridescent plumages of orange, violet, aquamarine, deep red and celadon. Some of the beaks are perfectly arced while others angle sharply like carnival masks. Others still are perfectly straight and seem to be three times as long as the birds' fragile bodies. There are 59 known species of hummingbird in Panamá, one of the knowledgeable guides says. Just down a gravel pathway is the 100-foot observation tower. A 174-step staircase spirals up the iron structure into the forest, with resting and observation platforms every 25 feet, providing a rare view into the forest habitat. Each altitudinal level is home to a distinct variety of plants and animals, many of which spend their entire life cycles without ever stepping outside their level. In some cases, a species' home may be as small as a pool of water gathered inside the leaves of a bromeliad. The variations in humidity, precipitation, and solar radiation at different heights create unique environmental conditions. At the topmost layer of the forest, plants and animals must be highly resistant to extreme variations in weather conditions, including cold nights, intense daytime heat, and violent rainstorms. At these altitudes, canopy-dwelling creatures such as toucans also often display brighter colors to arouse the attention of potential mates.
Through an alliance between the AviFauna Foundation and the University of Panama, each level of the tower will soon have its own small meteorological station to measure and display these weather conditions. And far above the tower are the data points used to monitor and count the approximately 300,000 birds of prey that fly over the tower every year. Tagging select birds helps to assess the health of the forests, the stability of their migration patterns and the continuity of the population. Other Avifauna projects include an educational program designed for school children, ages 12 to 18, and working with the Panamanian Association for Sustainable Tourism on a certification program to train naturalist guides, the first of its kind in Panama. The inaugural course will begin in May and will involve more than 400 hours of training. "We are developing a complete curriculum, using teachers from the United States as well as Panama," says Beatriz Schmitt, the Executive Director of the Avifauna Foundation. "It will be very intensive, and provide certification and skills that will be recognized internationally." The organization is also consulting with private reserves, including one in Coronado, to design and create trail systems and educational displays that will connect residents to their natural environment. At Soberania, back on the jungle floor, the apricot light of late afternoon reflects off the lake's glassy surface. Far below the riotous abundance of life of the Panamanian rainforest, stillness reigns supreme, broken only by the leaves that periodically helicopter down from above. Hours: Every day from 6 a.m. to 4 p.m., except Christmas and New Year's Day. Admission: Between 6 a.m. and 10 a.m., $20 (foreigners) and $10 (residents); between 10 a.m. and 4 p.m., $10 (foreigners) and $5 (residents).
fwe2-CC-MAIN-2013-20-44072000
A home blood pressure test allows you to keep track of your blood pressure at home. Blood pressure is a measure of the force of blood inside an artery. A blood pressure measurement is taken by temporarily stopping the flow of blood in an artery (usually by inflating a cuff around the upper arm) and then listening for the sound of the blood beginning to flow through the artery again as air is released from the cuff. As blood flows through the artery, it can be heard through a stethoscope placed on the skin over the artery. Blood pressure is recorded as two measurements: the systolic pressure, measured while the heart beats, and the diastolic pressure, measured while the heart rests between beats. These two pressures are expressed in millimeters of mercury (mm Hg) because the original devices that measured blood pressure used a column of mercury. Blood pressure measurements are recorded as systolic/diastolic (say "systolic over diastolic"). For example, if your systolic pressure is 120 mm Hg and your diastolic pressure is 80 mm Hg, your blood pressure is recorded as 120/80 (say "120 over 80"). The general types of blood pressure monitors commonly available are manual and automatic. Manual models are similar to those that your doctor might use to take your blood pressure. Called sphygmomanometers, these devices usually include an arm cuff, a squeeze bulb to inflate the cuff, a stethoscope or microphone, and a gauge to measure the blood pressure. Blood pressure is displayed on a circular dial with a needle. As the pressure in the cuff rises, the needle moves clockwise on the dial. As the cuff pressure falls, the needle moves counterclockwise. Electronic battery-operated monitors use a microphone to detect blood pulsing in the artery. You do not need to listen with a stethoscope. The cuff, which is attached to your wrist or upper arm, is connected to an electronic monitor that automatically inflates and deflates the cuff when you press the start button. The type of blood pressure monitor typically found in supermarkets, pharmacies, and shopping malls is an electronic device. Ambulatory blood pressure monitoring (ABPM) is another method that may be ordered by your doctor if other methods do not give consistent results. It is often used if there is a big difference between the blood pressure readings you get at home and your readings in your doctor's office. An ambulatory blood pressure monitor is a small device that is worn throughout the day, usually for 24 or 48 hours. The device periodically inflates, takes blood pressure measurements automatically, and records them for later printout and analysis. The devices are usually loaned by a clinic or hospital. If you are required to use an ambulatory blood pressure monitor, keep in mind that it is important for a health professional to properly size the cuff, which fits around your arm. Fitting does not take long. Home blood pressure monitoring measures your blood pressure at different times and in different places (such as at home and at work) during the day. It may be done for several reasons. Ambulatory blood pressure monitoring (ABPM) is often used if there is a big difference between the blood pressure readings you get at home and your readings in your doctor's office. Remember that blood pressure readings vary throughout the day. They usually are highest in the morning after you wake up and move around. They decrease throughout the day and are lowest in the evening.
If you have an ambulatory blood pressure monitor, you do not need to do anything to prepare. The monitor will automatically take your blood pressure while you do your normal daily activities. When you buy a blood pressure monitor, be sure to buy the correct size. The size of the blood pressure cuff and where you place the cuff on your arm can change your blood pressure readings. If the cuff is too small or too large, the measurements will not be accurate. Hospital and medical supply stores generally carry many cuff sizes and can help make sure that your cuff fits you. Take your new monitor to your doctor's office to make sure it is working right. Have your health professional take your blood pressure and then compare that result with your own device. Ask your health professional to watch you use your monitor to make sure that you are using it correctly. It is a good idea to have your monitor checked every year. Your blood pressure in your right arm may be higher or lower than the blood pressure in your left arm. For this reason, try to use the same arm for every reading. Blood pressure readings also rise and fall at different times during the day. They are usually highest in the morning and lowest in the evening. Ask your doctor if you should take your blood pressure at the same time of day each time you take it, or if you should take your blood pressure at different times of the day. The instructions for using blood pressure monitors vary depending upon the type of blood pressure monitor you choose. Here are some general guidelines: Sit with your arm slightly bent and resting comfortably on a table so that your upper arm is on the same level as your heart. Expose your upper arm by rolling up your sleeve but not so tightly as to constrict blood flow. If you are not able to roll up your sleeve, remove your arm from the sleeve or take off your shirt. Wrap the blood pressure cuff snugly around your upper arm so that the lower edge of the cuff is about 1 in. (2.5 cm) above the bend of your elbow. A large artery (called the brachial artery) is located slightly above the inside of your elbow. You can check its location by feeling for a pulse in the artery with the fingers of your other hand. If you are using a stethoscope, place the earpieces in your ears and the bell of the stethoscope over the artery, just below the cuff. The stethoscope should not rub on the cuff or your clothing, since this may cause noises that can make your pulse hard to hear. If you are using a cuff with a built-in stethoscope bell, be sure the part of the cuff with the stethoscope is positioned just over the artery. The accuracy of a blood pressure recording depends on the correct positioning of the stethoscope over the artery. You may want to have another person who can use a stethoscope properly help you take your blood pressure. Close the valve on the rubber inflating bulb. Squeeze the bulb rapidly with your opposite hand to inflate the cuff until the dial or column of mercury reads about 30 mm Hg higher than your usual systolic pressure. (If you don't know your usual pressure, inflate the cuff to 210 mm Hg or until the pulse at your wrist disappears.) The pressure in the cuff will stop all blood flow within the artery temporarily. Now open the pressure valve just slightly by twisting or pressing the valve on the bulb. The pressure should fall slowly at about 2 to 3 mm Hg per second. Some blood pressure devices have a valve that automatically controls this rate. 
As you watch the pressure slowly fall, note the level on the dial at which you first start to hear a pulsing or tapping sound through the stethoscope. The sound is caused by the blood starting to move through the closed artery. This is your systolic blood pressure. If you have trouble hearing the start of your pulse through the stethoscope, you can check your systolic blood pressure by noting the level on the dial when you are able to feel the pulse at your wrist once again. Continue letting the air out slowly. The sounds will become muffled and will finally disappear. Note the pressure when the sounds completely disappear. This is your diastolic blood pressure. Finally, let out all the remaining air to relieve the pressure on your arm. Be sure to write your numbers in your log book, along with the date and time. Sit with your arm slightly bent and resting comfortably on a table so that your upper arm is on the same level as your heart. Expose your upper arm by rolling up your sleeve but not so tightly as to constrict blood flow. If you are not able to roll up your sleeve, remove your arm from the sleeve or take off your shirt. Wrap the blood pressure cuff snugly around your upper arm so that the lower edge of the cuff is about 1 in. (2.5 cm) above the bend of your elbow. For electronic models, press the on/off button on the electronic monitor and wait until the ready-to-measure "heart" symbol appears next to zero in the display window. Then press the start button. The cuff will inflate automatically to approximately 180 mm Hg (unless the monitor determines that you require a higher value). It then begins to deflate automatically, and the numbers on the screen will begin to drop. When the measurement is complete, the heart symbol stops flashing and your blood pressure and pulse readings are displayed alternately. At first it is a good idea to take your blood pressure 3 times in a row, 5 or 10 minutes apart. As you get more comfortable taking your own blood pressure, you will only need to measure it once or twice each time. Check your blood pressure cuff frequently to see that the rubber tubing, bulb, valves, and cuff are in good condition. Even a small hole or crack in the tubing can lead to inaccurate results. You may feel some discomfort when the blood pressure cuff inflates, squeezing your arm. There are no risks or complications from this test.
Blood pressure categories (systolic/diastolic, in mm Hg):
Normal: 119 or below / 79 or below
Prehypertension: 120 to 139 / 80 to 89
High blood pressure: 140 or above / 90 or above
Blood pressure readings of less than 90/60 mm Hg are normal as long as you feel well. In general, the lower your blood pressure, the better. But if you have low blood pressure and feel lightheaded, faint, or like you may vomit, talk to your doctor. Reasons you may not be able to have the test, or why the results may not be helpful, include the following. Blood pressure normally goes up and down from day to day and even from minute to minute, depending upon how active you are, whether you are standing up or sitting down, and what medicines you are taking. Other things that can change blood pressure include being too hot or too cold, whether you have recently eaten, and whether you are relaxed or feeling stressed. Home blood pressure monitoring works best when you also record your daily activities in a diary, such as the time you take medicine or whether you feel upset or stressed. This can help explain changes in your blood pressure readings and help your doctor adjust your medicines. Your blood pressure may only be high when you go to your doctor's office.
This is called white-coat (or office) hypertension and may be caused by stress about seeing your doctor. When you regularly check your blood pressure at home, you may find that your blood pressure is lower when you are not at the doctor's office. Visit the American Heart Association (AHA) website for information on physical activity, diet, and various heart-related conditions. You can search for information on heart disease and stroke, share information with friends and family, and use tools to help you make heart-healthy goals and plans. Contact the AHA to find your nearest local or state AHA group. The AHA provides brochures and information about support groups and community programs, including Mended Hearts, a nationwide organization whose members visit people with heart problems and provide information and support. The U.S. National Heart, Lung, and Blood Institute (NHLBI) information center offers information and publications about preventing and treating high blood pressure and other heart, lung, and blood conditions.
Citations: Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (2003). Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure JNC Express (NIH Publication No. 03–5233). Bethesda, MD: U.S. Department of Health and Human Services.
Other Works Consulted: American Heart Association (2005). Recommendations for blood pressure measurement in humans and experimental animals. Part 1: Blood pressure measurement in humans. AHA Scientific Statement. Hypertension, 45(1): 142–161. Pickering TG, et al. (2008). Call to action on use and reimbursement for home blood pressure monitoring. A joint scientific statement from the American Heart Association, American Society of Hypertension, and Preventive Cardiovascular Nurses Association. Hypertension, 52(1): 10–29.
Last Revised: April 5, 2011. Author: Healthwise Staff. Medical Review: E. Gregory Thompson, MD - Internal Medicine & Robert A. Kloner, MD, PhD - Cardiology. To learn more visit Healthwise.org. © 1995-2013 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
fwe2-CC-MAIN-2013-20-44074000
Shinmoedake volcano: The Shinmoedake cone on the Kirishima mountain range erupted on Sunday, the largest blast from the volcano in 52 years. When it comes to building a country, you'd be hard-pressed to do it in a more volatile part of the world than Japan. About 1,500 earthquakes strike the island nation every year. Minor tremors occur on a nearly daily basis. Deadly quakes are a tragic part of the nation's past. The anniversary of the Great Kanto Earthquake of 1923, for example, which killed more than 100,000 people around Tokyo, is now national Disaster Prevention Day. More recently, a 6.8 magnitude earthquake struck the city of Kobe in 1995, killing more than 6,000 people. Japan has such a large potential for earthquakes — and disaster — because the nation sits atop four huge slabs of the Earth's crust, called tectonic plates. These plates mash and grind together and trigger deadly earthquakes, like the 8.9-magnitude quake that struck on Friday (March 11). [Photos: Japan Earthquake and Tsunami in Pictures] The tectonic activity has also created explosive volcanoes, like south Japan's Mount Kirishima, which continued its recent eruptive streak today (March 14). Japan lies along the Pacific Ring of Fire — a narrow zone around the Pacific Ocean where a large chunk of Earth's earthquakes and volcanic eruptions occur. Roughly 90 percent of all the world's earthquakes — and 80 percent of the largest ones — strike along the Ring of Fire. More than 150 aftershocks of magnitude 5 or greater have followed — including more than two dozen of magnitude 6 or greater. The number of aftershocks in Japan is not uncommon for an earthquake of this size, said geologist Eric Geist, of the USGS, at a news conference last week, and the rumbling could last for a year or more. As a rule of thumb, an earthquake's largest aftershock is about one magnitude lower than the mainshock, said Paul Caruso, a geophysicist with the USGS. The largest aftershock from this earthquake has been a magnitude 7.1. Japan's tectonic shuffle Earthquakes typically occur along faults, which are breaks in the rocky plates of the Earth's crust. These faults accumulate strain over the years as two plates butt heads. Japan's stretch of the Ring of Fire is where the North American, Pacific, Eurasian and Philippine plates come together. Northern Japan is largely on top of the western tip of the North American plate. Southern Japan sits mostly above the Eurasian plate. Friday's temblor struck 231 miles (373 kilometers) northeast of Tokyo and 80 miles (130 km) east of Sendai, Honshu, in the Pacific Ocean near the Japan Trench. The Japan Trench, a subduction zone, is where the Pacific plate — beneath the Pacific Ocean — dives underneath North American plate — beneath Japan. This violent movement, called thrust faulting, forced the North American plate upward in this latest quake. On average, the Pacific Plate is moving west at about 3.5 inches (8.9 centimeters) per year, and the movement has produced major earthquakes in the past — nine earthquakes of magnitude 7 or greater since 1973. The largest of these was a magnitude 7.8 earthquake in December 1994, which caused three fatalities and almost 700 injuries, approximately 160 miles (260 km) to the north of Friday's quake. In June of 1978, a magnitude 7.7 earthquake about 22 miles (35 km) to the southwest caused 22 fatalities and over 400 injuries. 
The rupture during Friday's quake was almost 200 miles (322 km) long, on an underwater fault that is about 220 miles (354 km) long by about 60 miles (97 km) wide, said Tom Broker, of the USGS. Earthquakes along that fault can affect the rest of the world — literally. "This is just a ginormous earthquake," Broker said. "It's really hard to grasp how big it is." For one, the intense temblor accelerated Earth's spin, shortening the length of the 24-hour day by 1.8 microseconds, according to geophysicist Richard Gross at NASA's Jet Propulsion Laboratory in Pasadena, Calif. Japan's Earthquake Research Committee said the earthquake forced the North American plate eastward by about 66 feet (20 meters), reported Japan's national broadcast agency, NHK. The entire island of Honshu was moved about 8 feet (2.4 m) east, according to USGS scientists. Geologists in St. Louis reported that their city moved up and down a fraction of an inch during the quake, but too slowly for anyone to notice, reported the St. Louis Post-Dispatch. Friday's huge earthquake was about 15.2 miles (24.4 km) deep, which was shallow enough to trigger a tsunami as the seafloor was pushed up and away from Japan. As the energy from the quake rose, two waves were created. Wave heights of more than 20 feet (6 m) socked Japan's coast, where the death toll is expected to exceed 10,000, according to news reports. Colliding tectonic plates not only trigger earthquakes — they also build volcanoes. About 10 percent of the world's active volcanoes are in Japan, mostly where the Pacific Plate is diving below the Philippine Plate. About 950 miles (1,500 km) south of Friday's earthquake, the Shinmoedake cone on the Kirishima mountain range erupted on Sunday. The blast was the volcano's largest in 52 years, the BBC reported. The volcano had been active earlier in the year, and despite the renewed activity coinciding with last week's earthquake, any link between the two would be speculation at this time, reported the Los Angeles Times. The Pacific Ring of Fire is home to 452 volcanoes in total — that's 75 percent of the world's active and dormant volcanoes.
fwe2-CC-MAIN-2013-20-44081000
by Georges Tarbouriech About the author: Georges is a long time Unix user. He loves GNUstep and the tools this great framework provides. Gorm and ProjectCenter, the GNUstep RAD tools RAD stands for Rapid Application Development. At the end of the 80's, when NeXTstep was released, it came with an incredible tool called InterfaceBuilder. Used in conjunction with another tool, named ProjectBuilder, it allowed you to build graphical applications in a flash. GNUstep offers a free version of these tools, called Gorm.app and ProjectCenter.app. From the prehistory of computing, software development has been a great challenge. Computers were quite big in size despite their very little power. They were quite expensive, not really numerous, and developers were unable to use them as often as they wished since they had to share them with other people. Researchers therefore tried to find a way to make computers execute more than one task at a time to improve efficiency. Obviously, they had to design and create programming languages from scratch, taking into account the poor resources of the available machines. Thus, during the 60's various new programming languages appeared: LISP, FORTRAN, BASIC, Algol68, BCPL, etc. Next came the B language, derived from the above-mentioned BCPL, which soon became the C language. C changed the world of programming. The object-oriented languages (SmallTalk, Objective C, C++, etc.) appeared later, with the "graphical era". In the 80's some machines were providing graphical OSes (Apple Macintosh, Commodore Amiga, Atari ST, etc) and the X Window System was in the works. At the same time, a company was working on a GUI for IBM OS2, called Presentation Manager. Before finishing that job, this company released its "own" GUI for its DOS, called... Windos. The first two versions were hardly usable, but... the third one started it all. The MvAI (Microsoft very Artificial Intelligence) was born! That is, every user became a computer scientist. Since then we have seen "great" applications written using Excel or Word and Visual Basic:-( Never mind! Fortunately, long before we reached the above situation, NeXTstep was born and with it came Interface Builder. This tool allowed you to create a GUI for your application in a very short lapse of time and with great ease. From there, this kind of tool has been spreading. Among others, let us mention Omnis, 4D, Delphi, Kylix, etc. A few of them are multiplatform while the vast majority is dedicated to Windos. Let us also mention that there are free toolkits using such a philosophy, Gtk (Gimp Tool Kit) for instance. Proprietary Unixes also provide these sorts of tools. The most important feature of these tools is that you do not have to write the code for the 200 windows of your application, but only the code that manages the data. Whether you like this sort of tool or not is not the point. The development time is short: it is a fact (hence the name, "Rapid Application Development"). GNUstep provides us with free RAD tools. They are called Gorm and ProjectCenter. Of course, these tools are very "young" but they do work. Let us have a look at them. To be able to use both Gorm and ProjectCenter, you need to install GNUstep. How to do this is beyond the scope of this article. You will find everything you need at the GNUstep website. This includes source code, HOWTOs, tutorials, etc.
You can also have a look at these articles: GNUstep, the open source OpenStep and GNUMail.app, the portability evidence. The tests for the present article have been done under FreeBSD 4.7 with Window Maker 0.80.1, using gnustep-make-1.5.0, gnustep-base-1.5.0, gnustep-gui-0.8.2 and gnustep-back-0.8.2. These are the latest unstable GNUstep versions. You can also use the stable versions if you wish. Last but not least, we used the gcc 3.0.4 compiler. Gorm stands for Graphic Object Relationship Modeler (or perhaps GNUstep Object Relationship Modeler, as said in the README file). It is a clone of the above-mentioned NeXTstep Interface Builder (or today's MacOS X one). Gorm was started by Richard Frith-Macdonald. Today Gregory Casamento is the maintainer and he does most of the work with Pierre-Yves Rivaille. The present version is 0.1.9. Newer CVS snapshots are available from http://savannah.gnu.org/projects/gnustep. You can download the latest stable version from the GNUstep website. The philosophy behind Gorm (and Interface Builder) is to provide the user with objects found in palettes; you drag these objects to empty windows to design the graphical components of your application. The objects can be buttons, fields, checkboxes, panels, etc. That is, everything you can add to a window to make it user-friendly. Next, you can modify them using inspectors. From the inspectors, you can change the attributes, define connections, size, help and manipulate classes for the selected objects. After creating a class, you can add outlets and actions to the objects. Next you instantiate the class, which creates a new object (the instance) in the Gorm main window, and you connect the outlets and the actions to the corresponding components. You do this just by dragging the mouse from the instance to the selected object to connect outlets, and from the object to the instance to connect actions. Last, you create the skeleton of the class source files, and you're done (a sketch of such a skeleton appears below). More on this later. ProjectCenter, as the name says, is the "heart" of a project. It is a clone of Project Builder found under NeXTstep and Mac OS X. ProjectCenter is the work of Philippe C.D. Robert and the present version is 0.3.0. Like Gorm, you can download it from the GNUstep website, going to the Developer apps section. Of course you can get the latest CVS snapshot: we use it for this article and it is version 0.3.1. From ProjectCenter you can create a project, design its interface (using Gorm), and write its source code; you can build this project and run it (debugging is not yet available). In short, you can manage all the resources required by the project: source code, documentation, libraries, subprojects, interfaces, etc. When you create a new project, you can choose its type. You can select between application, bundle, tool, library and Gorm application. Among other things, ProjectCenter provides you with an editor in which you will be able to complete the Gorm skeleton code. How do Gorm and ProjectCenter work together? Very well, thank you! More seriously, we will use two examples to illustrate it. This article is NOT a tutorial. The idea is to show the ease of use of these tools while insisting on the fact that you will be able to use the same code for both GNUstep (that is, a lot of Unix platforms... and, if you like "struggling", under Windos too) and MacOS X.
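To make the outlet/action mechanism concrete, here is a minimal sketch of the kind of class skeleton Gorm creates for you. The class name AppController, the outlet nameField and the action buttonPressed: are hypothetical examples chosen for illustration, not names taken from the article; the generated header declares the outlets and actions, and you complete the method bodies in the implementation file.

/* AppController.h -- hypothetical skeleton, as Gorm might generate it */
#import <AppKit/AppKit.h>

@interface AppController : NSObject
{
  id nameField;                        /* outlet, connected to a text field in Gorm */
}
- (void) buttonPressed: (id)sender;    /* action, connected from a button in Gorm */
@end

/* AppController.m -- the part you complete in the ProjectCenter editor */
#import "AppController.h"

@implementation AppController
- (void) buttonPressed: (id)sender
{
  /* react to the button click, reaching the interface through the outlet */
  NSLog(@"Hello, %@", [nameField stringValue]);
}
@end

Once the class is instantiated in Gorm and the connections are drawn with the mouse, the AppKit runtime fills in nameField when the interface file is loaded; no extra wiring code is needed.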
The only thing you will have to do is to design the interface on every platform, since the nib (InterfaceBuilder or Gorm) files are not portable (at least for now). The above-mentioned GNUMail.app article showed the portability from a user's point of view. This one will focus on the developer's point of view, still with portability in mind. That is, in GNUMail.app we used the work of Ludovic and friends, and here we create a GUI application for both GNUstep and MacOS X. Many tutorials are available, either for MacOS X or GNUstep. You can reach most of the GNUstep ones from the GNUstep website or from http://www.gnustep.net, but let us mention a few of them. - An application using Gorm and ProjectCenter by Pierre-Yves Rivaille. - Nicola Pero's tutorial page - An older tutorial on how to create an HTMLEditor: http://stepwise.com/Articles/Technical/HTMLEditor/ To learn more, you can also check the source code, the nib files, etc., of the existing GNUstep applications (Gorm, ProjectCenter, GNUMail, GWorkspace, etc.) and of course, the gnustep-examples. Among the numerous MacOS X tutorials for InterfaceBuilder available on the net, we will use the following one as a first model: http://www.macdevcenter.com/pub/a/mac/2001/05/18/cocoa.html. The author, Mike Beam, wrote a lot of more sophisticated tutorials, available from http://www.macdevcenter.com/pub/ct/37. Why this one? Because it provides you with a working text editor without writing a single line of code. This shows the power of these development tools, whether they work under MacOS X or under GNUstep. Using ProjectCenter.app and Gorm.app under GNUstep we create a very simple text editor able to cut, copy, and paste. Obviously, you will not be able to save your work: remember, we will not write a single line of code. Using ProjectBuilder and InterfaceBuilder under MacOS X we will do the same. Obviously, there is a lot to do to improve this editor and we will leave this as an exercise for the reader. Again, this article is not a tutorial! Here we go. Open ProjectCenter.app and create a new project called Editor. Choose a Gorm Application project at the bottom of the window before saving its name. This will provide you with an Interfaces item in the left column of the project window. Clicking Interfaces displays Editor.gorm. Double-click Editor.gorm and Gorm.app opens. Select the default window (MyWindow) and, using the tool inspector, change the name to Editor in Attributes. From the palette, drag a TextView to the Editor window. The TextView is the biggest object found in the palette selected using the rightmost icon at the top of the Palettes window. Resize this object to make it fill the window, and you are done. Again, using the GormInternalViewEditor inspector (while the TextView is selected), choose Size and change the values to make them match the Editor window size values. The latter are obtained in the same way, that is, by selecting the window and checking the size in the GormNSWindow inspector. If you do not change the X and Y values, for instance, you will not be able to use the full width of the editor, whether you resize the window or not. Save all in the Gorm Document menu and quit to go back to ProjectCenter. Select the Build icon and click the new build icon in the lower half of the window. Everything should go well if you defined the right preferences for your compiler, debugger, etc. For example, using FreeBSD, you must change make to gmake (including the path) by clicking the Settings icon of ProjectCenter.
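As a side note on the build step: a GNUstep project is driven by a GNUmakefile based on the gnustep-make package, which is why GNU make (gmake) is required on FreeBSD. The following is a minimal sketch of such a makefile for our Editor application; ProjectCenter generates the real one for you, and its exact contents vary with the version, so treat this only as an illustration of the shape of the file.

# GNUmakefile -- minimal sketch for a Gorm-based application
include $(GNUSTEP_MAKEFILES)/common.make

APP_NAME = Editor
Editor_OBJC_FILES = main.m
Editor_MAIN_MODEL_FILE = Editor.gorm

include $(GNUSTEP_MAKEFILES)/application.make

Running gmake then produces Editor.app, which you can also start from the shell with openapp Editor.app.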
Check also the paths from the Preferences menu in ProjectCenter. If the build succeeded (it should have!), just do the same with Run and you will see the Editor application. Just play with it, writing, cutting, pasting, etc. Obviously, you can restart it later using the openapp command. How long did it take? Well, I should say a few minutes. Nothing much to say since you will have to do the same as above. Here is what it looks like while designing the GUI: Now we choose another example from Mike Beam. This time it is a fully working application, able to manage data: an address book. Mike's tutorial about the address book (like every other) is recommended reading to understand how the "thing" works. Also check the tutorial list, since Mike provides different steps of the development process for one and the same application, allowing you to improve it. Again we create and run the application on both GNUstep and MacOS X. Like you did for the Editor example, start ProjectCenter.app. Select a Gorm application and call it AddressBook. From ProjectCenter launch Gorm by double-clicking Interfaces -> AddressBook.gorm. Drag a TableView from the palette to the default window. In other words, follow Mike's tutorial like you would under MacOS X. You will have to adapt a few things since they work differently in Gorm and in InterfaceBuilder. For example, the number of columns in the TableView cannot be defined from the attributes inspector in Gorm. To keep things simple, just copy a column and paste it next to it to get the required number (4 in our case). You should end with something like this: Mike Beam did the whole job: what else could I add? Obviously, GNUstep development tools cannot be as far along as Apple's. Apple and NeXT represent fifteen years of experience with hundreds of developers. GNUstep is the work (for free) of a few individuals who have to do something else for a living. Accordingly, do not be surprised to find, for instance, many more available classes in InterfaceBuilder than in Gorm. Remember, Gorm is at version 0.1.9 (or 0.2.0). Furthermore, we did the tests the "hard" way. That is, we "ported" from OS X to GNUstep. The other way round would have been easier because of the above-mentioned differences between the tools. For example, porting applications developed under MacOS X 10.2 would be much more difficult since the new Apple development tools have improved a lot. As already said, there are many new available classes or more elaborate ones. However, the tools rely on the same philosophy whether they work under GNUstep or MacOS X... and GNUstep improves every day. One thing looks very nice to me: GNUstep people really work together. They do help each other when individual projects are concerned and they also contribute to improving the GNUstep core. This is the Free Software way of working I like. Congratulations for such behavior, Mr. Fedor and friends. The goal of this article was to show the power of the GNUstep "RAD" tools, Gorm.app and ProjectCenter.app. Despite their "youth" they can help you develop nice applications in a very easy way. Furthermore, these tools provide a very pleasant way of working while being very efficient. Objective C is a very compact language and, in my opinion, much easier to learn than C++ for someone with C knowledge (I know, I already said so!). This allows you to design nice-looking applications (well, it is a matter of taste, but I do love this look and feel) while keeping them rather small in size.
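To give an idea of how compact the Objective-C side stays, here is a minimal data-source sketch in the spirit of the address book example; the class name and record layout are hypothetical illustrations, not copied from Mike Beam's code. An NSTableView asks its data source only two questions: how many rows there are, and what value goes in a given column and row.

/* AddressBookController.m -- hypothetical NSTableView data-source sketch */
#import <AppKit/AppKit.h>

@interface AddressBookController : NSObject
{
  NSMutableArray *records;   /* each record is an NSDictionary keyed by column identifier */
}
@end

@implementation AddressBookController
- (id) init
{
  if ((self = [super init]) != nil)
    records = [[NSMutableArray alloc] init];
  return self;
}

/* how many rows the table must display */
- (int) numberOfRowsInTableView: (NSTableView *)tableView
{
  return [records count];
}

/* the value for one cell; the column identifiers (for example "firstName")
   are set on the TableView columns in Gorm or InterfaceBuilder */
- (id) tableView: (NSTableView *)tableView
objectValueForTableColumn: (NSTableColumn *)column
             row: (int)rowIndex
{
  return [[records objectAtIndex: rowIndex] objectForKey: [column identifier]];
}
@end

Connect the TableView's dataSource outlet to an instance of such a class in Gorm and the table fills itself; the same file compiles unchanged under GNUstep and MacOS X.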
I must admit that I never recovered from the shock I received when I first met the NeXT machine. The fact that Apple released a modern version of NeXTstep delights me. This is also why I am very fond of projects such as GNUstep or Window Maker. However, though I love free software I am not a "fundamentalist" and accordingly, I am not against proprietary software (well, maybe a bit against a specific editor... but just a bit!). GNUstep can benefit from Apple... but Apple can benefit from GNUstep too. GNUstep is not an Apple competitor, it is free software. As far as I know, free software is widely used in OS X. This is to say that bringing even more free software to Apple cannot be a bad thing. What Ludovic and friends did with GNUMail.app is a very good example of what could happen. "I had a dream"... Apple was providing most of its development tools source code to GNUstep. GNUstep and Apple developers were working together to bring great applications to Unix users. And slowly, people were realizing they could live without Windos... Unfortunately, it was a dream ;-) Anyway, if you do not know GNUstep and its applications, feel free to give them a try. Remember, GNUstep is a framework, and tools such as Gorm and ProjectCenter provide you with everything to create, to invent. In other words, with a bit of imagination, you can develop "products" much different from what we can see nowadays: clones of Windos applications! We are living in a great time! To the GNUstep people: A.Fedor, N.Pero, G.Casamento, P.Y.Rivaille, N.Roard, L.Marcotte, R.Frith-Macdonald, P.C.D.Robert, E.Sersale, A.Froloff, F.Kiefer, M.Viviani, M.Guesdon and all those I forgot, for the very great job on both the framework and the applications. To the Window Maker people: A.Kojima, D.Pascu and friends, for bringing us a free NeXTstep interface for X. To J.M.Hullot and B.Serlet for inventing InterfaceBuilder. To "Steve Jobs INC." for bringing us NeXT, NeXTstep and MacOS X. To all the people not mentioned here who have contributed to making our professional life much less sad. Webpages maintained by the LinuxFocus Editor team © Georges Tarbouriech, FDL 2002-12-19, generated by lfparser version 2.35
A Titanic mistake? New research sinks the "women and children first" myth.

The Titanic sank 100 years ago today, and Men's Rights Activists are still pissed off about it. They're not really pissed off that it sank. They're pissed off that the men on board were more likely to go down with the ship than the women. You know, that whole "women and children first" thing.

Some MRAs were so pissed off about this that they were planning to march on Washington on this very day in an attempt, as they put it, to "Sink Misandry." You don't know how much I would have loved to see this: a dozen angry dudes marching in circles on the National Mall, carrying signs protesting the sinking of the Titanic and demanding that in all future sinkings of the Titanic women and men be equally likely to drown in the cold waters of the North Atlantic. For that would be justice at last! But, alas, due to unspecified logistical problems this march was cancelled some months back, and so misandry remains unsunk.

Or does it? For you see, it turns out that the whole "women and children first" thing was not really a thing. Oh, on the Titanic it was. But women unfortunate enough to be passengers on sinking ships that weren't the Titanic (or the HMS Birkenhead, which sank off the coast of South Africa in 1852) weren't able to push ahead to the front of the line. That, at least, is the conclusion of a new Swedish study (link is to a pdf of it).

The chivalrous code "women and children first" appears to have sunk with the Titanic 100 years ago. Long believed to be the golden standard of conduct in a shipwreck, the noble edict is in fact "a myth that has been nourished by the Titanic disaster," economist Mikael Elinder of Uppsala University, Sweden, told Discovery News. Elinder and colleague Oscar Erixson analyzed a database of 18 peace-time shipwrecks over the period 1852–2011 in a new study into survival advantages at sea disasters. Looking at the fate of over 15,000 people of more than 30 nationalities, the researchers found that more women and children die than men in maritime disasters, while captains and crew have a greater chance of survival than any passengers. Being a woman was an advantage on only two ships: on the Birkenhead in 1852 and on the Titanic in 1912.

The notion of "women and children first" may have captured the popular imagination, but it's never been an official policy for ship evacuations. It wouldn't be fair, nor would it be an efficient way to get as many people as possible to safety. Nor was "women and children first" strictly enforced even on the Titanic. True, my great-grandfather, the mystery writer Jacques Futrelle, was one of those who went down with the ship, while his wife and my great-grandmother, writer Lily May Futrelle, made it off safely (in the last lifeboat). But there were many men who survived, and many women who died.

If you want to get mad about the sinking of the Titanic all those years ago, get mad at the White Star Line for not bothering to equip the ship with enough lifeboats for everyone on it. Blame the captain, for ordering the ship to continue plowing ahead on a dark, foggy night into an area of the Atlantic where numerous icebergs had just been sighted by a number of other ships. Blame the crew for botching the evacuation – for the strange lack of urgency after the ship hit the iceberg, for the lifeboats leaving the sinking ship with half as many passengers as they could fit.
Much like the iceberg that sank the Titanic, Elinder and Erixson's research has poked a giant hole in the "women and children first" myth. Of course, MRAs aren't interested in historical accuracy. They're looking for excuses to demonize women and feminists. So I imagine we'll be hearing about the Titanic from them for years to come.

Here's another tragic sinking, of yet another ship without a sufficient number of lifeboats:

EDIT: I added a couple of relevant links and fixed a somewhat egregious typo.
Grade Level: Elementary School
Time Period: Current Events
Topic: Economic Development, Geography Matters, Learning with Maps

The physical geography of an area has an enormous impact on the life of the people in that place. One of the most basic effects is what grows there. The available vegetation affects the lifestyle of the people in two important ways. First, it guides how they meet their basic needs. Second, it influences connections with other groups as contact for trade expands their activities.
Research is clear that there is an inextricable link between students' emotional and mental health and their ability to learn. A student is not able to benefit from the educational program if the student is suicidal or preoccupied by concerns about someone who may be thinking about suicide. Few events have greater impact upon students, parents, and staff than suicide. The Student Services and Alternative Programs Branch staff is committed to providing technical assistance about effective youth suicide prevention, intervention, and postvention (i.e., support and assistance for those affected by a completed suicide).

Suicide continues to be a leading cause of death in the United States and in Maryland. According to the federal Centers for Disease Control and Prevention, suicide continues to be the third leading cause of death for youth in the United States and in Maryland. During 2004, Maryland lost 86 youth to suicide. The results of the 2005 Maryland Youth Risk Behavior Survey (YRBS) indicate that more than one in ten Maryland high school students reported making a plan to commit suicide in the past twelve months. The data demonstrate the importance of the statewide Youth Suicide Prevention School Program established in the Annotated Code of Maryland §7-503. The Maryland program establishes a shared responsibility between educational programs at the State and local levels and community suicide prevention and crisis center agencies. The statewide program includes:
- Classroom instruction about warning signs of suicide and suicide prevention strategies
- Maryland Youth Crisis Hotline at 1-800-422-0009 and local suicide and crisis hotlines
- Suicide intervention and postvention
- Data collection
- Teacher training
Maryland is the first state in the nation to require high school students to engage in service-learning activities as a condition of graduation. Each of the 24 school districts in Maryland implements the service-learning graduation requirement differently, because they tailor the specifics of their program to their local community.

In April 2008, the National Youth Leadership Council released the K-12 Service-Learning Standards for Quality Practice. There are eight national standards, in comparison to Maryland's seven Best Practices of Service-Learning. Most of the national standards have a direct corresponding match with one of Maryland's existing seven standards.

What makes a project meaningful and effective? High-quality experiences meet Maryland's Seven Best Practices of Service-Learning (now aligned with NYLC's K-12 Service-Learning Standards for Quality Practice). These projects allow students and teachers to:
1. Address a recognized need in the community
2. Achieve curricular objectives
3. Reflect throughout the service-learning experience
4. Develop student responsibility
5. Establish community partnerships
6. Plan ahead for service-learning
7. Equip students with the knowledge and skills needed for civic engagement

If you would like to evaluate the effectiveness of a service-learning project you currently offer or engage in, use our Seven Best Practice Evaluation Tool.
I have to determine all values of h for which A is invertible, and I really don't know what my first step should be. If anyone could guide me through this, that would be awesome. Here's the matrix:

1 1 0 1 1 0 0 1 0 1 2h + 1 0 1 1 h

You mean, then, the matrix written above. I see two ways to do that. One is to use the fact that a matrix is invertible if and only if its determinant is non-zero. The other is to row-reduce this to triangular form and use the fact that a matrix is invertible if and only if, when reduced to triangular form, it has no zeros on its main diagonal. Since a simple way of determining the determinant of a matrix is to reduce it to triangular form, those are essentially the same. That will give you a triangular matrix.

Now you also need to note that:
1) If you "add a multiple of one row to another", the determinant of a matrix is unchanged.
2) If you "multiply one row by a number", the determinant of a matrix is multiplied by that number.
3) If you "swap two rows", the determinant of a matrix is multiplied by -1.

Since you have not "multiplied one row by a number", the determinant of your original matrix must be the determinant of this triangular matrix: that is, a non-zero constant times h. The determinant of your original matrix is non-zero if and only if h is non-zero.
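As an aside, you can check this kind of problem mechanically with a computer algebra system. The 3x3 matrix below is only a hypothetical stand-in (the matrix in the question did not post cleanly), but the same pattern works for any square matrix containing a parameter:

    # Invertibility test via the symbolic determinant (SymPy).
    # The 3x3 matrix is an illustrative stand-in, not the poster's matrix.
    from sympy import Matrix, symbols, solve

    h = symbols('h')
    A = Matrix([[1, 1, 0],
                [1, 0, 1],
                [0, 1, h]])

    d = A.det()          # -h - 1 for this example
    print(d)
    print(solve(d, h))   # [-1]: A is singular exactly at h = -1

For this stand-in, A is invertible for every h except h = -1; your matrix is handled the same way once it is entered correctly.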
A bag contains n discs, made up of red and blue colours. Two discs are removed from the bag. If the probability of selecting two discs of the same colour is 1/2, what can you say about the number of discs in the bag?

Let there be r red discs, so P(RB) = (r/n)·((n−r)/(n−1)); similarly, P(BR) = ((n−r)/n)·(r/(n−1)). Therefore, P(different) = 2r(n−r)/(n(n−1)) = 1/2, giving the quadratic 4r² − 4nr + n² − n = 0. Solving, r = (n ± √n)/2.

If n is an odd square, √n will be odd, and similarly, when n is an even square, √n will be even. Hence n ± √n will be even and divisible by 2. In other words, n being a perfect square is both a sufficient and a necessary condition for r to be an integer and for the probability of the discs being the same colour to be 1/2.

Prove that n(n+1)/2 (a triangle number) must be a square for the probability of the discs being the same colour to be 3/4, and find the smallest n for which this is true. What does this tell us about n and n(n+1)/2 both being square? Can you prove this result directly?
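The algebra above is easy to confirm by brute force. This short script (an illustrative addition, not part of the original puzzle) searches small n for integer r satisfying the quadratic and checks that solutions appear exactly at the perfect squares:

    # Check: 4r^2 - 4nr + n^2 - n = 0 has an integer solution r
    # exactly when n is a perfect square.
    from math import isqrt

    for n in range(2, 200):
        roots = [r for r in range(n + 1)
                 if 4*r*r - 4*n*r + n*n - n == 0]
        if roots:
            assert isqrt(n) ** 2 == n   # n really is a perfect square
            print(n, roots)             # 4 [1, 3], 9 [3, 6], 16 [6, 10], ...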
Human reproduction is a complex and remarkable process. Women's and men's reproductive systems complement one another, and each is essential for reproduction. There are two types of sex cells involved in human reproduction: the male's sperm and the female's egg. An egg that has been fertilized by a sperm cell grows and divides in a woman's uterus (womb) throughout pregnancy until childbirth. The resulting child's genetic makeup comes from the sperm and egg cells produced by the father and mother.

The Female Reproductive System

The female reproductive system includes the:
- Vagina — a muscular passage that connects the cervix with the external genitals
- Cervix — the lower part of the uterus that connects to the vagina
- Uterus — a hollow, muscular structure in which the fertilized egg implants and the fetus grows during pregnancy
- Ovaries — two glands that produce eggs, as well as the female hormones estrogen and progesterone
- Fallopian tubes — two tubes that connect the ovaries with the uterus

During a woman's menstrual cycle, which usually lasts about 28 days, her body prepares for the possibility of a pregnancy. In the first half of the menstrual cycle, estrogen levels rise to thicken the lining of the uterus. At the same time, an egg begins to mature in one of the ovaries. Around the midpoint of the menstrual cycle (for example, day 14 of a 28-day cycle), a surge of luteinizing hormone (LH), which is produced by the pituitary gland in the brain, causes the mature egg to leave the ovary, a process called ovulation. In the second half of the menstrual cycle, fingerlike projections located at the opening of the fallopian tubes sweep the released egg into the tube toward the uterus. At the same time, rising levels of progesterone help prepare the lining of the uterus for pregnancy. If sperm cells are present at this time, the egg may become fertilized. If no sperm cells are present, the egg either dissolves or is absorbed into the body, no pregnancy occurs, hormone levels drop, and the thickened lining of the uterus is shed during the menstrual period.

If fertilization does occur, the fertilized egg grows and divides until it becomes a blastocyst, which is a hollow ball of cells. The blastocyst moves to the uterus, where it attaches itself to the lining, in a process called implantation. The blastocyst is nourished, and continues to grow and divide until it becomes an embryo, which eventually becomes a fetus. Pregnancy lasts for an average of 280 days, or about nine months, until the baby is ready for birth and moves from the uterus through the cervix and out of the vagina.

The Male Reproductive System

The male reproductive system includes the:
- Testicles, or testes — two oval-shaped organs that produce and store millions of tiny sperm cells, as well as male hormones, including testosterone
- Epididymis — two coiled tubes that connect each testicle to the vas deferens
- Scrotum — a pouch of skin that hangs outside the pelvis to hold and regulate the temperature of the testes
- Vas deferens — a muscular tube that transports sperm from the testes to the ejaculatory ducts
- Seminal gland and prostate gland — glands that produce seminal fluid
- Urethra — the tube that passes urine and semen out of the body
- Penis — the organ in which muscular contractions force sperm-containing semen out of the urethra

When a male is stimulated, sperm cells move out of the testes, through the epididymis, and into the vas deferens.
They are mixed with the whitish seminal fluid produced by the seminal and prostate glands to form semen. The penis then fills with blood and becomes erect, and muscles contract, forcing semen through the urethra and out of the male's body, a process called ejaculation. Each ejaculation can contain up to 500 million sperm.

When ejaculation occurs during intercourse, semen is deposited into the female's vagina. Sperm cells "swim" from the vagina through the cervix and uterus, toward the fallopian tubes. If a mature egg is present in one of the fallopian tubes, a sperm may penetrate and fertilize it.

Reviewer: Andrea Chisholm, MD | Review Date: 11/2012 | Update Date: 11/26/2012
When you are pregnant, it is important to eat a well-balanced, healthful diet. This includes getting the right amount of calories and key nutrients to support both you and your baby. Since the amount of calories you need will vary depending on age, weight, and physical activity, among other things, talk with your doctor about a calorie plan that is right for you.

Along with discussing the amount of calories and types of foods you need to consume to achieve a well-balanced, healthy diet during pregnancy, your doctor may also discuss the kinds of nutrients you will need. There are some key nutrients, like folic acid and iron, which deserve extra attention during pregnancy. Many women may also benefit from a vitamin supplement.

Women who are pregnant or may become pregnant should consume 600 micrograms of folic acid (i.e., folate) every day. This vitamin is most important during the first several weeks of pregnancy—often before a woman even knows she is pregnant. Getting enough folic acid can help prevent neural tube defects, such as spina bifida. Taking this vitamin may also help prevent birth defects like cleft lip and congenital heart disease.

You can meet this requirement by eating a variety of foods rich in folic acid. For extra insurance, you may also want to take a folic acid supplement before you become pregnant and through your first trimester. (If you are taking a prenatal vitamin that contains folate, you do not need a separate folic acid supplement.) Foods rich in folic acid include:
- Fortified breakfast cereal
- Whole-wheat breads
- Orange juice and citrus fruits
- Green leafy vegetables (e.g., spinach, broccoli, and romaine lettuce)

Iron is a mineral that helps red blood cells transport oxygen around the body. The recommended amount of iron for pregnant women is 27 milligrams (mg) per day. Not getting enough of this mineral can lead to iron-deficiency anemia and pregnancy complications. Good sources of iron include:
- Lean red meat
- Dried fruits
- Fortified breakfast cereals

Eating vitamin C-rich foods along with iron-containing foods can help with iron absorption. On the other hand, drinking tea or coffee at the same time can inhibit iron absorption. Because it can be difficult to get all the iron you need from food alone, it is often recommended that all pregnant women take a prenatal vitamin that contains the necessary amount of iron. Talk to your physician about iron supplementation.

Good sources of calcium include:
- Low-fat or nonfat dairy products (e.g., milk, yogurt, cottage cheese)
- Fish canned with bones
- Green leafy vegetables
- Fortified soy milk or rice milk
- Fortified orange juice
- Other calcium-fortified foods

If you do not eat dairy products or enough foods fortified with calcium, talk to your physician about calcium and vitamin D supplementation. (Vitamin D is necessary for the body to absorb and use calcium.)

During pregnancy the body becomes extra efficient at absorbing the nutrients in food. If you are eating a variety of healthful foods every day, you may not need a supplement, but many women may benefit from taking a prenatal multivitamin. Some may need only an iron or folic acid supplement. Talk to your doctor about your eating and lifestyle habits to determine if you should take a vitamin supplement.

No amount of alcohol has been shown to be safe in pregnancy. Therefore, it is recommended that you abstain from drinking until after your pregnancy. Most experts agree that having one or two cups of coffee or tea per day is fine during pregnancy.
However, some research has linked high intakes of caffeine (more than 300 mg per day) with greater difficulty conceiving and a higher rate of miscarriage. One cup of brewed coffee contains about 135 mg of caffeine, one shot of espresso contains about 35 mg, one brewed tea bag contains about 50 mg, and a 16-ounce serving of cola has about 50 mg. Talk to your doctor about how much caffeine you drink.

Seafood is an excellent source of omega-3 fatty acids, which are essential for the proper brain development of the fetus. Therefore, it is recommended that pregnant women include seafood, particularly fatty fish such as salmon, as a regular part of their diet. However, some seafood contains high amounts of mercury, a contaminant that can be harmful to the developing baby. Fish that should be avoided due to their high mercury content include tilefish, king mackerel, swordfish, albacore tuna, and shark. Good choices include salmon, sardines, catfish, canned light tuna, and shrimp. These are both high in omega-3 fatty acids and low in mercury.

To avoid the risk of foodborne illness, which could harm you and your developing baby, it is important to pay close attention to food safety during pregnancy. Here are some general guidelines:
- Wash your hands before eating or preparing food.
- When preparing food, avoid cross-contamination of raw meats or poultry with other foods.
- Cook meat to recommended temperatures.
- Thoroughly reheat leftovers.
- Avoid luncheon meats and hot dogs unless reheated until steaming.
- Drink only pasteurized juice and milk.
- Avoid raw or soft cheeses.

Most artificial sweeteners are considered safe for use in moderation during pregnancy, including acesulfame K (Sunett), aspartame (NutraSweet or Equal), and sucralose (Splenda). But more research is needed on saccharin (Sweet'N Low) and stevia; these should therefore be avoided by pregnant women.

Staying well-hydrated is important for the health of you and your baby, so try to drink at least 6-8 glasses of water a day. Other beverages, such as juice and soda, also contribute to hydration, but tend to be high in calories and low in nutritional value.

Reviewer: Dianne Scheinberg Rishikof MS, RD, LDN | Review Date: 03/2013 | Update Date: 00/31/2013
DNR to study declining moose population
by Curtis Gilbert, Minnesota Public Radio

ST. PAUL, Minn. — The Minnesota Department of Natural Resources is embarking on what it calls the largest study of moose deaths ever conducted. The study will help determine why Minnesota's moose population has declined almost 50 percent in the last six years.

This month the DNR will attach tracking collars and implant devices in the digestive tracts of 100 moose. Both the collars and devices will alert researchers when a moose has died. Scientists also plan to use GPS to track moose in the northeastern part of the state. Wildlife veterinarian Erika Butler said the goal is to autopsy each moose as quickly as possible.

"We're going to be doing everything we can to remove the carcass intact," Butler said. "And if that's not possible, we'll be doing extremely thorough field necropsies."

Butler said the DNR wants to know why moose populations are declining. "One thing that's been very clear for me working for the state of Minnesota is how much the state values moose as a species overall," she said. "You know all you really have to do is go up to Duluth or Ely or Grand Marais, and you know walk around in the bars and the shops and see all the moose paraphernalia everywhere. We know it's an iconic species for Minnesota, and we definitely have a connection with it."

Butler said the $1.2 million study should yield results in about two years. About half of the funding comes from state lottery proceeds.
Training step 5: What it means to be a Mission:Explorer.

Create a mashed-up and cross-cultural dance routine that reveals how water is important to different people across the world. Flash-perform your dance somewhere people would least expect it!

Dance across a place in as many different ways as you can. Cross a place by walking, stepping, treading, pacing, striding, strutting, tiptoeing, tripping, skipping, dancing, leaping, lumbering, stamping, tramping, toddling, staggering, lurching, reeling,...
Rotary engines based on the Wankel principle were developed with two fundamentally different approaches to cooling the rotor. Mazda, Audi, Suzuki, Ingersoll-Rand and others used the oil-cooled rotor. It is an expensive, heavier and more complicated design, which achieved specific fuel consumption in the range of .55 to .6 lb/HP-hr. This is about 15% to 20% poorer than a typical four-stroke piston engine. The other approach, taken by Outboard Marine Corporation (OMC), Fichtel & Sachs and Norton, was to use the incoming air-fuel mixture ("charge") to cool the rotor. This design was much lighter and less expensive and, through the use of roller bearings and very low rotor-cooling losses, achieved a specific fuel consumption between .45 and .5 lb/HP-hr, close to that of the four-stroke piston engine.

Historically, all of the charge-cooled rotary engines that were developed used an arrangement where the fuel-air mixture passed through the rotor from one side to the other. This design cooled the rotor unevenly, which lowered rotor-bearing life and increased friction between the rotor and the end housing.

In 1985 Moller International acquired the major rotary engine assets of OMC. OMC's main products were the Johnson and Evinrude outboard engines, and the company was the world leader, noted for its products' reliability. OMC reportedly spent over $200 million between 1970 and 1985 developing a number of different rotary engine models, including a 530cc-displacement model that went into volume production and was used in a snowmobile as a test product. Emissions requirements were one of the key motivators for this program. OMC believed that it would not be able to meet the emission standards proposed for the late 1980s with its two-stroke engines, and therefore chose to develop a lightweight, low-emission 4-stroke rotary engine. Fortunately for our Company, the proposed emission standards were not enacted as originally planned and OMC stayed with its two-strokes, allowing Moller International to purchase the rotary engine technology.

Since acquiring the OMC charge-cooled rotary design, Moller International has spent ~$35 million on further development, testing and product integration efforts related to its rotary engine, preparing it for use in its aeronautical products as well as in a wide range of other suitable applications.

Freedom Rotapower Engine

• High power-to-weight ratio
  - More than 2 HP per pound of installed weight in high-performance versions
  - Compares with .6 HP/lb to 1 HP/lb for 2-strokes and .3 HP/lb to .65 HP/lb for 4-stroke pistons
• High power-to-volume ratio
  - (Power output / volume) > 100 HP per cubic foot of installed volume
  - Compares with 36 HP/ft³ to 50 HP/ft³ for 2-strokes and 10 HP/ft³ to 20 HP/ft³ for 4-stroke piston engines
• Few moving parts
  - Only 2 moving parts for a single-rotor engine
  - Compares to 7 parts for a 2-stroke and 25 parts for a 4-stroke piston engine with the same instantaneous output torque
• Solid fuel economy
  - Specific fuel consumption < .45 lb/HP-hr (stratified charge); expect < .4 lb/HP-hr when both stratified-charged and turbocharged
  - Compares to .65 lb/HP-hr for 2-strokes and ~ .4 lb/HP-hr for the best 4-stroke piston
• Proven multi-fuel performer
  - Demonstrated on gasoline, natural gas, alcohol and propane
  - Spark-ignited diesel, kerosene and jet fuel
• Very low emissions levels
  - See Emissions Performance
• Enhanced energy at exhaust
  - Exhaust temperatures > 1500 °F
  - Acts like a naturally occurring thermal reactor
  - Ideal for turbocharging/co-generation applications
• Low vibration levels
  - Hard-mounted engine can be used as part of the structure
• Modular design
  - Stacking of rotors easily extends the range of available power
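For a feel for what those specific-fuel-consumption numbers mean in practice, hourly fuel burn is just power multiplied by SFC. The sketch below is purely illustrative: the 150 HP operating point is a hypothetical choice, and the SFC values are representative points taken from the figures quoted above.

    # Hourly fuel burn (lb/hr) = power (HP) x specific fuel consumption (lb/HP-hr).
    # Illustrative only: representative SFC values from the comparison above,
    # evaluated at a hypothetical 150 HP operating point.
    def fuel_burn_lb_per_hr(power_hp, sfc_lb_per_hp_hr):
        return power_hp * sfc_lb_per_hp_hr

    POWER_HP = 150
    for engine, sfc in [("charge-cooled rotary", 0.45),
                        ("oil-cooled rotary", 0.60),
                        ("two-stroke piston", 0.65),
                        ("best 4-stroke piston", 0.40)]:
        print(f"{engine}: {fuel_burn_lb_per_hr(POWER_HP, sfc):.1f} lb/hr")

At 150 HP, the difference between .45 and .60 lb/HP-hr amounts to roughly 22 lb of fuel every hour, which is why the charge-cooled design's lower SFC matters so much in weight-sensitive aeronautical applications.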
Hilbert's Building Blocks

Investigating space curves to construct 3-D forms

I have been interested in the area of computer-generated forms, mostly from the architectural viewpoint, for a long time. Most recently I have been investigating fractals as a way of generating 3-D forms. Not having much luck in getting results that could suggest reasonable 3-D forms, I moved back to some earlier work I did in 2-D with Hilbert curves, spirolaterals, space-filling curves, and recursive designs.

The image above on the left is the space-filling curve designed by the German mathematician David Hilbert. The adjacent image shows the three-segment "generator" for the Hilbert curve. One generator is connected to another by a connecting line segment. By definition, this type of curve will always remain in a two-dimensional plane. If you break the generator into forward moves and turns, and then modify the angle of the turn, the line segments will cross each other. This crossing enables the curve to trigger a move to another "level", which in turn determines the height of the curve. Variations can be developed by using a turning angle other than 90 degrees. Two such variations are shown below.

The second part of this investigation is the interpretation of the curve once it is generated. Each of the line segments and their vertices can be interpreted in three-dimensional, architectural terms. Select one of the above variations to view these interpretations individually and in combination:

walls: each line segment is constructed as a vertical plane
floors: for each set of line segments, the minimum and maximum extents are found and constructed into a horizontal plane
floor blocks: the horizontal floor plane is constructed into a volume
extended walls: walls are constructed from the bottom and the top, starting at their beginning level and extending either to the bottom or the top
columns: volumes are constructed at the vertices of the line segments and the floor blocks
beams: volumes are constructed along each line segment at the wall

The more I worked with these variations and their interpretations, the more sculptural the forms became; further studies will continue in both the sculptural and architectural form possibilities. The next set of forms will use spirolaterals and more generalized recursive curves for the initial form generation.

The forms currently exist only in this digital studio. My next goal is to generate STL files of the forms to send to a rapid prototyping system. Another possible direction would be to rewrite the generation software in AutoLisp for use within AutoCAD R13. This would also allow for the automation of the rendering of each variation. The entire idea of generating forms from specifications, having software develop alternative interpretive forms, and then going to physical models is very intriguing; these concepts will continue to be the general direction of this investigation.

A program written in Microsoft QuickBASIC is used to generate all of the three-dimensional components required for a particular variation in 3D DXF format. The DXF file is then imported into Autodesk 3D Studio for rendering. No manual modeling is required.
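To make that first step concrete, here is a small illustrative generator for the plain 2-D Hilbert curve, written as an L-system. This is a sketch, not the QuickBASIC program described above (which is not reproduced here); the 3-D variations would replace the fixed 90-degree turns with another angle and track the level changes at the crossings:

    # Illustrative 2-D Hilbert curve generator using the classic L-system:
    #   A -> -BF+AFA+FB-,   B -> +AF-BFB-FA+
    # F = move forward one step, '+' = turn left 90 deg, '-' = turn right 90 deg.
    def hilbert_vertices(order, step=1.0):
        rules = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}
        s = "A"
        for _ in range(order):
            s = "".join(rules.get(ch, ch) for ch in s)

        x = y = 0.0
        dx, dy = step, 0.0              # initial heading: along +x
        points = [(x, y)]
        for ch in s:
            if ch == "F":
                x, y = x + dx, y + dy
                points.append((x, y))
            elif ch == "+":             # rotate heading left
                dx, dy = -dy, dx
            elif ch == "-":             # rotate heading right
                dx, dy = dy, -dx
        return points

    print(len(hilbert_vertices(3)))     # 64 vertices for the order-3 curve

Each consecutive pair of vertices is one line segment, exactly the data that the architectural interpretations above extrude into walls, floors, columns and beams.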
These Web pages were constructed for use with Netscape 1.1. The initial programs, which I wrote for the two-dimensional versions and interpretations, were upgraded to handle three dimensions by Amy Ferguson, a Teaching Assistant in the College of Architecture. She also did some exhaustive studies of the possible variations and some of the studies leading to the renderings produced here.

For further information or comment contact: Robert
Last update: Friday, December 29, 1995
Copyright 1995 Robert J. Krawczyk. All Rights Reserved
The Sumxu is also known as the Chinese Lop-Eared Cat, Droop-eared Cat, Drop-eared Cat, or Hanging-Ear Cat. All the names refer to its main feature: pendulous ears. Nowadays, the breed is considered extinct. It is thought that the pendulous ears were the result of a mutation similar to the one that occurred in the Scottish Fold. All descriptions of the breed are based on the reports of travellers. In 1796, a German naturalist gave a rather detailed description of the Sumxu, when a droop-eared cat had been brought from China by a traveller. The breed was described as long-haired cats with glossy black, yellow or cream coats and pendulous ears. Most probably, they looked like longhaired Scottish Folds.

The Cat by Lady Cust (1870) has this brief description: Bosman relates that in the province of Pe-chily, in China, there are cats with long hair and drooping ears, which are in great favour with the Chinese ladies; others say this is not a cat but an animal called 'Samxces'.

Georges-Louis Leclerc, Comte de Buffon, described the cat in The Natural History of the Cat (Volume 4 of Histoire Naturelle, c. 1767, translated by William Smellie, 1781): Our domestic cats, though they differ in colour, form no distinct races. The climates of Spain and Syria have alone produced permanent varieties: to these may be added the climate of Pe-chi-ly in China, where the cats have long hair and pendulous ears, and are the favourites of the ladies. These domestic cats with pendulous ears, of which we have full descriptions, are still farther removed from the wild and primitive race than those whose ears are erect. I formerly remarked that, in China, there were cats with pendulous ears. This variety is not found anywhere else, and perhaps it is an animal of a different species; for travellers, when mentioning an animal called Sumxu, which is entirely domestic, say that they can compare it to nothing but the cat, with which it has a great resemblance. Its colour is black or yellow, and its hair very bright and glittering. The Chinese put silver collars about the necks of these animals, and render them extremely familiar. As they are not common, they give a high price, both on account of their beauty, and because they destroy rats.

Jean Bungartz also described the Chinese Lop-Eared Cat or Hanging-Ear Cat in his book Die Hauskatze, ihre Rassen und Varietäten (Housecats, Their Races and Varieties), part of Illustriertes Katzenbuch (An Illustrated Book of Cats), Berlin, 1896: The Chinese or Lop-Eared Cat is most interesting because it provides proof that by continual disuse of an organ, the organ withers. With the Chinese cat, the hearing and ears have deteriorated. Michel says the Chinese not only admire the cat in porcelain, but also value it for culinary reasons. The cats are regarded as special morsels and enjoyed particularly with noodles or with rice. This cat is bred particularly for the purpose of meat production and is a preferred Chinese morsel; this is not unusual if one considers that the Chinese consume much the sight of which revolts the stomachs of Europeans. The poor creature is confined in small bamboo cages and fattened like a goose on plentiful portions. There is extensive trade with other parts of Asia, and the canny Chinese allow no tomcats to be exported, so there is no interference in this lucrative source of income. Due to the restrictive conditions that have deprived the cat of its actual use, its hearing has decreased because it is no longer needed for hunting its own food.
With no need for watchfulness, it had no need of sharp hearing to listen for hidden things, so its hearing became blunted and, as a natural consequence, its ears lost their upright nature, gradually becoming lower and turning into the hanging ear that is now the characteristic feature of the Chinese cat. At first impression this is a surprising and amusing look, but this impression is lost with closer examination. If one ignores the characteristic of the ears, one sees a beauty similar to the Angora cat: a long, close coat of hair, albeit less rich, covers the body. The hair is silky-soft and shining, and the colour is usually isabelline or a dirty whitish yellow, although some have the usual colouring of the common housecat. In size it is considerably larger and stronger than a housecat. The ears hang completely, as with our hunting dogs, and are large in relation to the cat. Although the Chinese cat is found in considerable numbers in its homeland, it is rarely found at European animal markets. Only one such cat has reached us in the flesh; we acquired it years ago when a sailor returning from China brought it into Hamburg. The accompanying illustration is based on this cat. In character it is like the Angora cat and somewhat languid. It prefers to live by a warm fire, is rather sensitive to flattery, hears badly and is at its most animated when given milk or food. Apart from its unusual ears, it has no really attractive characteristics and is a curious specimen of housecat.

In Frances Simpson's The Book of the Cat (1903), contributing author H.C. Brooke wrote: There is said to be a variety of Chinese cat which is remarkable for its pendent ears. We have never been able to ascertain anything definite with regard to this variety. Some years back a class was provided for them at a certain Continental cat show, and we went across in the hope of seeing, and if possible acquiring, some specimens; but alas, the class was empty! We have seen a stuffed specimen in a Continental museum, which was a half-long-haired cat, the ears being pendent down the sides of the head instead of erect; but we do not attach much value to this.

In 1926, H.C. Brooke wrote in the magazine Cat Gossip that for many years Continental cat shows had offered prizes for the Drop-eared Chinese Cat. On each occasion the cat failed to materialise, and Brooke considered it to be mythical. Other writers suggested the folded or crumpled ears were the result of damage or haematomas. Brooke wrote that although no one ever saw the cat itself, one always met "someone who knows someone whose friend has often seen them". Brooke himself had been assured by a Chinese gentleman he had met only once that "he knew them well". H.C. Brooke and several other cat fanciers contacted the Chinese Embassy and Carl Hagenbeck's animal exchange in Hamburg, and also a "certain well known author, who has lived for years in China and knows that country well", but their enquiries bore no fruit. The search for this cat became so intense in the 1920s that the American Express Company instructed their representatives at Shanghai and Peking to make enquiries with the wild-animal dealers who supplied zoos. They also had no success finding a Chinese Lop-Eared Cat for western cat fanciers. Brooke said that in 1882 he had seen a stuffed specimen in a Continental museum. The specimen was "half-coated with yellowish fur", and Brooke admitted it might have been a fake or a cat with its ears deformed by canker.
With all avenues of enquiry finally exhausted, Brooke declared the Chinese Drop-eared Cat extinct. The last reported sighting of the Chinese Lop-Eared Cat was in 1938, when a droop-eared cat was imported from China. On that last occasion the mutation was believed to occur only in white longhaired cats.
●The more budgets are cut and taxes increased, the weaker an economy becomes.
●Austerity is the government's method for widening the gap between rich and poor, which leads to civil disorder.
●Until the 99% understand the need for federal deficits, the upper 1% will rule.
●To survive long term, a monetarily non-sovereign government must have a positive balance of payments.
●Those who do not understand the differences between Monetary Sovereignty and monetary non-sovereignty do not understand economics.

Unless you're an ultra right-winger, you probably agree with the scientific consensus: we are in a period of global warming which, at least in part, is caused by humans. But that brings us to the question(s): Is global warming negative for the world and the life on it – including humans – and should we do everything possible to slow it, if not stop it altogether?

The media have answered, "Yes," and have focused on the claimed negatives, which as a result are well known. Increases in:
1. Number and severity of storms – hurricanes, tornadoes, blizzards, rain, lightning, tsunamis
2. Droughts, heat waves and (ironically) cold waves, desertification
3. Flooding, pollution
4. Volcanic activity
6. Food shortages
7. Species extinction
8. Spread of tropical diseases

But global warming is more than a simple recitation of presumed negatives. The world is enormously complex, and not only are these negatives far from certain, but perhaps too little attention has been paid to the potential positives of global warming. For instance, global warming could help prevent future glaciation periods and could open millions of acres to agriculture. Maybe.

Tellingly, few people know that today we live in an ice age:

An ice age, or more precisely, a glacial age, is a period of long-term reduction in the temperature of the Earth's surface and atmosphere, resulting in the presence or expansion of continental ice sheets, polar ice sheets and alpine glaciers. Glaciologically, ice age implies the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, we are still in the ice age that began 2.6 million years ago at the start of the Pleistocene epoch, because the Greenland and Antarctic ice sheets still exist.

Much of the earth's history has been warmer than today. Discussions of global warming often begin with the Arctic. Here are excerpts from an article in New Scientist magazine:

Industries make a dash for the Arctic
03 October 2012, by Fred Pearce, Sara Reardon and Catherine Brahic

Last week, the Inuit-owned Nunavut Resources Corporation hit Wall Street asking for $18 million to help prospect half-a-million square kilometres of the Kitikmeot region in northern Canada. They expect to find gold, diamonds, platinum and lithium. . . . the shrinking ice cap will have profound consequences for the rest of the planet – including changed weather patterns and water distribution – and the region's biota has undergone vast transformation.

Most commentators expect the Arctic to play a key role in meeting the world's energy needs in the 21st century. The US Geological Survey (USGS) says the continental shelves are the largest area on the planet not yet explored for oil and gas. It estimates that the Arctic contains 30 per cent of the world's undiscovered natural gas, more than 80 per cent of it offshore. From the geology, the USGS reckons that the biggest oil and gas reserves will be off the north shore of Alaska, and beneath the Kara and Barents seas.
Russia's Yamal Peninsula already supplies around a fifth of the world's natural gas. Exploration and mining activities are booming, bringing infrastructure such as roads, ports and new settlements. London-based insurers Lloyd's earlier this year forecast that up to $100 billion of investment would pour into the Arctic in the next decade.

Extracting hydrocarbons in the Arctic is scarcely new. Coal has been mined there for more than a century. But a combination of global shortages, rising prices, technical advances and the exposure of wide areas of the Arctic Ocean during summer melts is triggering an explosion of activity.

Inevitably, as global warming melts the ice, industry will enter – and pollute. On balance, will this prove to be beneficial? And, "beneficial to whom?"

Nearly a million visitors go to the Arctic each year. They account for more than 80,000 hotel-nights on the Norwegian island of Svalbard. Even greater numbers visit Greenland, where they easily outnumber the local population of just 55,000 people. Canada's Cambridge Bay – a stop on the North-West Passage – has seen a 30 per cent jump in tourists visiting the town in the past five years, with six cruise ships dropping anchor annually. The World – a giant residential vessel calling itself the world's largest private mega-yacht – sailed through the North-West Passage for the first time in August. It was the largest passenger vessel to make the trip without an icebreaker to escort it.

As the sea ice melts, sailing passages open, and more people not only will visit, but will live in today's remote northern climes.

Warmer waters and a 20 per cent increase over the past decade in the volume of algae that sustain the marine food chain mean there are more fish in the Arctic than ever before. And less ice means more open ocean in which to catch them. The number of voyages by fishing vessels in the Canadian Arctic increased sevenfold, to 221, between 2005 and 2010. The Inuit of Nunavut now run six factory ships trawling for turbot and other species in Baffin Bay and the Davis Strait, up from none 10 years ago. Climate change is altering the region's fish population, as warmer water temperatures further south push commercial fish stocks into the Arctic Circle. According to the US National Oceanic and Atmospheric Administration's fisheries service, six species of fish have recently extended their range north through the Bering Strait into the Beaufort Sea in the Arctic. They include the Pacific cod, walleye pollock and Bering flounder.

New fishing waters will open, providing relief to currently overfished areas.

Burning oil helped melt Arctic ice in the first place. Now the estimated 90 billion barrels beneath it – 13 per cent of the world's remaining total – promise profit to anyone able to reach them. Oil companies have operated onshore in every Arctic nation for decades, but the new frontier is offshore . . .

A melted Arctic pushes back the date on which we will "run out of" energy, giving us more time to develop new sources.

Mining is big business in the Arctic. Russia's Norilsk mine is the world's largest producer of nickel and palladium, and Alaska's Red Dog mine is the world's largest source of zinc. More record-beaters are set to break ground. Last month, the Nunavut environmental assessment agency gave the green light for the metals giant ArcelorMittal to dig an open-pit iron-ore mine on 170 square kilometres of tundra at Mary River on Baffin Bay, Canada.
The $4 billion project will be connected to a port in Baffin Bay by the world's most northerly railway. The south-west coast, around Kvanefjeld (Greenland), probably holds the world's second-largest deposit of rare earth elements and huge reserves of uranium and zinc – all together valued at almost half a trillion dollars. Last month, Greenland Minerals of Perth, Australia, announced plans to carry out a feasibility study. The project could keep miners busy for 100 years.

It seems like only yesterday that we read about shortages of rare earths threatening computer development.

Receding sea ice is opening up the Arctic to shipping. The North-East Passage, linking the North Atlantic to the Pacific via the Arctic waters north of Russia, was open for five months in 2011. More than 30 ships passed through, including a 120,000-tonne Russian gas tanker and Nordic and Japanese iron-ore carriers taking Arctic minerals to China. The shortcut to Asia halves the shipping time from northern Europe to China to roughly 20 days, and avoids pirate-infested shipping lanes in the Indian Ocean. Russia expects a 40-fold increase in shipping along the route by 2020. American analysts say it could be carrying 5 per cent of the world's shipping by 2050.

Bottom line: No one knows what the long-term effects of global warming will be, and not knowing, no one can say whether on balance they will be beneficial or not. Even the concept of "on balance . . . beneficial" is shaky. "Beneficial" for whom and for what? Even if we focus on "beneficial for humans," are we talking about the long term or the short term? Survival? Life span? Society? Progress? Happiness? Is there something about global warming that will help humans to better health in the short term, but give us less ability to survive in the long term? Will it assist tribal society at the expense of "modern" society? And what do we mean by "progress" and "happiness"?

Robert Burns wrote: ". . . foresight may be vain: The best-laid schemes o' mice an' men gang aft agley," and the longer we try to peer into the future, the more "agley" our best-laid schemes become. The universe, and our world in it, are victims of chaos, where "small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general" (Wikipedia).

We can't predict which volcanoes will erupt, nor what wars will be fought, nor the status of the stock market, nor the next coronal mass ejection, nor the next pandemic, nor scientific progress in a thousand areas. And we can't predict the effects of global warming. At best, we can try to address our immediate problems and hope our efforts will bode well for the long term.

We can and should try to reduce air, water and ground pollution. We can and should try to find cures for diseases. We can and should try to prevent wars, to make cars safer to drive, to improve the education of our children, to explore the solar system and to save our forests. But I suspect our efforts to reduce global warming are misplaced. We simply do not know what we are doing. Global warming very well could be what saves the human species.

Rodger Malcolm Mitchell

Nine Steps to Prosperity:
1. Eliminate FICA
2. Medicare — parts A, B & D — for everyone
3. Send every American citizen an annual check for $5,000 or give every state $5,000 per capita
4. Long-term nursing care for everyone
5. Free education (including post-grad) for everyone
6. Salary for attending school
7. Eliminate corporate taxes
8. Increase the standard income tax deduction annually
9. Increase federal spending on the myriad initiatives that benefit America's 99%

No nation can tax itself into prosperity, nor grow without money growth. Monetary Sovereignty: Cutting federal deficits to grow the economy is like applying leeches to cure anemia.

Two key equations in economics:
Federal Deficits – Net Imports = Net Private Savings
Gross Domestic Product = Federal Spending + Private Investment and Consumption – Net Imports
On November 26, 1941, a White House aide named Henry Field was summoned to the office of Franklin Roosevelt's secretary, Grace Tully, for what seemed like a bizarre assignment. Tully instructed Field, one of the president's bright young staffers, to produce, as quickly as possible, the names and addresses of all Japanese Americans, whether born in Japan or America. The assignment was "of the utmost urgency," said Tully, adding, "Use your own judgment to achieve results causing the least possible chance of a breach in security." This was eleven days before Pearl Harbor.

That same day Secretary of State Cordell Hull issued what amounted to an ultimatum to two top Japanese diplomats, ambassador to the U.S. Kichisaburo Nomura and special envoy Saburo Kurusu. "Nomura," writes John Toland in his book Infamy: Pearl Harbor and Its Aftermath, "was too stunned to talk," while Kurusu instantly saw that this would be regarded in Tokyo as "an insult." Having placed Japan under strain of severe economic sanctions, the United States now was showing no willingness to negotiate a way out of the impasse short of a Japanese humiliation. This was the day Roosevelt both ensured war with Japan and began preparing for the incarceration of Japanese Americans when the war came.

America today is once again on a path to war—this time with Iran—and the road is dotted with many of the same signposts seen in Roosevelt's path to war seventy years ago. Like Roosevelt in his dealings with Japan, President Barack Obama has helped place Iran under severe strain of economic sanctions. Like Roosevelt, he has received from the adversary signals of flexibility in the search for a mutually satisfactory solution. Like Roosevelt, Obama has rebuffed those overtures. Roosevelt was under pressure from Britain's prime minister Winston Churchill to hang tough, and Obama is under similar pressure from Israel's Benjamin Netanyahu.

There may be one big difference, but we can't know for sure. While the historical record shows clearly that Roosevelt actually wanted war with Japan, it isn't clear this is Obama's desired outcome. If it is, his actions make sense. If not, his approach seems reckless. For there should be no mistaking the reality that the United States and Iran are on a collision course, as reflected in the ongoing negotiations between the so-called P5+1 (the United States, Britain, France, China, Russia and Germany) and Iran. The next session is set for June 18–19 in Moscow, and this session isn't likely to lead to a blowup, not least because Obama has a large political incentive to keep the talks going at least through the November election. But the last session in Baghdad seemed to indicate that, if there is indeed any prospect for a negotiated settlement, Obama and the other P5+1 powers aren't demonstrating any interest in exploring it.

To understand this dynamic, it is helpful to review events leading up to the next negotiating session. Any such review should take into account the recent writings of Seyed Hossein Mousavian. The former spokesman for Iran's nuclear negotiations team and also Iranian ambassador to Germany for seven years, Mousavian is now a research fellow at Princeton. He was arrested by Iranian president Mahmoud Ahmadinejad on charges of espionage in 2007 but was acquitted by the country's judiciary. He is the author of a recently published book called The Iranian Nuclear Crisis: A Memoir.
In his writings and public speaking, Mousavian disputes those in the West who declare Iran is bent on developing nuclear weapons. As he said in an interview with the Middle East Institute, "I am confident that Iran is not seeking to have nuclear weapons." Indeed, in the spring of 2005, Iran, in negotiations with European powers, offered to convert its enriched uranium to fuel rods, which would have precluded the country from using it for nuclear weapons. That was rejected by Britain at America’s insistence, says Mousavian. Later, in 2010 and 2011, Iran offered to limit its enrichment to 5 percent if the West would provide fuel rods for peaceful nuclear uses. Shortly thereafter, Russia put forth a "step-by-step" plan designed to break the impasse. Both times the United States balked, leading Russia’s then prime minister Vladimir Putin to suggest publicly that the West’s real design was regime change in Iran (a prospect guaranteed to generate powerful nuclear incentives in Tehran). Against this backdrop, Mousavian sees a possible avenue of peace. Iran is willing to curtail its nuclear program and accept transparency measures, he says, so long as the West recognizes Iran’s right to enrich uranium up to 5 percent, which is allowed under the Non-Proliferation Treaty, of which Iran is a signatory. This should satisfy Americans who want to see from Iran some form of confidence-building gesture. But he adds that Iran wants confidence-building gestures as well, and these should be in the form of some gradual lifting of sanctions. Under this concept, Iran and its negotiating adversaries could craft a step-by-step process designed to build confidence on both sides and reach an accommodation based on Iran giving up nuclear-weapon ambitions but retaining an ability to enrich uranium for peaceful purposes. "We take a step, you take a step," says Mousavian.
Having a colonoscopy might be pretty low on Latino adults' to-do lists. Even hearing the term "colonoscopy" might make some people a bit squeamish. But it can also save your life.

Just take it from Armida Flores, a promotora—a trained community health worker—at the Institute for Health Promotion Research (IHPR) at The UT Health Science Center at San Antonio. Flores spends her days helping Latinos confront cancer and illness. She knows first-hand that Latinos don't get screened for colon cancer often enough. In fact, a new study found that only 28 percent of U.S. Latinos have had colon cancer screening, compared to 36 percent of African-Americans and 44 percent of whites.

Because of these things, she began to worry about her own health and decided to schedule a colonoscopy, which can help identify colon cancer. "I was a little bit nervous about it but, to my surprise, the procedure was not too bad," Flores said. "I was asleep, so I did not feel any pain or discomfort." After explaining the procedure using simple medical terms, the doctor even offered to pray with her, an extra comfort that Flores welcomed. "The procedure was fast and the staff was caring."

The night before the procedure, Flores had trouble sleeping because of the liquid laxative solution she had to drink. However, she was surprised to discover that the liquid laxative, usually known for its horrible taste, actually wasn't bad. "The taste was okay, it was kind of salty and sweet," she recalled.

The doctor found two small polyps in Flores' colon that he was able to remove easily. Flores eliminated potential dangers to her health just by deciding to take action. She urges Latinos not to put themselves at risk just because of fear. "I think people are scared because of the word or because they heard something negative about it," Flores said. "But a colonoscopy could save their life."

Amelie G. Ramirez, DrPH, directs the Institute for Health Promotion Research at the UT Health Science Center at San Antonio, which researches Latino health issues and founded the SaludToday Latino health blog, Twitter and Facebook. Dr. Ramirez, an internationally recognized cancer health disparities researcher, has spent 30 years directing research on human and organizational communication to reduce chronic disease and cancer health disparities affecting Latinos, including cancer risk factors, clinical trial recruitment, tobacco prevention, obesity prevention, healthy lifestyles, and more. She also trains/mentors Latinos in behavioral sciences and is on the board of directors for LIVESTRONG, Susan G. Komen for the Cure, and others. She was elected to the Institute of Medicine (IOM) of the National Academies in 2007.
Chapter 4: Knowing What to Get

How do you document your decision?

It's now time to document the recommended technology solution that has emerged as a result of your thinking and analysis. The purpose of doing this is to present to the key decision makers in your organization (or to consider yourself) enough information for them to approve, modify or reject your recommendations. (If you feel that the likely outcome is rejection, then you are better served by developing a stronger case before presenting it.) Even if the decision-making process is very informal, or if few people are involved, it is usually still a worthwhile exercise to document your plan as a check on its viability and your own thoroughness. If you can't articulate it, you may be missing a key element.

A business case is the most useful format in which to prepare such documentation. It not only includes a description of your recommended solution, but also documents the anticipated costs and benefits. In short, it should give key decision makers all the information they need to make an approval decision. See Figure 4.4, Business Case Suggested Table of Contents.

For further information about the content of Technology @ Your Fingertips, please contact [email protected].
Paternalism, the controlling of all aspects of an employee's life by the employer, was characteristic of many nineteenth- and early twentieth-century North Carolina mills and factories. The roots of paternalism were evident in an earlier era, when southern slaveholders came to regard taking good care of their slaves as of primary importance. Although partly a humanitarian concern, this focus on slaves' welfare derived mostly from business considerations; sufficient food, housing, medical care, and clothing kept slaves at least outwardly content and enabled them to work more efficiently. This understanding of the importance of the quality of life of one's workforce continued to motivate owners in the tenancy system and later in the creation of mill villages.

Paternalism was the philosophical and fiscal underpinning of many North Carolina cotton mill villages, which were organized as "company towns" to keep workers and their families satisfied and thus loyal and more productive. Paternalistic mill owners also claimed the right to discipline employees. Violators of specific rules and laws were first warned, then fired and made to vacate their houses after a second offense. Drunkenness, spouse abuse, sexual immorality, and stealing were some of the most serious offenses, and only a small legal force, usually one man, was needed for the entire village.

World War II essentially brought an end to paternalism, as most North Carolina mill villages and all of their homes, hospitals, libraries, and even community buildings were incorporated into neighboring towns.

Jacquelyn Dowd Hall and others, Like a Family: The Making of a Southern Cotton Mill World (1987).

Harriet L. Herring, Passing of the Mill Village: Revolution in a Southern Institution (1949).

Oral Histories of the American South, Piedmont Industrialization: Employer Paternalism. DocSouth, UNC Libraries: http://docsouth.unc.edu/sohp/browse/themes.html?theme_id=4&category_id=15&subcategory_id=131

Image: Free dental dispensary for school children, Erlanger Mills, Lexington, NC, Davidson County, October 1918. From the Dr. George M. Cooper Photograph Collection, North Carolina State Archives, call #: PhC_41_161_4. Available from http://www.flickr.com/photos/north-carolina-state-archives/3059000658/ (accessed October 12, 2012).

1 January 2006 | Purcell, Gene
Cobalamin is a family of complex molecules, consisting of a cobalt-containing tetrapyrrole ring and side nucleotide chains attached to the cobalt atom.4 It is synthesized by anaerobic bacteria and is found in foods of animal origin (e.g., fish, meat, dairy products, and eggs), as well as fortified cereals.5–7 The Recommended Daily Allowance (RDA) of vitamin B12 is 2.4 micrograms per day (mcg/day) for persons over the age of 14 years. In the United States, the average adult dietary intake of vitamin B12 is about 5 mcg–30 mcg per day, of which only 1 mcg–5 mcg are effectively absorbed, given the vitamin's complex absorption process. It is estimated that only 50% of dietary vitamin B12 is absorbed by healthy adults.7 Defects at any step of the absorption process can cause cobalamin deficiencies of varying degrees; 50%–90% of cobalamin stores in the body (3 mg–5 mg) are located in the liver. These stores help delay, often for up to 5 years, the onset of clinical symptoms due to insufficient cobalamin absorption.

Dietary cobalamin is bound to animal proteins. In the stomach, hydrochloric acid (HCl) and pepsin are critical for the release of free cobalamin from these proteins. Glycoproteins called R-proteins (R), secreted by the salivary glands and parietal cells, bind free cobalamin in the stomach. Intrinsic factor (IF), a weak binder of cobalamin in the presence of R, is also released by parietal cells in the stomach. In the duodenum, dietary- and bile-secreted cobalamin-R complexes are cleaved by pancreatic enzymes, and free cobalamin is then bound to IF with greater affinity. Cobalamin–IF complexes are taken up by endocytosis, by adhering to cubilin receptors located on the distal ileal mucosa. Once inside the cell, cobalamin dissociates from IF. Free cobalamin is then bound to transporter proteins: transcobalamin (TC) I, II, and III, and transported to the liver. TC II represents about 10% of total transcobalamin and is the only cobalamin-transport protein that reaches target cell receptors. This biologically active form of the vitamin can be taken up by cells via endocytosis for metabolic purposes. About 1%–5% of free cobalamin is also absorbed across the intestinal mucosa via passive diffusion.1 This enables the absorption of high doses (at least 1 mg daily) of oral supplemental cobalamin, despite absorption-disease processes. Enterohepatic circulation is another important source of vitamin B12: cobalamin released through bile is reabsorbed in the ileum on a daily basis.8

The active forms of cobalamin (methylcobalamin and adenosylcobalamin) serve as co-factors for enzymes and exert their physiologic effects in the cytoplasm and the mitochondria. In the cytoplasm, methylcobalamin is a co-factor for methionine synthase, an enzyme necessary for two major cellular processes: 1) the conversion of homocysteine to methionine; and 2) the conversion of methyl-tetrahydrofolate (MTHF), the major circulating form of folate, to tetrahydrofolate (THF), the active form of folate, which is important for nucleotide and DNA synthesis. In the mitochondria, adenosylcobalamin catalyzes the conversion of methylmalonyl Coenzyme A (CoA) to succinyl-CoA, for lipid and protein metabolism.6 Disruptions in these pathways produce elevated levels of homocysteine (Hcy) and methylmalonic acid (MMA), respectively.
Hcy is known to be neurotoxic through overstimulation of the N-methyl-D-aspartate (NMDA) receptors, and toxic to the vasculature through activation of the coagulation system and adverse effects on the vascular endothelium.9 MMA, a product of methylmalonyl-CoA, can cause abnormal fatty-acid synthesis, affecting the neuronal membrane.8 MMA and Hcy levels are elevated before any clinical manifestations of vitamin B12 deficiency and often precede low serum vitamin B12 levels.5 Neuropsychiatric symptoms usually precede hematologic signs and are often the presenting manifestation of cobalamin deficiency.10–12

Definitions of vitamin B12 deficiency vary and usually rely on population statistics to establish normal serum-level thresholds (normal range: 180 pg/ml–900 pg/ml). This can be problematic because individual metabolic requirements vary, and active disease can be present despite a "normal" level. False-negative results also occur because vitamin B12 levels are altered by the concentration of its binding proteins, and radioimmunoassays may detect inactive forms of cobalamin that can mask tissue deficiencies of active cobalamin. Studies have found that relying on serum vitamin B12 levels underestimated the prevalence of elevated metabolites that indicate tissue deficiency by as much as 50%.13 As deficiency develops, serum values may be maintained at the expense of tissue cobalamin. Thus, a serum-cobalamin value above the lower normal cutoff point does not necessarily indicate adequate cobalamin status. A deficiency spectrum ranging from subclinical metabolic abnormalities to clinical symptoms could be better delineated by measuring Hcy and MMA levels2,4,14,15 or by measuring cobalamin bound to TC II (holo-transcobalamin) levels, which represent the active form of the vitamin.16,17 A recent study in elderly persons (N=700) found holo-transcobalamin (holo-TC) to be the best predictor of cobalamin deficiency when compared with other measures (serum cobalamin, Hcy, and MMA), and recommended it as the first-line measure in assessing cobalamin status,18 but results have been inconsistent,19 and further research is warranted.

It is estimated that between 3% and 40% of older adults have vitamin B12 deficiency, with lower rates seen in the community and higher rates in institutional settings.1,8,20–22 Prevalence rates vary according to economic status, age, and dietary choices.5,23 In a multi-ethnic study, elderly white men had higher deficiency prevalence rates than elderly black or Asian American women.24 The elderly population is especially at risk for cobalamin deficiency, given its higher prevalence of atrophic gastritis and other GI pathology, as well as the use of medications that can interfere with B12's absorption and/or metabolism: 10% to 30% of older people are unable to adequately absorb vitamin B12 from foods.23 Currently, food-cobalamin malabsorption syndrome is estimated to cause most vitamin B12 deficiency, accounting for 60%–70% of cases, followed by pernicious anemia (PA), an autoimmune loss of secretion of intrinsic factor, which accounts for 15%–20% of cases.1 In food-cobalamin malabsorption, there is an inability to release cobalamin from food or transport proteins, thus affecting absorption, even though unbound cobalamin can be adequately absorbed. Vitamin B12 status relies not only on maintaining an adequate nutritional intake, but also on ensuring an appropriate absorption process.
Many different factors and conditions can interfere with this process, leading to deficiency states. The following are causes of and risk factors for cobalamin deficiency:1,25–29

1. Food-cobalamin malabsorption: atrophic gastritis (>40% in elderly persons); chronic gastritis; and drug interactions, including metformin or commonly prescribed drugs that increase gastric pH, such as histamine receptor-2 antagonists (H2-blockers), proton-pump inhibitors (PPIs), and antacids. These drugs may also promote small-intestinal bacterial overgrowth (SIBO), present in 15%–50% of elderly patients, which may further increase the risk of vitamin B12 deficiency.
2. Autoimmune: pernicious anemia, Sjogren's syndrome.
3. Surgical: post-gastrectomy syndrome, ileal resection.
4. Decreased intake or malnutrition: vegetarians; chronic alcoholism; elderly people.
5. Intestinal malabsorption: chronic pancreatitis (exocrine insufficiency), Crohn's disease, Whipple's disease, celiac disease, amyloidosis, scleroderma, intestinal lymphomas or tuberculosis, tapeworm infestation, bacterial overgrowth.
6. Drugs: metformin, antacids, H2-blockers, PPIs, colchicine, cholestyramine, anticonvulsants, antibiotics, nitrous oxide.
7. Increased demands: pregnancy and lactation.
8. Genetic: transcobalamin II deficiency.

Cobalamin is critical to CNS functioning and brain aging status.30 Its deficiency can cause not only brain dysfunction but structural damage, producing neuropsychiatric symptoms via multiple pathways. Possible mechanisms that could explain neuropsychiatric symptoms in cobalamin-deficiency states include 1) derangements in monoamine neurotransmitter production, as cobalamin and folate stimulate tetrahydrobiopterin (BH4) synthesis, which is required for monoamine synthesis;13 2) derangements in DNA synthesis; and 3) vasculotoxic effects and myelin lesions associated with secondary increases in Hcy and MMA levels, respectively.31–33 Cobalamin deficiency may also indirectly cause a functional folate-deficiency state with its secondary metabolic consequences: high Hcy levels, decreased monoamine production, decreased S-adenosylmethionine (SAM) production, and abnormal methylation of phospholipids in neuronal membranes, potentially affecting ion channels and second messengers.13

[Figure 1. Recommendations for Cobalamin Screening and Supplementation. Cbl: serum cobalamin; MMA: serum methylmalonic acid. (a) PO (oral) supplementation is preferred unless it has been proven ineffective or compliance is limited; an alternative, parenteral approach is cyanocobalamin 1,000 mcg IM daily for 1 week, then weekly for 1 month, and monthly thereafter.]

In depression, disruption of the methylation (one-carbon transfer) reactions in the CNS necessary for the production of monoamine neurotransmitters, phospholipids, and nucleotides34,35 may be a mechanism contributing to pathology.
Cobalamin is also required for the synthesis of SAM, which is known to have antidepressant properties.36

In cognitive impairment, a proposed underlying pathophysiologic mechanism involves cobalamin deficiency leading to hyperhomocysteinemia (HHcy), a risk factor for dementia, by causing 1) agonism of N-methyl-D-aspartic acid (NMDA) receptors, leading to excessive intracellular calcium influx and cell death; 2) a state of hypomethylation, leading to DNA damage and apoptosis; 3) inhibition of hippocampal neurogenesis; 4) decreased gamma-aminobutyric acid (GABA)-mediated inhibitory function; and 5) blood–brain barrier (BBB) dysfunction and endothelial cell toxicity. Cobalamin deficiency has also been found to cause myelin damage by increasing myelinotoxic and decreasing myelinotrophic growth factors and cytokines.32,37 In the absence of HHcy, however, there is less evidence to suggest that cobalamin deficiency is a risk factor for dementia.37 Low cobalamin levels have also been reported in normal-control subjects and in non-demented patients with other neurologic diseases.33

Radiologic manifestations of low vitamin B12 or HHcy include 1) leukoaraiosis (periventricular leukoencephalopathy or subcortical arteriosclerotic encephalopathy), manifested as white-matter hypodensity on CT scan or hyperintensity on T2-weighted MRI; 2) brain atrophy; and 3) silent brain infarcts. These findings have also been associated with other conditions, and, in the absence of HHcy, some studies have not found an increased risk of leukoaraiosis in people with low vitamin B12 levels.37

Neuropsychiatric symptoms due to vitamin B12 deficiency, which have been described since the early 1900s, often precede hematologic abnormalities.10,38 Commonly described neuropsychiatric manifestations associated with vitamin B12 deficiency include motor, sensory, and autonomic symptoms; cognitive impairment; and mood and psychotic symptoms. The incidence of neuropsychiatric symptoms among individuals with vitamin B12 deficiency has been reported to be 4%–50%.33 These symptoms include paresthesias, ataxia, proprioception and vibration loss, memory loss, delirium, dementia, depression, mania, hallucinations, delusions, personality change, and abnormal behavior.10,39–42

Neurologic symptoms have been the hallmark of vitamin B12 deficiency for many years, especially subacute combined degeneration (SCD) of the spinal cord in the context of pernicious anemia. In this condition, myelin in the lateral and posterior columns of the spinal cord degenerates secondary to cobalamin deficiency. It is now well known that neurologic signs and symptoms can develop before or in the absence of hematologic findings.10,42–44 These signs and symptoms include paresthesias, ataxia, proprioception and vibration loss, abnormal reflexes, bowel/bladder incontinence, optic atrophy, orthostatic hypotension, and autonomic disturbances.
Clinical neurologic manifestations can correlate with radiologic findings in the spinal cord, and reversibility has been reported with early cobalamin treatment.45–47

Psychosis can be the presenting symptom of vitamin B12 deficiency.48 The association of psychotic symptoms with cobalamin deficiency has been described for more than a century, through case reports and other studies.10,12,13,38–40,48–50 Reported symptoms include suspiciousness, persecutory or religious delusions, auditory and visual hallucinations, tangential or incoherent speech, and disorganized thought process.13 A causal association has been suggested since the early 1980s, when EEG abnormalities were documented in patients with pernicious anemia.12 Both the EEG abnormalities and the psychotic symptoms associated with cobalamin deficiency have responded to treatment with vitamin B12, strengthening the association.10,12,50 Another study found lower levels of vitamin B12 in patients with psychotic versus nonpsychotic depression.34 Some reports and studies have recognized the association of psychosis and cobalamin deficiency specifically in older adults,1,8,39 where, given the higher prevalence of vitamin B12 deficiency, more cases are expected.

A growing body of literature has documented an association between vitamin B12 deficiency and depressive symptoms in elderly patients.51,52 These studies go well beyond case-series reports and include large-scale, cross-sectional, and prospective studies. Similar associations had been found with low folate and hyperhomocysteinemia, but a cross-sectional study of community-dwelling individuals older than age 55 with depressive symptoms (N=278) found that vitamin B12 deficiency was independently associated with depression, whereas low folate and high homocysteine levels were associated with cardiovascular disease and physical comorbidity.51 In this study, causality was debated because it could not be demonstrated whether low vitamin B12 levels preceded depression or resulted from it, even though no relationship was found between self-reported decreased appetite and low vitamin B12 levels. These results were consistent with an earlier study of community-dwelling older women (N=700), which found a twofold risk of severe depression in elderly women with metabolically significant (elevated MMA levels) vitamin B12 deficiency.52 A community-based, cross-sectional study in Chinese elderly persons (N=669) reported that vitamin B12 deficiency (<180 pmol/liter) was significantly associated with depressive symptoms (odds ratio [OR]: 2.68), independent of folate and homocysteine levels.53 Another recent cross-sectional and prospective study, in Koreans older than age 65 without depression (N=521), found that lower baseline vitamin B12 and folate levels, and raised Hcy levels, were risk factors that predicted the onset of late-life depression.54 These findings suggest an important association between vitamin B12 levels and depressive symptoms, supporting the approach of measuring and replacing vitamin B12 in the treatment of depression in clinical practice.
Adequate vitamin B12 levels may also play a role in depression treatment response; a naturalistic prospective study (N=115) of outpatients with major depressive disorder (MDD) reported that adequate levels of vitamin B12 correlated with a better response in the treatment of depression.55 There is currently no recommendation to use vitamin B12 prophylactically for depression, and a randomized placebo-controlled trial in elderly men did not find that vitamins B12, B6, and folate reduced the severity or incidence of depressive symptoms over 2 years.56 Nevertheless, we recommend ensuring adequate vitamin B12 levels and replacing deficient levels in order to improve treatment response.

Symptoms of mania have been described in the presence of vitamin B12 deficiency for decades,39 even though very few case reports have been published in subjects without other comorbidities that could contribute to such symptoms.57–60 Given the pathophysiologic mechanisms described above leading to white-matter lesions, and the known association of white-matter lesions with bipolar disorder,61 we believe it is very likely that manic symptoms can be associated with vitamin B12 deficiency. We recognize that further research is needed to better understand this association and its possible mechanisms, yet we recommend screening for and supplementing vitamin B12 when appropriate in the presence of mania, especially when there is no psychiatric or family history of bipolar disorder.

Low serum vitamin B12 levels have been correlated negatively with cognitive functioning in healthy elderly subjects.62,63 The association of vitamin B12 deficiency and cognitive dysfunction has been extensively documented,25,33,64–66 and some authors state that it can be linked credibly to mental decline.67 Symptoms described include slow mentation, memory impairment, attention deficits, and dementia.25,33,41 It has been suggested that low vitamin B12 levels may cause a reversible dementia that can be differentiated from Alzheimer's disease through neuropsychological evaluation,68 but other authors argue that there is insufficient evidence to support a specific profile of cognitive impairment associated with vitamin B12 deficiency69 and that dementia of the Alzheimer type is a compatible diagnosis.37 In patients with Alzheimer's disease, low vitamin B12 levels have been associated with greater cognitive impairment.70 When considering dementia related to vitamin B12 deficiency, an important challenge has been addressing causality, because the decline in functioning and the changes in nutrition associated with dementia can themselves cause vitamin B12 deficiency.
A temporal association has not been consistently documented: some cohort studies have shown that low vitamin B12 levels increase the risk of cognitive impairment or dementia,65,71–76 whereas other studies have not demonstrated an increased risk.37,76–84 The evidence is more consistent when HHcy is present, and vitamin B12 deficiency can lead to HHcy, a risk factor for cognitive impairment and dementia.37 The reversibility of this dementia syndrome has also been questioned, given that studies reviewing large series of cases or decades of literature have yielded only one and three cases, respectively, of reversibility with vitamin B12.39,85 The evidence for response to treatment is better when pernicious anemia has been identified as the cause of vitamin B12 deficiency and it has been treated early in the course of the disease, before irreversible damage occurs.37 We acknowledge that the severity and chronicity of symptoms, as well as comorbid conditions and the adequacy of treatment, are all important factors affecting response and reversal of symptoms.

Current guidelines suggest assessing vitamin B12 levels in patients with cognitive impairment, or as part of a workup for dementia. We believe this remains sound clinical judgment until newer evidence can clarify the issue, as vitamin B12 deficiency can lead to HHcy, a known risk factor for dementia. If vitamin B12 deficiency is diagnosed and treated early in the course of the disease, neuropsychiatric symptoms may be prevented or even reversed.

The hallmark of delirium remains a fluctuating level of consciousness, with attention deficits. Vitamin B12 deficiency has been associated with attention deficits, acute mental-status changes, and acute cognitive changes with EEG abnormalities.13,86 Case reports describe associations of vitamin B12 deficiency and delirium with or without other risk factors such as dementia and infection.87,88 In a prospective study of patients with mild-to-moderate dementia whose low vitamin B12 levels were supplemented, delirium risk was reduced significantly; however, no long-term improvement was seen in cognition or behavioral problems.89

Screening for vitamin B12 deficiency should start with a clinical awareness of the populations at risk. These include elderly persons, vegans, alcoholics, malnourished persons, and patients with GI pathology, neuropsychiatric symptoms, or autoimmune diseases. Common suggestive laboratory findings include macrocytosis, with or without anemia, and hypersegmented neutrophils. Special attention should also be given to patients on medications such as PPIs, H2-receptor antagonists, antacids, metformin, colchicine, and cholestyramine, and to patients chronically on anticonvulsants or antibiotics.

Serum cobalamin levels are unreliable when assessing vitamin B12 status, and there has been a lack of scientific consensus on cutoff values for diagnosing deficiency states.3,90 However, until better diagnostic tools are available, initial screening should start with a serum cobalamin level.
An adequate supply is suggested by levels above 350 pg/ml.22,29 We recommend assessing MMA in elderly patients when cobalamin levels are below 350 pg/ml.20,26 If MMA levels are elevated, rule out other possible causes of elevated MMA, including renal insufficiency and intravascular volume depletion.25 Patients taking antibiotics may have low levels of MMA despite vitamin B12 deficiency, because propionic acid, a precursor of MMA,14,25 is generated by the anaerobic gut flora, which are depleted by chronic antibiotic use. HHcy can be more sensitive to cobalamin deficiency, but it can also reflect folate deficiency, whereas elevated MMA has similar sensitivity but more specificity for metabolic vitamin B12 deficiency.15 Assessment of holo-TC, the active fraction of cobalamin, may also provide reliable information for evaluating vitamin B12 status,18 but further research is warranted.

Vitamin B12 deficiency is suspected when serum cobalamin levels are low (<350 pg/ml) and both MMA and Hcy are elevated; when MMA is elevated in the absence of renal disease or volume depletion; or when Hcy is elevated in the absence of folate deficiency. Several conditions can falsely elevate or decrease serum cobalamin levels, but normal MMA and Hcy levels suggest the absence of vitamin B12 deficiency.2 However, clinical judgment is warranted, as it has been reported that some patients may improve clinically when supplemented with vitamin B12 despite normal levels of vitamin B12, Hcy, and MMA, especially when PA is present.91 When PA is suspected, or if patients fail to respond to oral, transnasal, or buccal cobalamin preparations, antiparietal cell and anti-intrinsic factor antibodies should be tested. Alternatively, if the cost of assessing MMA levels (e.g., availability, financial, time) exceeds the diagnostic benefit, we recommend doing a risk/benefit analysis and considering vitamin B12 supplementation, without further testing, in patients in whom deficiency is suspected and serum cobalamin levels are less than 350 pg/ml.

Both parenteral and oral routes have demonstrated efficacy in treating vitamin B12 deficiency due to food-cobalamin malabsorption. A systematic review of randomized, controlled trials of oral (PO) versus intramuscular (IM) vitamin B12 for the treatment of cobalamin deficiency found adequate efficacy with both routes of administration.92 PO supplementation is usually more cost-effective and convenient, and is therefore the preferred route of initial therapy. Recommended PO doses range from 125 mcg/day–1,000 mcg/day of crystalline cyanocobalamin to 1,000 mcg/day–2,000 mcg/day,28,93,94 with a mean dose of 1,000 mcg/day being common practice. It is reasonable to initiate therapy with vitamin B12 or with a multivitamin or B-complex supplement containing at least 1,000 mcg of cobalamin daily. Transnasal and buccal cobalamin preparations are also available. The IM route should be initiated in cases where PO, transnasal, or buccal preparations are ineffective or compliance is limited.
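The interpretation rules above amount to a small decision procedure. The Python sketch below is one way to make them concrete; the function name, its boolean inputs, and the messages are illustrative assumptions rather than a validated clinical algorithm, and the thresholds simply restate the figures quoted in this section.

```python
# Illustrative sketch of the screening logic described above.
# Thresholds and branches follow the text; this is not a clinical tool.

CBL_CUTOFF_PG_ML = 350  # cobalamin level below which MMA testing is suggested

def assess_b12_status(cbl_pg_ml, mma_elevated=None, hcy_elevated=None,
                      renal_insufficiency=False, volume_depletion=False,
                      folate_deficiency=False):
    """Return a rough interpretation of vitamin B12 status.

    All inputs other than cbl_pg_ml are hypothetical booleans standing in
    for the laboratory and clinical findings discussed in the text.
    """
    if cbl_pg_ml >= CBL_CUTOFF_PG_ML:
        return "Adequate supply suggested (cobalamin >= 350 pg/ml)."
    if mma_elevated is None:
        return "Low cobalamin: measure MMA (and Hcy) before deciding."
    if mma_elevated and not (renal_insufficiency or volume_depletion):
        return "Deficiency suspected: elevated MMA without renal disease or volume depletion."
    if hcy_elevated and not folate_deficiency:
        return "Deficiency suspected: elevated Hcy without folate deficiency."
    return "Deficiency not confirmed; use clinical judgment (e.g., consider PA)."

# Example: low cobalamin with elevated MMA and normal renal function
print(assess_b12_status(250, mma_elevated=True, hcy_elevated=True))
```

As the passage stresses, clinical judgment overrides any such rule: patients have reportedly improved with supplementation despite normal cobalamin, Hcy, and MMA values, especially in the setting of pernicious anemia.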
The parenteral treatment for most cases of vitamin B12 deficiency, where a dietary deficiency is not implicated, involves IM administration of cyanocobalamin in doses of 100 mcg/month–1,000 mcg/month, indefinitely.8,28,95,96 An alternate scheme recommends administering 1,000 mcg/day IM for 1 week, then weekly for 1 month, and then monthly thereafter.28 Once treatment has been initiated, obtain repeat plasma vitamin B12, MMA, and Hcy levels in 4-to-6 weeks to assess response to treatment. Once stable mid-normal cobalamin levels are achieved, monitoring of vitamin B12 levels should be performed every 6-to-12 months. Even though supplementation with vitamin B12 has been proven safe, hypokalemia has been reported when treating patients with severe anemia.33,64

Evidence for the improvement or reversal of neuropsychiatric symptoms varies according to symptom severity, duration, and clinical diagnosis. It has been proposed that treating deficiencies in the early stages yields better results, as structural and irreversible changes in the brain may occur if left untreated. Vitamin B12 status has been associated with the severity of white-matter lesions, especially periventricular lesions, in some,97 but not all, studies.98 The partial reversal of white-matter lesions has been documented with cobalamin treatment,32,91 emphasizing the importance of early detection and treatment of vitamin B12 deficiency. A correlation of vitamin B12 treatment with decreases in MMA and total Hcy has been shown,10 suggesting a reversal of metabolic abnormalities. Some evidence suggests that EEG, visual and somatosensory evoked potentials, and P300 latency abnormalities readily improve with treatment even if no clinical benefits are observed.13,33

Vitamin B12 deficiency is a common and often missed problem in geriatric patients. Neuropsychiatric manifestations can be the presenting and only sign of this deficiency, even in the absence of hematologic abnormalities. Vitamin B12 deficiency can occur despite "normal" serum cobalamin levels; therefore, measuring Hcy and MMA can decrease false-negative findings. Early detection and treatment are important to prevent structural and irreversible damage leading to treatment-resistant symptoms. Oral treatment can be as efficacious as parenteral treatment, even in the presence of pernicious anemia. Because the neurologic damage caused by cobalamin deficiency is often irreversible, and progression of disease can be abated by cobalamin replacement, it is important to maintain plasma cobalamin levels in the mid-normal range among elderly persons.

1. Screen annually for vitamin B12 deficiency in at-risk patients by measuring serum cobalamin levels.
2. Measure MMA levels in patients with serum cobalamin levels <350 pg/ml. High levels, in the absence of renal insufficiency or volume depletion, are suggestive of vitamin B12 deficiency.
3. If there is no access to MMA testing, or the cost outweighs the diagnostic benefit, a clinical approach can be to supplement after a risk/benefit analysis and monitor for response when low serum cobalamin levels (<350 pg/ml) are present in the context of suggestive clinical findings.
4. Administer cyanocobalamin 1,000 mcg PO daily, even if pernicious anemia has been identified. An alternative parenteral treatment is cyanocobalamin 1,000 mcg IM daily for 1 week, then weekly for 1 month, and then monthly thereafter.
5. Evaluate for pernicious anemia by requesting antiparietal cell and anti-intrinsic factor antibodies in patients with clinical symptoms of subacute combined degeneration of the spinal cord and suggestive hematologic manifestations (e.g., macrocytic anemia). Patients with pernicious anemia require lifetime cobalamin supplementation.
6. Monitor vitamin B12 serum levels at least yearly in patients who have stopped supplementation after symptoms have improved or cobalamin levels have been replenished.
7. Maintain plasma vitamin B12 levels in the mid-normal range (400 pg/ml–500 pg/ml) to reduce the risk of developing vitamin B12-related neuropsychiatric disorders.
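As a worked illustration of the alternative parenteral schedule in recommendation 4 (daily for 1 week, then weekly for 1 month, then monthly), together with the 4-to-6-week laboratory recheck mentioned in the treatment section, here is a minimal Python sketch; the start date and the number of monthly doses printed are arbitrary assumptions, not part of the guideline.

```python
# Sketch of the alternative parenteral schedule from recommendation 4:
# cyanocobalamin 1,000 mcg IM daily for 1 week, then weekly for 1 month,
# then monthly thereafter. Start date and horizon are assumptions.
from datetime import date, timedelta

def im_schedule(start, months_of_monthly=3):
    doses = [start + timedelta(days=d) for d in range(7)]           # daily, week 1
    doses += [doses[-1] + timedelta(weeks=w) for w in range(1, 5)]  # weekly, ~1 month
    doses += [doses[-1] + timedelta(days=30 * m)                    # monthly thereafter
              for m in range(1, months_of_monthly + 1)]
    return doses

start = date(2013, 1, 7)  # hypothetical start date
for d in im_schedule(start):
    print(d.isoformat())

# Repeat cobalamin, MMA, and Hcy levels 4-to-6 weeks after starting therapy:
print("Recheck labs around:", (start + timedelta(weeks=5)).isoformat())
```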
US space agency Nasa has launched two spacecraft that are expected to make the first 3D movies of the Sun.

[Image caption: CMEs will typically throw a billion tonnes of matter into space]

The Stereo mission will study violent eruptions from our parent star known as coronal mass ejections (CMEs). The eruptions create huge clouds of energetic particles that can trigger magnetic storms, disrupting power grids and air and satellite communications. The mission is expected to help researchers forecast magnetic storms - the worst aspects of "space weather".

"Coronal mass ejections are a main thrust of solar physics today," said Mike Kaiser, the Stereo project scientist at the US space agency's (Nasa) Goddard Space Flight Center. "With Stereo, we want to understand how CMEs get started and how they move through the Solar System."

The mission comprises two spacecraft, lofted on a Delta-2 rocket from Cape Canaveral, Florida. The two near-identical satellites will orbit the Sun, but one of them will move slightly ahead of the other, to provide stereo vision. Technical hitches had delayed previous launch attempts.

Coronal mass ejections erupt when "loops" of solar material lifting off the Sun suddenly snap, hurling a high-temperature (hundreds of thousands of degrees) plasma into space. The plasma is formed of electrons and ions of hydrogen and helium. A CME will typically contain a billion tonnes of matter and move away from the Sun at about 400km/s. Much of the time, these outbursts are directed away from the Earth, but some inevitably come our way. When they do, the particles, and the magnetic fields they carry, can have highly undesirable effects.

"When a big storm hits and the conditions are just right, you can get disturbances on power grids and on spacecraft - they are susceptible to high-energy electrons and protons hitting them," Dr Kaiser told BBC News. "These particles are hazardous to astronauts; and even airline companies that fly polar routes are concerned about this because CMEs can black out plane communications, and you can get increased radiation doses on the crew and passengers. If we know when these storms are going to hit, we can take preventive action."

At the moment, solar observatories, because they look at the Sun straight on, have great difficulty in determining the precise direction of a CME. By placing two spacecraft in orbit to view the Sun-Earth system from two widely spaced locations, scientists will be able to look at the storms from the side - and work out very rapidly whether a cloud of plasma is going to hit our planet.

"In solar physics, we make a remarkable leap in understanding either by producing new instruments that have better resolution, so you can probe deeper into the Sun or see structures you've never seen before; or by going to a different vantage point," said Stereo program scientist Dr Lika Guhathakurta. "This is where Stereo comes in; it is not that its instrumentation is a breakthrough in terms of resolution, but it will see the Sun in all its 3D glory for the first time - all the way from the surface of our star out to the Earth. It's going to be spectacular."

The Stereo spacecraft each carry 16 instruments. These include telescopes, to image the Sun at different wavelengths, and technologies that will sample particles in CMEs. The UK has a significant role on the mission, having provided all the camera systems on board the spacecraft. It has also delivered a Heliospheric Imager (HI) for each platform.
[Image caption: The spacecraft are identical apart from a few structural details]

This instrument will follow the progress through space of a bubble of plasma by tracing its reflected light. The engineering demands on the British team have been exacting.

"The reflected light from these coronal mass ejections is extremely faint," explained Dr Chris Eyles of the University of Birmingham. "It is typically [100 trillion] times fainter than the direct light from the Sun's disc, so we have to use a sophisticated system of baffles to reject that direct light. Critical to the HI's operation has been cleanliness of assembly. If we get dust particles, fibres or even hairs on critical surfaces inside the instrument, they would scatter sunlight and destroy the performance of the instrument."

The Stereo spacecraft will send their data straight to the US National Oceanic and Atmospheric Administration (Noaa), the agency which makes the space weather forecasts used worldwide by satellite and airline operators. The new information is expected to lengthen the advance warning forecasters are able to give - from the current few hours to a couple of days. With our ever increasing dependence on spacecraft in orbit - for communications and navigation - the Stereo mission comes not a moment too soon.

[Image caption: Cleanliness is paramount in the instruments' preparation]

Earth's magnetic field gives the planet and its inhabitants a good measure of protection, but with space agencies seemingly intent on sending astronauts to the Moon and even to Mars in the next few decades, there is a pressing need for a fuller understanding of the Sun's activity. Moon or Mars bases will have to be carefully designed shelters, and astronauts will need very good advice before deciding to venture too far from such protection.

August 1972 saw a solar storm that is legendary at Nasa. It occurred between two Apollo missions, with one crew just returned from the Moon and another preparing for launch. If an astronaut had been on the Moon at the time, they might have received a 400 rem (Roentgen Equivalent Man) radiation dose. Not only would this have caused radiation sickness, but without rapid medical treatment such a sudden dose could have been fatal.

Dr Chris Davis from the UK's Rutherford Appleton Laboratory underlined the power of CMEs. "The energy in a CME is typically about 10-to-the-power-of-24 joules. That is the same as a bus hitting a wall at 25mph a billion, billion times. It's 100 times the energy stored in the world's nuclear arsenal," he said.

How the mission unfolds:
- The spacecraft launched on a trajectory that goes past the Moon
- The lunar swingby will position the spacecraft in widely spaced orbits
- One will lead the Earth in its orbit, the other will lag behind
- Over the course of their mission, the twins will continue to separate
- Their different views will be combined to make 3D movies of CMEs
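Dr Davis's comparison is easy to sanity-check with a few lines of arithmetic. The sketch below assumes a bus of roughly 12 tonnes (the article gives no figure); with that assumption, the kinetic energy at 25mph multiplied by "a billion, billion" (10^18) lands within a factor of two of the 10^24 joules he quotes.

```python
# Rough check of the CME energy comparison quoted above.
# The ~12-tonne bus mass is an assumption; the article gives no figure.
bus_mass_kg = 12_000
speed_ms = 25 * 0.44704                              # 25 mph in metres per second
kinetic_energy = 0.5 * bus_mass_kg * speed_ms ** 2   # ~7.5e5 J per impact
cme_energy = kinetic_energy * 1e18                   # "a billion, billion times"
print(f"{cme_energy:.1e} J")                         # ~7.5e23 J, of order 10^24 J
```

The bus mass only needs to be right to within a factor of a few for the order-of-magnitude comparison to hold.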
The findings are published in today's issue of Nature.

"This calcium transporter really is an important key to understanding how the heart is regulated," said Dr. Donald Hilgemann, professor of physiology and senior author of the study. "At every beat, calcium in heart cells increases. And it's calcium that is the messenger to the heart to get it to contract. We knew for a long time that NCX1 brings calcium into and out of heart cells by exchanging it for sodium. And in doing so it generates important electrical currents in the heart. The surprise is that this transporter dances more than just that old waltz from Vienna. It knows Salsa!"

The research reveals two new modes of operation of NCX1. First, the membrane protein can move sodium into heart cells without moving calcium out. This mode generates an electrical current, independent of calcium transport, that contributes to excitation of the heart. The second mode is to move calcium into heart cells without generating any electrical current. This mode, Dr. Hilgemann said, may determine the calcium that remains in heart cells after each beat and thereby determine the strength of cardiac contraction over many beats.

Using so-called "giant membrane patch" techniques together with highly sensitive ion detection techniques, both developed and implemented by Dr. Hilgemann, UT Southwestern researchers were able to determine precisely how NCX1 works as an ion exchanger: how many calcium and sodium ions move across the membrane, when they are exchanged, and, surprisingly, when they move together.

"Transporters move ions across membranes by grabbing hold of them and transferring the energy of one type of ion to another type, just one …
Published November 30, 2012

This piece is part of Water Grabbers: A Global Rush on Freshwater, a special National Geographic News series on how grabbing land—and water—from poor people, desperate governments, and future generations threatens global food security, environmental sustainability, and local cultures.

Mayor Daouda Sanankoua had traveled overnight by boat to see me, through flooded forests and submerged banks of hippo grass. There was no other way. Sanankoua's domain, the district of Deboye in the heart of Mali in West Africa, is on the edge of the Sahara. Yet Sanankoua's homeland is mostly water. His people live by catching fish, grazing cattle, and harvesting crops in one of the world's largest and most fecund wetlands, a massive inland delta created by the meandering waters of one of Africa's mightiest waterways, the Niger River. Nearly two million Malians live on the delta.

"Everything here depends on the water," said the mayor. "But"—and here he paused gravely, pushed his glasses down an elegant nose, and began waving a long finger—"the government is taking our water. They are giving it to foreign farmers. They don't even ask us."

What is happening here in Mali is happening all over the world. People who depend on the natural flow of water, and the burst of nature that comes with it, are losing out as powerful people upstream divert the water.

As the mayor talked in the schoolyard of Akka village, on an island in the heart of the Niger inland delta, women rushed around putting straw mats on the ground and bringing bowls of food. By torchlight, we savored a supper of smoked fish, millet porridge, and green vegetables, all products of the waters around us.

This aquatic world, a green smudge on the edge of the Sahara 250 miles (402 kilometers) across, seemed well. It is a major wintering ground for millions of European birds. On the way to Akka, I constantly grabbed binoculars to watch birds I knew from back home. In England, kingfishers are rare; here they seemed to be everywhere. There were other European water birds in profusion, like cormorants and herons, along with endangered local birds such as the black crowned crane.

Without being too romantic, there seemed to be a remarkable degree of harmony between nature and human needs. I saw the Bozo people, the delta's original inhabitants, ply their canoes from dawn to dusk, casting nets that catch an estimated 100,000 tons of fish a year—from the ubiquitous Nile perch and bottom-living cichlids to favorite local species that live only amid roots in the flooded forests. The Bambara, founders of the great 13th-century Mali Empire, planted millet and rice in the delta mud as the waters receded. By the early 19th century the Fulani had arrived from across West Africa to graze their cattle and goats on the aquatic pastures of hippo grasses. There have been disputes, of course, but for the most part, by concentrating on different activities, the different groups have been able to respect each other's rights to harvest the wetland over generations. All the scientific evidence suggests that nature thrived too—until recently.

For the mayor was clear that the waters are receding. Fish catches are down. The flooded forests are being left high and dry. He fears his world could soon be gone. His people are doing their best to cope.
The following morning, I watched the women of Akka scrape channels in caked and cracked soils on the edge of the village, in an effort to persuade water from the lake to reach their kitchen gardens. Each year, it got harder, they said.

Diverting the Niger River

Some blame failing rains and changing climate for this crisis on the delta. Not so, said the mayor. Upstream diversions of water are to blame.

Back on dry land, I found the source of the mayor's ire just a few miles away, where engineers were constructing concrete barrages to tame the Niger River's flow and digging canals to divert its water just before it enters the wetland. The aim is to provide water for Chinese sugar farms, Libyan rice growers, and German-, French-, and American-funded agricultural development schemes, in a region managed by a government irrigation agency called the Office du Niger. The government sees such development as the route to modernizing its agriculture through encouraging foreign investment. But critics say ministers in Bamako, the capital, are oblivious to the shortage of water that is a critical constraint on achieving this goal.

The Office du Niger already presides over a quarter of a million acres (roughly 100,000 hectares) of irrigated rice fields. That land takes 8 percent of the river's flow, according to the agency's records. That figure can rise to 70 percent in the dry season, says Leo Zwarts, a Dutch government hydrologist who is a leading authority on the Niger River.

The local engineer in charge of the main diversion structure on the river, the Markala barrage, agrees. Sitting on the riverbank beside the massive dam-like structure, Lansana Keita told me that he and his colleagues often failed to ensure the release of 1,413 cubic feet (40 cubic meters) a second, the official minimum flow of water downstream into the wetland. "We do our best, but irrigation has priority," he said.

That was evident. During the dry months, there is often more water in the canals that lead from the barrage to the fields than there is in the river itself as it heads for the delta. As a result, the delta is already diminishing. Zwarts estimates that existing abstractions—diversions—have cut the area of delta that is flooded annually by an average of 232 square miles (600 square kilometers), killing many flooded forests and expanses of hippo grasses. He has a pair of graphs that show how the amount of fish sold in local markets goes up and down with the size of the delta inundation the previous year. In recent years, both have been declining.

But that is just the start. Behind Keita was a large metal sign displaying a map of the domain of the Office du Niger. It showed small areas painted green where there is already irrigation, and much larger areas painted yellow to show where irrigation is planned. All three main canals from the barrage were being enlarged during my visit. The government eventually wants to irrigate ten times more land than today, and is bringing in foreign companies to do it. They are offered free land and as much water as they need.

Zwarts predicts that the diversions could soon take the entire flow of the Niger River during the dry season. Add to that the impact of a hydroelectric dam planned farther upstream by the government of Guinea, and Zwarts says the delta could dry up every fourth year. The Mali government does not confirm this analysis, but its own figures show that a fall in water levels of just one foot would dry out half of the delta.
In an interview, the (now former) head of the Office du Niger said the government's targets for minimum flows will protect the delta. But he also said his office is tasked with increasing irrigation for agriculture. When I pointed out that these two goals seem to be in contradiction, he declined to comment.

Mali's Water Deals

This won't all happen overnight. Political unrest in the north of Mali in recent months has discouraged foreign investment. A multiyear aid scheme funded by the U.S. government's Millennium Challenge Corporation to irrigate some 35,000 acres and turn herders into rice farmers was terminated a few months early, although many Malians did receive farm supplies. But a 50,000-acre sugar scheme masterminded by the Chinese state-owned China Light Industrial Corporation for Foreign Economic and Technical Co-operation is close to completion. And other projects are expected to follow once peace returns, including the biggest of them all, a Libyan plan to grow rice on a quarter-million acres (roughly 100,000 hectares). The huge diversion canal for what is known as the Malibya project is already dug and full of water.

Critics of these megaprojects say the government of Mali is blind to the damage the water abstractions will do to the wetland, a mysterious region where officials seldom go. "The government is so obsessed with getting investment for its agriculture that it cannot see when that investment will do more harm than good to its people," Lamine Coulibaly of the National Coordination of Peasant Organizations of Mali told me.

Jane Madgwick, head of Wetlands International, a science-based NGO based in the Netherlands that is working with people on the delta, agrees. Far from filling the bellies of Malians, "these projects will decrease food security in Mali, by damaging the livelihoods of those most vulnerable," she says.

Water Grabbing: A Global Concern?

The situation in Mali may be part of an emerging global pattern. From the papyrus swamps of Lake Victoria in East Africa to the flooded forests of Cambodia's Great Lake, from the dried-up delta of the Colorado in Mexico to the marshes of Mesopotamia, those living downstream have been at the mercy of those they call water grabbers. Some—like those in the Niger Delta—worry that they may become victims of the "next Aral Sea," the doomed body of water in Central Asia that was once the world's fourth largest inland sea.

Half a century ago, Soviet engineers began to grab its water to grow cotton. Over a few decades, they largely emptied the sea and created a giant new desert. Today, the formerly profitable fishing fleets and fertile wet-delta pastures are all gone. The surrounding region is poisoned by salt blown from the dried-up seabed, the climate is changing, the people are departing, and most of the sea is a distant memory. Madgwick of Wetlands International says that what Mali plans for the inner Niger Delta would be similar, "a human catastrophe as vicious and shameful as the drainage of the Aral Sea."

Out on the delta today, the Bozo and Bambara and Fulani people await news of their fate.

Fred Pearce is a journalist and author on environmental science. His books include When the Rivers Run Dry and The Land Grabbers, both for Beacon Press, Boston. He writes regularly for New Scientist magazine, Yale Environment 360, and The Guardian, and has been published by Nature and The Washington Post.
Call it the fish version of instant messaging. When a fish is injured, it secretes a compound that makes other fish dart away (as seen in the latter half of the sped-up video above, when the red light flashes). The substance, named Schreckstoff (German for "scary stuff"), protects the entire community of fish, but no one knew how it worked. Now they do, thanks to an analysis of fish mucus reported today in Current Biology. The key ingredient in Schreckstoff is a sugar called chondroitin sulfate, which is found in abundance in fish skin. When the skin is torn, enzymes break the compound down into sugar fragments that activate an unusual class of sensory neurons known as crypt cells in other fish. And the fish take off.
viyh writes with coverage on MSNBC of the discovery of ancient microbes fossilized in the gut of a termite. "One hundred million years ago a termite was wounded and its abdomen split open. The resin of a pine tree slowly enveloped its body and the contents of its gut. In what is now the Hukawng Valley in Myanmar, the resin fossilized and was buried until it was chipped out of an amber mine. The resin had seeped into the termite's wound and preserved even the microscopic organisms in its gut. These microbes are the forebears of the microbes that live in the guts of today's termites and help them digest wood. ... The amber preserved the microbes with exquisite detail, including internal features like the nuclei. ... Termites are related to cockroaches and split from them in evolutionary time at about the same time the termite in the amber was trapped."
Research shows benefits of poverty simulation for university students

March 25, 2011 | Denise Horton

Athens, Ga. - An article by two University of Georgia researchers in the latest issue of the Journal of Poverty demonstrates that students participating in a simulation "soften their attitudes" regarding those who live in poverty.

Sharon Y. Nickols, the Janette McGarity Barber Distinguished Professor in the College of Family and Consumer Sciences, and Robb Nielsen, an assistant professor in the college, conducted both a qualitative and a quantitative study to determine whether students developed "social empathy" after participating in a two-and-a-half-hour simulation titled "Welcome to the State of Poverty."

During the simulation, students in Nickols' course on managing family resources are clustered into various family groups: two parents and two children; an older woman living alone; a single mother with two children; and a cohabiting couple, for example. Faculty members and other volunteers play the roles of community members, such as the town banker, the pawn shop owner and a social services employee. During the course of the simulation, the participants must accomplish a variety of tasks, including buying groceries, paying their bills and caring for both toddlers and aging parents, while subsisting on low wages and coping with other issues, such as being unable to speak English. During each 15-minute "month," new situations are randomly interjected. In some cases, these are helpful events, such as an unemployed parent receiving a job. In other cases, the events add to the families' difficulties, such as a family without health insurance facing illness.

The simulation, which is led by Cooperative Extension Multicultural Specialist Sharon Gibson, has been used for many years with a variety of community leaders to help them realize the complexities of poverty, but the study by Nickols and Nielsen is apparently the first to measure its impact on college students.

In conducting their study, Nickols and Nielsen used two ways of measuring students' attitudes: a pre- and post-test, and a reflective paper written after the simulation. What they found, according to Nielsen, was that the students were better able to identify with the experiences and reactions of those in adverse or difficult situations.

"It wasn't a dramatic change, but we didn't expect a dramatic change," he said. "These students started relatively empathetic and became more empathetic."

Among the changes, participants in the simulation shifted their opinions about whether people who are poor attempt to get out of poverty; whether they attempt to save money; and whether they'd rather work than be on welfare. In addition, their views on whether the poor have equal access to health care, and whether the government does enough to help those who are poor, also shifted. They gained a better understanding of the fact that there are more children than adults living in poverty.

In looking at the reflective papers the students wrote a week after the simulation, the researchers found that 65 of the 75 students who wrote papers described themselves as having gained greater insight into the lives of the poor as a result of the simulation. Among the remaining students, seven reported no change in their opinions (in some cases, they stated they were already empathetic to the poor), and the responses of three students were ambiguous.
"I began to understand and realize that it's not always a person's fault for being in a poverty-stricken lifestyle," wrote one student. "Just sitting in an environment of failure makes your own drive to succeed that much harder." Another student was surprised by the difficulty of assessing social services: "I knew very little about TANF (Temporary Assistance for Needy Families). I cannot imagine that everyone that is in need of help knows all about the programs available to them." One finding the authors hadn't anticipated, based on previous studies that examined empathy, was the stress the participants felt as they inhabited the roles of those living in poverty. "The stress...was brought on entirely by my family's financial insecurity," a student said. "I had little time to do anything other than go to work, run errands and pay the bills; I barely saw my children or husband and never had the chance to relax." "Getting groceries, applying for TANF and food stamps and going to the QuickCash all took so long to get accomplished," wrote another. "I think that many people in poverty would feel like they were on a treadmill, not really getting anywhere." "Much of what students learn in the family resources class emphasizes the breadth of resources that are available, including time, space, and family and community support, in addition to the monetary and material goods we frequently think of," Nickols said. "Part of what this simulation demonstrates is what happens when you're missing a number of those resources." Filed under: Culture / Living
UGA researcher developing new vaccine to fight resurging mumps virus

June 13, 2012

Athens, Ga. - Mumps may seem like a disease of a bygone era to many people in the U.S. who, thanks to immunization programs, have been spared the fever, aches and characteristic swollen jawline of the once common viral infection. Biao He, a University of Georgia professor of infectious diseases and a Georgia Research Alliance distinguished investigator in the College of Veterinary Medicine, worries that a new strain of the virus is spreading, and that it could lead to the widespread reintroduction of mumps. Now, thanks in part to a $1.8 million grant from the National Institutes of Health, He and his team are working on a new vaccine to stop it.

Although not typically a life-threatening disease, mumps can lead to serious health problems such as viral meningitis, hearing loss and pancreatitis, and it can cause miscarriage during early pregnancy. Vaccinations diminished the number of cases dramatically, and at one point it appeared that the U.S. was on pace to eradicate the disease. But two large outbreaks of the virus in 2006 and 2010, involving thousands of confirmed cases in the Midwest and Northeast, put the hope of eradication on hold.

He is concerned that the current vaccine, which has been in use since 1967, may be showing signs of weakness. "The virus is always evolving and mutating, and new viruses will emerge," He said. "It's only a matter of time until the old vaccine we have doesn't work."

The current vaccine is commonly called the Jeryl Lynn strain and is named after the daughter of its inventor, Maurice Hilleman. It is based on a specific genotype of the mumps virus called genotype A. However, the 2006 and 2010 mumps outbreaks were caused by another strain, genotype G. Even more troubling is that most of the people who contracted mumps during the 2006 and 2010 outbreaks had received the recommended two-dose vaccination in their early childhood, meaning that the virus was spreading even among the vaccinated population.

"The question is: With this new genotype virus emerging in the vaccinated population, what do you do about it?" He said.

Some have suggested administering a third Jeryl Lynn vaccine to boost immunity later in life, but it is unclear if that approach would be successful. He suggests that modern scientific techniques have made the creation of some vaccines much easier, so producing a new mumps vaccine may be the most effective method of controlling the emerging threat.

"In the past few years, we have taken advantage of genetic engineering, and my lab is particularly good at engineering viruses," He said. "We can take a virus, look at its genetic sequence, take bits and pieces away and generate a new virus with less virulence that will work as a vaccine."

Before the advent of genetic engineering, the process of creating a vaccine could be intensely laborious, as researchers would have to pass the virus through many generations of reproduction until they found a naturally occurring weakened virus. This process can take a long time, and there is little guarantee that the weakened virus will work as a vaccine. Genetic engineering allows He's lab to produce an effective and safe vaccine much more quickly.

Vaccine safety became a topic of much discussion after British medical researcher Andrew Wakefield suggested that there was a link between the measles, mumps and rubella vaccine and autism.
However, his claims were found to be fraudulent, and Wakefield was barred from practicing medicine in the United Kingdom. Much of the fallout from the Wakefield case remains, and some are still hesitant to have their children vaccinated, but He is insistent that administering vaccines to children is the safe and responsible thing to do.

"The No. 1 issue for us in making a pediatric vaccine is safety," He said. "So far our testing suggests we are on the right track."

Once He and his laboratory have devised a safe, reliable method to create vaccines for genotype G, they can apply that knowledge to rapidly produce vaccines for the other 12 mumps genotypes currently circulating in populations throughout the world.

Health professionals were able to contain the outbreaks of 2006 and 2010, but He thinks that the large global population and the ease with which people move from one location to another make humankind vulnerable to rapid disease spread. "It's almost like a small fire; if it stays small, we can put it out," He said. "But if conditions are right, and the wind begins to blow, the fire can take over."

Research reported in this publication was supported by the National Institutes of Health under award number 1R01AI097368-01A1.
Cool Crop Circles

Unseasonably warm and wet spring weather has seen many summer flowers appearing earlier than usual and, surprisingly for so early in the year as things normally go, has sparked the crop circle creationists into early action.

Last year, quite a complex and elaborate 100ft diameter circle appeared overnight in a field of oil seed rape near Silbury Hill, Wiltshire, causing a bit of a stir: a fabulous floral creation of six interlocking 'petal' like crescent shapes, the very first proper design of the season, according to expert Lucy Pringle. From Petersfield in Hampshire, Lucy is a founder member of the Centre for Crop Circle Studies and widely known as an international authority on crop circles, having carried out research over several years into both the physiological and psychological effects on those visiting such installations. Her research has revealed that there are measurable changes to hormone levels and brain activity in humans after coming into the vicinity of these creations, which have in past years included triangles, birds, complex 3-D geometric shapes, as well as hidden mathematical codes, such as the one found in 2008 near Wroughton, Wiltshire, thought to represent the first ten digits of the number pi.

A massive crop circle 200ft across appeared overnight close to that age-old topic of conversation amongst scholars, Stonehenge, long thought of as a hot spot for this bizarre practice; Silbury Hill, the tallest prehistoric man-made mound in Europe, is an equally obvious focal point.

Whilst there are indeed those who believe crop circles an entirely man-made phenomenon, others believe them caused by the magnetic field of the planet, while those out on the periphery think them the work of extra-terrestrial beings trying to communicate. Exactly how crop circles are created is still a mystery in many ways, and enthusiasts argue that not enough night hours exist in summer to allow humans to complete the complex creations.

Whatever the truth is, this rash of environmental art, of sorts, which gives farmers cause to feel frustration at the mindless destruction of good crop plants, is likely always to be a feature of the summer months. Whether or not somebody ever manages to establish just how they are created remains to be seen, but I personally would not bet against the E.T. idea. The truth, as they say, is out there.
Graves are about identity, that we are here – that we exist. Here lie one hundred of our ancestors in unmarked graves. Although a free people, they were taken to Wybalenna to help establish the ‘friendly mission’, with a promise of return to their homelands. It soon became a ‘death camp’. They were betrayed, abused and left to die. Some of our old people died not only at the hands of the soldiers but of dispossession of land and broken hearts. In this field were once one hundred crosses that marked these graves. The stolen plaque read: To commemorate approximately 100 Aborigines buried in the vicinity of Wybalenna 1833–1847 erected by the Junior Farmers of Flinders Island. Phyllis Pitchford, 1994
For too many New Jerseyans, addiction begins in the medicine cabinet. The New Jersey Division of Consumer Affairs has developed Project Medicine Drop as an important component of its effort to halt the abuse and diversion of prescription drugs. It allows consumers to dispose of unused and expired medications anonymously, seven days a week, 365 days a year, at “prescription drug drop boxes” located within the headquarters of participating police departments.

Each Project Medicine Drop box is installed indoors, affixed to the floor or wall in a secure area within police department headquarters, within view of law enforcement officers, in an area to which members of the public may be admitted to dispose of their unused medications. The boxes' prominent “Project Medicine Drop” logos make them highly visible and recognizable.

This initiative builds on the success of the U.S. Drug Enforcement Administration's National Take Back Initiative and the American Medicine Chest Challenge, which is sponsored in New Jersey by the DEA, the Partnership for a Drug Free New Jersey, and the Sheriffs' Association of New Jersey. Both programs provide single-day opportunities to drop off unused medications at pre-identified, secure locations. Project Medicine Drop provides the opportunity to discard unused prescription medications every day throughout the year.

The participating police agencies maintain custody of the deposited drugs and dispose of them according to their normal procedures for the custody and destruction of controlled dangerous substances. They report the quantity of discarded drugs to the Division of Consumer Affairs on a quarterly basis. The Division plans to expand the program in 2012 to include police departments in each of New Jersey's 21 counties.

The facts and statistics about prescription drug abuse are staggering:
- Every day, 40 Americans die from an overdose caused by prescription painkiller abuse, according to the U.S. Centers for Disease Control and Prevention. Overdoses of opioid prescription drugs now kill more people in the U.S. than heroin and cocaine combined.
- Two in five teenagers mistakenly believe prescription drugs are "much safer" than illegal drugs, according to the DEA, and three in 10 teens mistakenly believe prescription painkillers are not addictive.
- In the United States, 2,500 youths every day take a prescription pain reliever for the purpose of getting high for the very first time, according to the Office of National Drug Control Policy.
- The U.S. Drug Enforcement Administration reports that prescription drugs, including opioids and antidepressants, are responsible for more overdose deaths than "street drugs" such as cocaine, heroin, and methamphetamines.
- The number of American teenagers and adults who abuse prescription drugs is greater than the number who use cocaine, hallucinogens, and heroin combined, according to the 2009 National Survey on Drug Use and Health, compiled by the U.S. Department of Health and Human Services.
- In June 2011, the New Jersey State Commission of Investigation reported that a growing number of young people are abusing prescription drugs, and noted a significant trend in which the practice has led to increases not only in the number of young people addicted to painkillers, but also in the number of young people using heroin.
The earliest record of human activity in northern Europe

Parfitt, Simon A.; Barendregt, Rene W.; Breda, Marzia; Candy, Ian; Collins, Matthew J.; Coope, G. Russell; Durbridge, Paul; Field, Mike H.; Lee, Jonathan R.; Lister, Adrian M.; Mutch, Robert; Penkman, Kirsty E.H.; Preece, Richard C.; Rose, James; Stringer, Christopher B.; Symmons, Robert; Whittaker, John E.; Wymer, John J.; Stuart, Anthony J. 2005. The earliest record of human activity in northern Europe. Nature, 438, 1008-1012. doi:10.1038/nature04227

The colonization of Eurasia by early humans is a key event after their spread out of Africa, but the nature, timing and ecological context of the earliest human occupation of northwest Europe is uncertain and has been the subject of intense debate. The southern Caucasus was occupied about 1.8 million years (Myr) ago, whereas human remains from Atapuerca-TD6, Spain (more than 780 kyr ago) and Ceprano, Italy (about 800 kyr ago) show that early Homo had dispersed to the Mediterranean hinterland before the Brunhes–Matuyama magnetic polarity reversal (780 kyr ago). Until now, the earliest uncontested artefacts from northern Europe were much younger, suggesting that humans were unable to colonize northern latitudes until about 500 kyr ago. Here we report flint artefacts from the Cromer Forest-bed Formation at Pakefield (52° N), Suffolk, UK, from an interglacial sequence yielding a diverse range of plant and animal fossils. Event and lithostratigraphy, palaeomagnetism, amino acid geochronology and biostratigraphy indicate that the artefacts date to the early part of the Brunhes Chron (about 700 kyr ago) and thus represent the earliest unequivocal evidence for human presence north of the Alps.
Warm Springs: A Classic Boondoggle?

By Harold Gilliam

If President Carter wants to bolster his anti-inflation program by saving the government at least $200 million, I have a suggestion for him. The planned Warm Springs dam, on a tributary of the Russian river in Sonoma County, is a classic boondoggle, a pork barrel item that was somehow overlooked when the President compiled his nationwide "hit list" of water projects that he boldly said he would veto—and did.

Even in the Corps of Engineers' optimistic estimate of the dam's benefits, Warm Springs is a marginal project, with a benefit-cost ratio of 1.1. That means for every dollar of cost, there will be benefits of $1.10. If the benefits were a shade less, the dam could not be financially justified as a federal project. Let's look at the principal benefits: recreation and water supply.

The federal flood-control program was born 40 years ago in an effort to do a job that no state could handle alone — prevent the kind of disastrous floods that were occurring along the Mississippi, with tragic loss of life and property. What happens on the Russian river is quite different. Floods there come not with a rushing wall of water that wipes out whole communities but with a gradual rising of the river until it laps at the doors of buildings in resort communities like Guerneville and sometimes inundates basements and ground floors. Traditionally, after this kind of flood, the owners shovel out the mud and go back to work. It's a nuisance, but it's been happening on the Russian river for generations, and no one buying property there has any excuse for being ignorant of the river's habit of rising in heavy rains. So we may wonder by what right property owners now demand that the federal government bail them out.

There are ways of flood-proofing buildings that would accomplish much the same results the dam would provide — lowering the high-water level by two or three feet. Along the river and its tributaries there are places where the water is cutting its banks into agricultural land. Riprap or other channel work could curtail the bank erosion without the dam.

Some of the agricultural fields along the river — mostly in grapes now — are flooded by high water, which deposits layers of silt in the vineyards. This process is precisely what caused the land to be productive in the first place, under the natural cycles of soil replenishment. Stopping the process may be a convenience to growers, but in the long run it would amount to a death sentence on the land that is produced and sustained by river overflow. Why should the federal government be subsidizing destruction of the soil's fertility?

But even the protection that the dam would provide to existing buildings and farmlands would not add up to enough dollar benefits to pay for the dam's flood-control cost. That cost can only be met by an ingenious accounting gimmick: "benefits" to buildings and other developments that do not exist — but that might exist if the dam were built. If the dam lowers the flood crest, certain lands that otherwise would be in the flood zone could be used for building. So flood protection to those ghostly structures is counted as a benefit in order to justify the cost of the dam. Why should the federal government be subsidizing development in the flood plain? Why should federal taxpayers be giving handouts to those lucky landowners?

Even all this remarkable accounting still would not pay for the dam. To help justify the cost, the Corps counts recreational benefits.
Boaters and other users of "Lake Sonoma," the reservoir behind the dam, would eventually spend more than $1 million a year there. By some puzzling financial legerdemain these millions are counted as part of the benefits supplied by the dam. But who gets the benefits? If a typical family spends, say, $100 a year at Lake Sonoma — on boats and gasoline and hot dogs — that's $100 that it won't spend someplace else, such as the Bay Area. Why should the federal taxpayer be subsidizing a diversion of recreation expenditures from the Bay Area to Lake Sonoma? What possible federal benefit is involved?

The dam would also supply water for use in Sonoma county and adjoining areas. The water would be paid for by the water user. Theoretically. Actually the payment would extend over a 50 to 60 year period. But the dam has to be paid for when it is built, not some time in the next century. So Uncle Sam in effect lends the water users their share of the dam's cost, to be paid back over that 50-60 year period. But because the dam was originally authorized in 1967, at a time when interest rates were only 3 1/8 per cent, the water users would pay at that bargain basement rate. The federal taxpayers would have to make up the difference between that and the current market interest rate, which will be three or four times that much. And that difference, over a half-century period, could amount to the biggest subsidy of all (a rough sketch of this arithmetic appears after the article).

If Sonoma county wants to double or triple or quadruple its population so that the water supply available from Warm Springs will be used and paid for, if the people of Santa Rosa want that city to become another congested San Jose, sprawling out into the farmlands, I suppose you could say that's their business. But why should the rest of us — the federal taxpayers — pay the bill? Sonoma taxpayers will be paying for the dam on top of their other taxes for generations to come. The result will predictably be irresistible pressure on all local agencies to promote the fastest possible urbanization and industrialization of the county in order to get a broader tax base to pay the bills for the dam. Growth under pressure.

The dam will create jobs, yes. But if we are to rely on dams for employment, when Warm Springs is finished it will be necessary to build another big dam, and yet another when that is finished, ad infinitum.

There is room for argument about how much water will be needed and where it will come from. Without Warm Springs, water could be available from wells (the Russian river drainage, particularly the Santa Rosa plain, is rich in ground water), from increasing the capacity of Coyote dam on the upper Russian, and from conservation and waste water recycling. How much would be needed from these or other sources—including Warm Springs—depends on what assumptions you make about population growth and about more efficient use of existing water supplies.

The Warm Springs Task Force, a Sonoma county group opposed to the dam, has a court suit maintaining that the Corps' environmental impact statement does not adequately consider alternatives to the dam, earthquake risks and other matters. The 9th Circuit Court of Appeals heard the case last spring but for some inscrutable reason has not yet spoken. Meantime, the preliminary work goes on, millions have already been spent, and construction on the main dam is about ready to begin.

The Corps has estimated that the water supply portion of the dam, to be paid for by Sonoma county taxpayers, would amount to $60 million.
But the Task Force comes up with another figure. Adding the interest to be paid over the life of the project, plus an inflation factor, plus a cost-overrun figure, the Task Force calculates that the dam will cost the Sonoma county taxpayers $230 million -- almost $1000 added to the property tax for every man, woman and child now living in the county.

No matter whose figures are accepted, Warm Springs seems an inordinately expensive way for federal taxpayers, as well as for the county, to subsidize urban sprawl, riverbank landowners, subdivisions on farmland, reservoir recreation, and future construction in flood-prone areas.

Photograph caption: SITE OF THE WARM SPRINGS DAM PROJECT. Even the Engineers consider it a marginal project with a benefit-cost ratio of 1.1

S.F. Sunday Examiner & Chronicle
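A note on the interest arithmetic in Gilliam's piece, sketched in rough form below. The $60 million principal and the 3 1/8 per cent authorized rate come from the article; the 10 per cent market rate, the 50-year level-payment schedule and the calculation itself are illustrative assumptions of mine, not figures from the op-ed.

    # Rough sketch of the interest subsidy described above (assumptions:
    # $60M principal, 50 level annual payments, the 1967-authorized rate
    # of 3 1/8% vs. an assumed 10% market cost of money).
    def annual_payment(principal, rate, years):
        # standard level-payment (annuity) formula
        return principal * rate / (1 - (1 + rate) ** -years)

    principal, years = 60_000_000, 50
    users_pay = annual_payment(principal, 0.03125, years) * years
    market_cost = annual_payment(principal, 0.10, years) * years
    print(f"repaid by water users: ${users_pay:,.0f}")
    print(f"cost at market rates:  ${market_cost:,.0f}")
    print(f"implicit subsidy:      ${market_cost - users_pay:,.0f}")

On these assumed numbers the gap runs well past $100 million, which is why the op-ed can plausibly call it "the biggest subsidy of all."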
As part of the wrap-up for this course, we’re looking back at some of the first things we wrote in June (Introduction to WebTools, Setting the Stage, and Guiding Principles for Tech Use in the Classroom). I don’t know that my thinking/philosophy on using technology has changed dramatically in the past two and a half months. I was on-board with tech use in the classroom with the goal of improved learning and connection, and I was excited to try out some new tools and learn from a new and diverse group of educators. I still am. I do have a clearer picture of some specific tools that I’d like to implement this year in my classes, and I am happy to have made many new connections in my continually-expanding PLN. What has changed for me is a renewed focus on the idea that the best web tools allow us to do something completely new. I find myself coming back to three points from Jeff Utecht’s article “Evaluating Technology Use in the Classroom”: - Does the technology allow students to learn from people they never would have been able to without it? - Does the technology allow students to interact with information in a way that is meaningful and could not have happened otherwise? - Does the technology allow students to create and share their knowledge with an audience they never would have had access to without technology? [my emphasis] I’ve been focused primarily on the second bullet point (which isn’t horrible). If that’s all we do with new technology, it still represents movement in the right direction. I’ve made some progress on the third point (through student blogging), but I don’t think I’ve tapped into the full potential there. My students were very excited to keep track of their blog’s Page Views counter, and they broadened their readership by putting their new biology blog posts up on Facebook. (Which, come to think of it, is actually a pretty significant step. I wonder if they were sharing any of their history essays, Spanish translations, or math problem sets on FB?) But I want to try to find some ways to have them interact with people outside of our classroom, outside of our state and country, if possible. That’s a new goal of mine for the year. Lastly, we should recognize that we’re going to ask our students to jump into this whole using tech in the classroom in new ways thing along with us. They’ll get their own crash courses in web tools in the coming year (in many of our classes), and they’ll be fine. They’ll learn the content (most of it, hopefully), and there will be some tools they like better than others (just like us). And all we can hope for at the end of the day is that they’re willing to try new things, that they work hard, and that they’re curious. It is science, right? What’s not to be curious about? In the process, hopefully they’ll understand more about themselves as learners. And as many have said before, the tech is not the point, it’s just a tool, but if it improves learning then we’re moving in the right direction.
Healthy diet means better mental health (Australia)

Australian researchers have shown that a nutritious diet has a significant, positive effect on mental health and can even aid in the prevention and treatment of depression and anxiety.

Researchers from the University of Melbourne have been studying the impact of teenage diets on depressive symptoms since 2005. Over 2000 study participants, aged 11 to 18 years, were sampled from 2005 to 2006, and again in 2007 to 2008. Diet quality and mental health baselines were established at the beginning of the study and followed up throughout the project. After adjustments for sociodemographic variables and exercise, it was found that a good quality diet predicted better mental health than any other factor. Furthermore, dietary changes matched mental health states during the investigation. In other words, improved diet was reflected in improved mental health, and a poor quality diet filled with snacks and highly processed foods was associated with a deterioration in mental health.

These results corroborate the 2010 findings of the same research team, wherein the diet and mental health outcomes of Australian women across a wide range of ages were studied. In that investigation, researchers found that women who ate a diet of vegetables, fruit, whole grains, high quality meat and fish reduced their risk of depression, dysthymia and anxiety by more than 30 per cent. On the other hand, women eating a diet with high quantities of refined and processed foods as well as saturated fats had a 50 per cent increased likelihood of developing depression.

To read the online study, “A Prospective Study of Diet Quality and Mental Health in Adolescents”, go to www.plosone.org.
Water supply and water quality problems facing the City of El Paso and Ciudad Juarez are complex and interrelated. The twin cities share the water resources of the Hueco Bolson, a Tertiary and Quaternary basin fill aquifer that spans the international border. The binational metroplex is located at the junction between the western edge of Texas and the northernmost part of Chihuahua, Mexico. Over-pumping of the Hueco Bolson aquifer has resulted in drawdown of the water table, encroachment of brackish groundwater, and the early retirement of wells. In response to these issues, Mexican and American universities formed a partnership to study the surface and ground-water resources of the El Paso/Juarez area. Governmental agencies are participating in the project by providing existing data, access to water wells, and other support services. The research team is applying a suite of isotopic tracers to provide an understanding of the spatial dynamics of the aquifers by tracing water from areas of recharge to regions of discharge. The team is also using a variety of geochemical and isotopic tracers to answer questions about increasing salinity in the developed parts of the aquifer. With an increased understanding of the flowpaths of the aquifer systems, the team is addressing stream-aquifer interactions between the groundwater systems and the Rio Grande. By combining an understanding of isotopic and geochemical changes in the river system with the information about the groundwater systems, the team is calculating fluxes of water and solutes from the groundwater system to the river system. Finally, this geochemical and isotopic information is being used by the municipal partners to constrain physical and management models of groundwater to utilize the fresh and saline water resources of the Hueco Bolson more effectively.
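The flux calculation mentioned at the end of the abstract is typically done as a conservative-tracer mass balance. Below is a minimal two-end-member mixing sketch of that idea; the tracer values and discharge are invented for illustration and nothing here comes from the study itself.

    # Minimal two-end-member mixing sketch (illustrative values only).
    # For a conservative tracer, downstream river water is treated as a
    # mix of upstream river water and groundwater inflow.
    def groundwater_fraction(tracer_mix, tracer_river, tracer_gw):
        # fraction of downstream flow contributed by groundwater
        return (tracer_mix - tracer_river) / (tracer_gw - tracer_river)

    river_q = 10.0  # downstream discharge in m^3/s (assumed)
    f = groundwater_fraction(tracer_mix=-8.0,   # e.g. delta-18O, per mil
                             tracer_river=-10.0,
                             tracer_gw=-6.0)
    print(f"groundwater fraction: {f:.2f}")
    print(f"groundwater flux:     {f * river_q:.2f} m^3/s")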
“Soon he came upon a peasant singing and scything. ‘You there, varlet,’ said Shrek. ‘Why so blithe?’” – William Steig, “Shrek”

“Shrek” inspired me to let the words fly around my children. Nope, I’m not talking about the movie “Shrek”, but about William Steig’s wonderful picture book “Shrek.” And I’m not talking about using curse words around my children, but about using a more sophisticated vocabulary in ordinary conversation.

The vocabulary in “Shrek” is extravagant. It’s so baroque that I did some research to find out why Steig had included phrases like “shady copse,” “churlish knave,” “rosy wens,” and “fusty fens.” Had he picked words at random from a dictionary and challenged himself to work them into his story? Did he have pet underused words that he was determined to bring back into favor? (I myself have waged a losing campaign to popularize “chirk.”) Or had it been the product of a bet? Dr. Seuss wrote his masterpiece “Green Eggs and Ham” using just 50 different words, after his publisher, Bennett Cerf, bet him $50 that he couldn’t compose a book with such a limited vocabulary. (The words? A, am, and, anywhere, are, be, boat, box, car, could, dark, do, eat, eggs, fox, goat, good, green, ham, here, house, I, if, in, let, like, may, me, mouse, not, on, or, rain, Sam, say, see, so, thank, that, the, them, there, they, train, tree, try, will, with, would, you.)

But while I couldn’t find an explanation for Steig’s flamboyant vocabulary, I was inspired by his example — and by my daughters’ unquestioning acceptance of his range — to use more sophisticated vocabulary when talking to children. It made me realize that I’d unconsciously been simplifying my language, even though my daughters were perfectly able to handle words like “nacreous,” “nonplussed,” “ambivalent” and “palanquin.” It’s a Secret of Adulthood: If we can express ourselves precisely, we can think precisely, and I want my children to be able to think as precisely as possible. Plus, it was hilarious to hear a 2-year-old use the word “unwieldy.”

Do you tailor your vocabulary to your children’s age? (Special case: do you use curse words in front of them?)
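A side note on Cerf's bet: if you're curious whether the quoted list really comes to 50 distinct words, a throwaway snippet (mine, purely for fun) can tally it.

    # Tally the Green Eggs and Ham vocabulary quoted above.
    words = """a am and anywhere are be boat box car could dark do eat
    eggs fox goat good green ham here house i if in let like may me
    mouse not on or rain sam say see so thank that the them there they
    train tree try will with would you""".split()
    assert len(set(words)) == 50
    print(len(set(words)), "distinct words")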
First, the truth is that no one really knows why some products succeed and others don’t. As the purchasing of goods in the market is done by multiple individuals whose decisions are often personal and multi-factorial, direct observation and dissection of behavior is nearly impossible. There are some theories, though (largely from Van den Bulte, 2007). So that I don’t forget, and purely for my own benefit, here’s a breakdown:

1. People who buy early are different from those who buy late. For example, the people who sit in the cold waiting to buy the new iPhone on the day it is released are vastly different from those who wait until the price drops 6 months later. It’s hard to tell who’s smarter. Me, I like heat. (See Rogers, 2003.)

2. There are market leaders that other people like to follow. People buy products because they want to imitate others, who might mostly be those who pick up on fads early, i.e. there are “innovators” and “imitators.” People who bought the iPhone 1 (what did that look like?) early showed it to their friends, who bought one, too. (See Bass, 1969; a small simulation of this model appears after this list.)

3. People buy products autonomously, because of influence from above, or because of peer influence. Some people buy stuff caring little for anyone else. Some people buy stuff because an authority said it was a good idea. Some people buy stuff because their friends do. (See Riesman, 1950 and Schor, 1998.)

4. Purchase decisions depend on social status. Some people buy stuff because they want to emulate those higher on the social ladder than they are. Similarly, those on top buy new stuff because they don’t want to fall behind or be unseated as high profile consumers. Some people tend to want to buy slightly more car than they can afford, so that they can feel more like those with more money than they have. The stratified nature of society thus perpetuates a system of striving to consume beyond one’s means. This desire is, of course, endless. (See Simmel, 1971 and Burt, 1987.)

5. Marketing is a two-step process. Ads are only effective at influencing the behavior of leaders, who in turn influence their followers. I call this the “Economist effect.” Only a few sad people (such as myself) read the British magazine, the Economist. When the Economist endorses a Presidential candidate, it would seemingly have little effect, since only about .0028% of the American populace is paying attention. However, the readership of the Economist consists of educated and well positioned people who have the capacity to influence large numbers of people who don’t read the Economist. On numbers alone, an endorsement from that newspaper would seem meaningless, but as a conduit to the less engaged, the effect could be considerable. (Fortunately, though, no one cares what I think.) (See Lazarsfeld, 1944.)

6. There are risks to adopting new products, fashions, etc. Very, very poor people are very similar to very, very wealthy people in that they have nothing to lose by adopting new products or behaviors. Ever think about the crazy stuff that some homeless people wear? Is it any crazier than high fashion? Think of Juggalos vs. Comme des Garçons. (I don’t know anything about fashion; that was all I could come up with.) People in the middle, however, have a lot to lose by dressing crazy, so they end up really boring. (See Homans, 1961.)
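Since point 2 is the one with a crisp mathematical form, here is a minimal simulation sketch of the Bass model. Everything in it is my own illustration: the market size of 1,000,000 and the coefficients p = 0.03 and q = 0.38 are assumed, textbook-flavored values, not numbers taken from Bass (1969).

    # Minimal Bass diffusion sketch: "innovators" adopt at rate p on
    # their own; "imitators" adopt at rate q in proportion to the share
    # of the market that has already adopted.
    def bass_adopters(m, p, q, periods):
        cumulative = 0.0
        new_per_period = []
        for _ in range(periods):
            new = (p + q * cumulative / m) * (m - cumulative)
            new_per_period.append(new)
            cumulative += new
        return new_per_period

    # Assumed values: 1,000,000 potential buyers, p = 0.03, q = 0.38.
    for t, n in enumerate(bass_adopters(1_000_000, 0.03, 0.38, 10), 1):
        print(f"period {t}: {n:,.0f} new adopters")

The characteristic output is the familiar S-curve: a slow innovator-driven start, a steep imitation-driven middle, and saturation as the untapped market shrinks.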
AI and Society 9 (1):29-42 (1995)

Abstract: This paper first illustrates what kind of ethical issues arise from the new information, communication and automation technology. It then argues that we may embrace the popular idea that technology is ethically neutral or even ambivalent without having to close our eyes to those issues and, in fact, that the ethical neutrality of technology makes them all the more urgent. Finally, it suggests that the widely ignored fact of normal responsible behaviour offers a new and fruitful starting point for any future thinking about such issues.
The easiest way is to buy yourself a 3D camera. This option has an excellent advantage: you can see the 3D effect while you compose and when reviewing your images, which lets you know whether the shot you took worked to give the 3D impression or not.

Otherwise you have to take 2 nearly identical photos with slightly different viewpoints. There are three methods to do this:

- Take a photo, move the camera and take a second photo, keeping everything constant: focus, DOF, exposure, ISO, white-balance. This is easier to do with a camera with manual controls, although I suspect you can use the Panorama Assist mode of compact cameras too. The key is to move the camera along a level path a relatively small distance. The ideal distance between the two shots depends on focal-length, focus distance and desired perspective (a rough rule-of-thumb calculation is sketched at the end of this answer).
- Take two photos simultaneously: Get two identical cameras and set everything including focus distance and focal-length to exactly the same settings. Triggering them simultaneously using an IR remote is ideal. You can get away with mechanically triggering them if there are no movements in the scene. You can buy a dual tripod plate which can hold two cameras to help with this.
- Use an anamorphic 3D lens: These lenses capture two images side-by-side on your sensor. You need special software (supplied with cameras that support this lens) to transform the resulting image into an actual 3D image.

The distance between the two shots has to be such that the objects in the plane of focus appear slightly different, but not too much. There is no single ideal distance: the further away the subject you are trying to focus on appears, the wider apart the pictures must be taken. This should take into consideration actual distance and focal-length, so a longer focal-length requires less movement between the shots.

You can view these images, which are actually stereoscopic images, by various means:

- Many new HDTVs support 3D HDMI input, which you can view using special glasses (not red-blue). Some displays can also show the 3D effect without viewing glasses as long as you are standing within a certain distance and angle from the screen.
- You can have your images on paper using lenticular printing services. See this question.
- Get a 3D digital photo frame.

The software you need depends on your viewing device. If you have a 3D display device you have to make sure which format it uses. So far, the MPO format is most popular, although Stereo JPEG (JPS) images exist. Fuji has software to convert between MPO and pairs of JPEGs. A number of free utilities exist but I have not much experience with them.
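As a rough illustration of the "no single ideal distance" point, here is a sketch based on the common 1/30 rule of thumb from stereo photography (camera separation ≈ distance to the nearest subject divided by 30). The focal-length scaling against a 50mm baseline is my own simplifying assumption, not an established formula.

    # Stereo-base sketch: 1/30 rule plus an assumed focal-length scaling.
    def stereo_base_mm(nearest_subject_mm, focal_length_mm=50.0):
        base = nearest_subject_mm / 30.0   # classic 1/30 rule of thumb
        base *= 50.0 / focal_length_mm     # assumed: longer lens -> smaller base
        return base

    # Subject 3 m away at 50mm -> about 100 mm between the two shots.
    print(f"{stereo_base_mm(3000):.0f} mm")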
After Higgs Boson, scientists prepare for next quantum leap
February 13th, 2013 in Physics / General Physics

Caption: A graphic distributed on July 4, 2012 by CERN in Geneva shows a representation of traces of a proton-proton collision measured in the search for the Higgs boson.

Seven months after its scientists made a landmark discovery that may explain the mysteries of mass, Europe's top physics lab will take a break from smashing invisible particles to recharge for the next leap into the unknown.

From Thursday, the cutting-edge facilities at the European Organisation for Nuclear Research (CERN) will begin winding down, then go offline on Saturday for an 18-month upgrade. A vast underground lab straddling the border between France and Switzerland, CERN's Large Hadron Collider (LHC) was the scene of an extraordinary discovery announced in July 2012. Its scientists said they were 99.9 percent certain they had found the elusive Higgs Boson, an invisible particle without which, theorists say, humans and all the other joined-up atoms in the Universe would not exist.

The upgrade will boost the LHC's energy capacity, essential for CERN to confirm definitively that its boson is the Higgs, and allow it to probe new dimensions such as supersymmetry and dark matter. "The aim is to open the discovery domain," said Frederick Bordry, head of CERN's technology department. "We have what we think is the Higgs, and now we have all the theories about supersymmetry and so on. We need to increase the energy to look at more physics. It's about going into terra incognita (unknown territory)," he told AFP.

Theorised back in 1964, the boson, also known as the God Particle, carries the name of a British physicist, Peter Higgs. He calculated that a field of bosons could explain a nagging anomaly: why do some particles have mass while others, such as light, have none? That question was a gaping hole in the Standard Model of particle physics, a conceptual framework for understanding the nuts-and-bolts of the cosmos.

One idea is that the Higgs was born when the new Universe cooled after the Big Bang some 14 billion years ago. It is believed to act like a fork dipped in honey and held up in dusty air. Most of the dust particles interact with the honey, acquiring some of its mass to varying degrees, but a few slip through and do not acquire any. With mass comes gravity—and its pulling power brings particles together.

Supersymmetry, meanwhile, is the notion that each of the known particles in the Standard Model has a novel partner particle. This may, in turn, explain the existence of dark matter—a hypothetical construct that can only be perceived indirectly via its gravitational pull, yet is thought to make up around 25 percent of the Universe.

At a cost of 6.03 billion Swiss francs (4.9 billion euros, $6.56 billion), the LHC was constructed in a 26.6-kilometre (16.5-mile) circular tunnel originally occupied by its predecessor, the Large Electron-Positron collider (LEP). That had run in cycles of about seven months followed by a five-month shutdown, but the LHC, opened in 2008, has been pushed well beyond. "We've had full operations for three years, 2010, 2011 and 2012," said Bordry.
"Initially we thought we'd have the long shutdown in 2012, but in 2011, with some good results and with the perspective of discovering the boson, we pushed the long shutdown back by a year. But we said that in 2013 we must do it." Unlike the LEP, which was used to accelerate electrons or positrons, the LHC crashes together protons, which are part of the hadron family. "The game is about smashing the particles together to transform this energy into mass. With high energy, they are transformed into new particles and we observe these new particles and try to understand things," Bordry explained. "It's about recreating the first microsecond of the universe, the Big Bang. We are reproducing in a lab the conditions we had at the start of the Big Bang." Over the past three years, CERN has slammed protons together more than six million billion times. Five billion collisions yielded results deemed worthy of further research and data from only 400 threw up data that paved the road to the Higgs Boson. Despite the shutdown, CERN's researchers won't be taking a breather, as they must trawl through a vast mound of data. "I think a year from now, we'll have more information on the data accumulated over the past three years," said Bordry. "Maybe the conclusion will be that we need more data!" Last year, the LHC achieved a collision energy level of eight teraelectron volts, an energy measure used in particle physics—up from seven in 2011. After it comes back online in 2015, the goal is to take that level to 13 or even 14, with the LHC expected to run for three or four years before another shutdown. The net cost of the upgrade is expected to be up to 50 million Swiss francs. CERN's member states are European, but the prestigious organisation has global reach. India, Japan, Russia and the United States participate as observers. (c) 2013 AFP "After Higgs Boson, scientists prepare for next quantum leap." February 13th, 2013. http://phys.org/news/2013-02-higgs-boson-scientists-quantum.html
Proclus of Athens (*412–485 C.E.) was the most authoritative philosopher of late antiquity and played a crucial role in the transmission of Platonic philosophy from antiquity to the Middle Ages. For almost fifty years, he was head or ‘successor’ (diadochos, sc. of Plato) of the Platonic ‘Academy’ in Athens. Being an exceptionally productive writer, he composed commentaries on Aristotle, Euclid and Plato, systematic treatises in all disciplines of philosophy as it was at that time (metaphysics and theology, physics, astronomy, mathematics, ethics) and exegetical works on traditions of religious wisdom (Orphism and Chaldaean Oracles). Proclus had a lasting influence on the development of the late Neoplatonic schools not only in Athens, but also in Alexandria, where his student Ammonius became the head of the school.

In a culture dominated by Christianity, the Neoplatonic philosophers had to defend the superiority of the Hellenic traditions of wisdom. Continuing a movement that was inaugurated by Iamblichus (4th c.) and the charismatic figure of emperor Julian, and following the teaching of Syrianus, Proclus was eager to demonstrate the harmony of the ancient religious revelations (the mythologies of Homer and Hesiod, the Orphic theogonies and the Chaldaean Oracles) and to integrate them in the philosophical tradition of Pythagoras and Plato. Towards this end, his Platonic Theology offers a magisterial summa of pagan Hellenic theology. Probably the best starting point for the study of Proclus' philosophy is the Elements of Theology (with the masterly commentary by E.R. Dodds), which provide a systematic introduction into the Neoplatonic metaphysical system.

- 1. Life and Works
- 2. The Commentator of Plato
- 3. Philosophical views
- 4. Influence
- Academic Tools
- Other Internet Resources
- Related Entries

Since Proclus' extant works contain almost no evidence about his biography, we have to rely on the information transmitted by his direct pupil Marinus of Neapolis in the eulogy he devoted to his predecessor, Proclus or on Happiness. Moreover, some scattered remarks on Proclus and valuable information about the schools in Athens and Alexandria can be found in Damascius' Life of Isidorus (called by other scholars The Philosophical History). As with Porphyry's Life of Plotinus, both Marinus' and Damascius' works are biographies written by students praising extensively the achievements of their teachers both in doctrine and in philosophical life.

On Proclus' works see Beutler (1957), 190–208, Saffrey-Westerink (1968), lv–lx, Rosán (²2009), 266-274, and the overview given below (1.2). Although a large part of his numerous writings is lost, some major commentaries on Plato have survived (though incomplete) and some important systematic works. Moreover, later Neoplatonists such as Damascius, Olympiodorus, Simplicius, and Philoponus have conserved many extracts of lost works, but these fragments have never been collected.
Proclus or On Happiness sets out to prove that Proclus reached in his life the culmination of happiness (eudaimonia) and wisdom because he ascended the scale of all virtues, the natural, the ethical, the political, the purifying, the intellectual, and the so called theurgic virtues, the latter of which make humans ‘act with the gods.’ (The different virtues have been interpreted in various ways in the Neoplatonic tradition; ultimately they refer to different stages in the purification and ascent of the human soul, see Saffrey/Segonds 2001, lxix–c.) Proclus was born in Constantinople/Byzantium (now Istanbul) into a rich Lycian family in 412. Not long after his birth his parents returned to their hometown Xanthos in Lycia, a maritime area of what is now southwest Turkey. He began his education in Xanthos and moved from there to Alexandria (Egypt) to pursue the study of rhetoric in order to become a lawyer, as was his father. However, during a journey to Byzantium he discovered philosophy as his vocation. Back in Alexandria he studied Aristotle and mathematics. Marinus reports that the very gifted pupil easily learned all of Aristotle's logical writings by heart. In 430–431, 18 years old, Proclus moved to Athens, attracted by the fame of the Platonic School there. He studied for two years under the direction of Plutarch (of Athens; to be distinguished from the 1st-2nd c. philosopher/biographer), reading with him Plato's Phaedo and Aristotle's De anima. After Plutarch's death in 432, Syrianus became the head of the Academy. Proclus followed with him the usual curriculum of the school (going back to Iamblichus), reading first Aristotle's works and after that entering the ‘greater mysteries,’ the Platonic dialogues. Under Syrianus, Proclus also came into contact with the older traditions of wisdom such as the theology of the Orphics and the Chaldaean Oracles. Among Syrianus' lost works we find a treatise On the harmony of Orpheus, Pythagoras and Plato with the Chaldaean Oracles. As the Suda lexicon attributes a work with this title also to Proclus, it is not unlikely that he published Syrianus' treatise, adding comments of his own. Since Syrianus and Proclus worked intensively together for six years, Proclus was strongly influenced by his teacher. On many occasions Proclus praises the philosophical achievements of his teacher and he never criticizes him. Because of this, it is almost impossible to distinguish between Proclus' original contribution and what he adopted from Syrianus. After Syrianus' death (437), Proclus succeeded as head of the Athenian school, and he kept this position for almost fifty years until his death in 485. His tight schedule of the day, starting with a prayer to the sun at sunrise (repeated at noontime and at sunset), included lectures, reading seminars, discussions with students, and literary work of his own. Besides his philosophical activities, Marinus also portrays Proclus as an experienced practitioner of theurgy (Life of Proclus, § 28–29; on theurgy see below 3.6). The practice of these pagan rites could only be continued in the private sphere of the School's grounds. Though Proclus was in Athens a highly respected philosopher and had some Christian students, he had to be prudent to avoid anti-pagan reactions. Marinus tells that he had to go into exile for about one year to Lydia (in Asia) to avoid difficulties (Life of Proclus § 15). Marinus notes that Proclus was an extremely industrious writer, having an “unbounded love of work” (Life of Proclus § 22). 
Apart from an impressive teaching-load and several other commitments, Proclus wrote every day about 700 lines (about 20–25 pages). It is unlikely that Proclus published all of them. However that may be, from Proclus' extant works and the information about his lost works it emerges that he was a productive writer indeed. Roughly two thirds of Proclus' output is now lost and several works, especially his commentaries on Plato, have been transmitted in a mutilated form.

Among Proclus' surviving works we have five commentaries on Plato (on Alcibiades, Cratylus, Republic, Timaeus, and Parmenides), one commentary on Euclid, two manuals on physics and metaphysics respectively (Elements of Physics, Elements of Theology), an astronomical work (Hypotypôsis), three monographs (Tria opuscula) on providence, fate, and free will and the origin of evil, and the Platonic Theology, which offers an impressive summa of Plato's theology, as well as theological Hymns.

See the supplement on Proclus’ Works (the main extant works).

Some of his works have been completely lost, such as his commentaries on Aristotle (the Organon); of others only a few fragments remain.

See the supplement on Proclus’ Complete Works (extant, lost, and spurious).

It is difficult to establish a chronology of Proclus' works. The Platonic Theology is generally considered to be his last work. In writing the Theology Proclus heavily depends on his interpretation of the Parmenides and often refers to his commentary on this dialogue, which must have been finished some time before. We know from Marinus (Life of Proclus §13) that Proclus finished his Commentary on the Timaeus by the age of 27. However, it cannot be excluded that Proclus rewrote or modified it later. As the Alcibiades came at the beginning of the curriculum in the school, its commentary may also be an early work.

The Commentary on the Republic is not a proper commentary, but a collection of several essays on problems and sections in this dialogue. These essays may have been written at different times in Proclus' life and only later put together (by Proclus himself or by someone else). The Hypotypôsis (Exposition of Astronomical Hypotheses) was written in the year after Proclus' exile in Lydia, but we do not know when exactly that took place. The Tria opuscula all deal with similar topics, but they need not have been composed at the same time. There are plausible arguments to put the second treatise, On What Depends on Us, some years after the events forcing Proclus to go into exile. The first treatise, which in some parts depends very much on Plutarch (of Chaironea, 1st-2nd c. C.E.), could be set earlier in his career. It also contains a discussion on the nature of evil, which is much simpler than what we find in the treatise On the Existence of Evils, which is more sophisticated and probably was composed later.

Because of its introductory character, one may be inclined to consider the Elements of Physics as an early work. This has also been claimed for the Elements of Theology, which, however, shows all the sophistication of Proclus's mature thought. It may be possible that Proclus revised this text several times in his career.

The center of Proclus' extensive oeuvre is without doubt his exegesis of Plato, as is shown by the large commentaries he devoted to major dialogues. This Platonic focus is also evident in the composition of his systematic works.
The Platonic Theology offers a systematic exposition of theology based on an interpretation of all relevant sections on the gods and their attributes in Plato's dialogues, and in particular on the Parmenides, considered as the most theological of all dialogues. Proclus probably commented on all dialogues included in the curriculum of the school since Iamblichus. In addition Proclus wrote the commentary on the Republic mentioned above. The curriculum consisted of altogether 12 dialogues distributed into two cycles. The first cycle started with Alcibiades (on self-knowledge) and ended with the Philebus (on the final cause of everything: the good), comprising two dialogues on ethics (the Gorgias and the Phaedo), two on logic (the Cratylus and the Theaetetus), two on physics (the Sophist and the Statesman), and two on theology (the Phaedrus and the Symposium). The second cycle included the two perfect dialogues that were considered to encompass Plato's whole philosophy (In Tim. I 13.14–17), namely, the Timaeus (on physics) and the Parmenides (on theology). In the form and method of his commentaries, Proclus is again influenced by Iamblichus. He assumes that each Platonic dialogue must have one main theme (skopos) to which all parts of the arguments ought to be related. To interpret the text, different approaches are possible (theological, mathematical, physical, ethical exegesis), but they are all interconnected according to the principle ‘everything in everything’ (panta en pasin). Thus, the Timaeus has in all its parts as its purpose the explanation of nature (physiologia). Even the introductory sections, the summary of the discussion in the Republic and the anticipation of the story about Atlantis, must be understood from this point of view; for they contain, in the mode of ‘images and examples,’ a description of the fundamental forces that are at work in the physical world. Also the long treatise on human nature, which concludes Timaeus' exposition, has ultimately a cosmological meaning, as the human animal is a microcosmos wherein all elements and all causes of the great universe are found. More problematic was the determination of the skopos of the Parmenides. In a long discussion with the whole hermeneutical tradition since middle-Platonism, Proclus defends a theological interpretation of the dialogue. According to him, the dialectical discussion on the One and the Many (ta alla) reveals the first divine principles of all things. With the exception of the commentary on the Cratylus, of which only a selection of notes from the original commentary is preserved, the exegetical works of Proclus have a clear structure. They divide the Platonic text in different lemmata or cited passages, discussing first the doctrine exposed in the particular section (pragmata, later called theoria), next commenting on the formulation of the argument (called lexis) [see Festugière 1963]. Whereas modern scholars usually accept a development in Plato's thought and distinguish between an early, middle, and late Plato, the Neoplatonists take the Platonic corpus as the expression of a divinely inspired and unitary philosophical doctrine. This enables them to connect different Platonic dialogues into one system and to see numerous cross-references within the Platonic oeuvre. 
What may seem to be contradictions between statements made in different dialogues can be explained by different pedagogical contexts: some dialogues are maieutic rather than expository, some are elenctic, refuting sophistic pseudo-science, and some offer a dialectical training to young students. A Neoplatonic commentary offers much more than a faithful interpretation of an authoritative text of Plato. Plato's text gives the commentator an opportunity to develop his own views on the most fundamental philosophical questions: the first principles, the idea of the Good, the doctrine of the Forms, the soul and its faculties, nature, etc. As was said, the two culminating dialogues, the Timaeus and the Parmenides, together offer a comprehensive view of the whole of Platonic philosophy.

Since the whole philosophy is divided into the study of intelligibles and the study of things within the cosmos – and quite rightly so, as the cosmos too is twofold, the intelligible and the sensible, as Timaeus himself will say in what follows (Timaeus 30c) – the Parmenides comprehends the study (pragmateia) of the intelligibles and the Timaeus the study of things within the cosmos. For the former teaches us all the divine orders and the latter all processions of things within the cosmos. (In Tim. I 12.30–13.7)

The interpretation of the Parmenides thus prepares the way for the Platonic Theology, offering the systematic structure for a scientific demonstration of the procession of all the orders of gods from the first principle. As Proclus explains at Theol. Plat. I 2, p. 9.8–19, the Platonic Theology falls into three parts (after a long methodological introduction). The first part (Theol. Plat. I 13–29) is an investigation into the common notions (koinai ennoiai) of the gods as we find them in Plato's dialogues: it is a treatise on the divine names and attributes. The second part (Theol. Plat. II–VI), which is incomplete, unfolds in a systematic way the procession of the divine hierarchies, from the One, that is, the first god, to the 'higher kinds,' i.e., angels, daimones, and heroes, while the third part, which is altogether missing, was supposed to deal with the individual hypercosmic and encosmic gods.

Before presenting his own views, Proclus usually critically evaluates the opinions and interpretations of his predecessors. In this respect, his commentaries are a rich and indispensable source for the history of Middle and Neo-Platonism. Thus, in his Commentary on the Timaeus Proclus reports and criticizes the views of Atticus, Numenius, Longinus, Plotinus, Porphyry, Iamblichus, Theodorus of Asine, and many others, usually ending in full agreement with the explanation of his master Syrianus. Moreover, in explaining Plato's text, Proclus frequently seeks confirmation of his exegesis in the Chaldaean Oracles or the Orphic tradition. Like Syrianus (see Helmig 2009), Proclus is often very critical of Aristotle and refutes his criticism of Plato's views. He is certainly not an advocate of the "harmony of Plato and Aristotle," which became the leading principle of the Alexandrian commentaries (of Ammonius and Simplicius). Proclus notes significant differences between the two philosophers in epistemology (theory of abstraction vs. learning as recollection), metaphysics (first principle, theory of Forms, theory of universals), physics (Plato's Timaeus vs. Aristotle's Physics), political philosophy (Aristotle's criticism of Plato's Republic), and language (Cratylus vs. De Interpretatione).
According to Proclus, Plato is not only far superior to Aristotle in his theology (as only Plato ascended beyond the intellect to posit the One as the ineffable principle of all things), but also in all other philosophical disciplines, in which we owe to him all important discoveries. Whereas the Peripatetics were accustomed to defend the superiority of Aristotle over Plato with reference to his impressive physical project, Proclus considers the latter inferior to the great achievement of Plato in the Timaeus (see Steel 2003). Aristotle's natural philosophy is the work of a zealous admirer, a disciple who tried to be better than the master:

It seems to me that the excellent Aristotle emulated the teaching of Plato as far as possible when he structured the whole investigation about nature. (In Tim. I 6.21–24)

Following Plato, Aristotle explains in his Physics the general principles of natural things: form, matter, nature, the essence and principles of movement, time and place; again taking inspiration from the Timaeus, he studies in other works the specific principles of the distinct regions of the physical world: in the De Caelo the celestial and the sublunary realm, and in On Generation and Corruption and in the Meteorologica the sublunary realm. In this domain, it cannot be denied, Aristotle did much more than his master. According to Proclus, however, he developed the subject 'beyond what is needed'. The same remark must be made about Aristotle's extensive zoological research. Whereas Plato limited himself in the Timaeus to an analysis of the fundamental principles of all living organisms, Aristotle gave most of his attention to the material components of animals, and only in a few cases did he consider the organism from the perspective of the form. Plato, on the contrary, when explaining the physical world, never got lost in detailed examination.

When trying to determine Proclus' profile as a philosopher, one has to keep in mind that Platonists were not keen on introducing new elements into the Platonic doctrine. They despised innovation (kainotomia). Yet it cannot be denied that Neoplatonic philosophy differs considerably from what we read in Plato's dialogues. There is also overwhelming evidence for continual discussions in the school on the right interpretation of Plato or on certain doctrinal points (such as the transcendence of the One, or the question whether the soul wholly descended from the intelligible world). In order to evaluate Proclus' originality, one ought to compare his views with those of the Neoplatonists before him, such as Plotinus, Porphyry, Iamblichus, and Syrianus. Only with regard to Plotinus is this possible to a great extent, because we still have the full corpus of Plotinus' writings. Proclus certainly admired the first 'founder' of the new Platonism and even devoted a commentary to the Enneads, of which, alas, we have only some fragments. He shared Plotinus' views on the three principal hypostases: the One, the Intellect, and the Soul. He often uses language inspired by his reading of Plotinus, as in his description of the union of the soul with the ineffable One. Yet on many points he is very critical of Plotinus, pointing to contradictions, rejecting provocative views such as the thesis that the One is cause of itself (causa sui), the doctrine of the undescended soul, or the identification of evil with matter.
Another radical difference from Plotinus (and Porphyry) is the importance attributed to theurgy for the salvation of the soul, and the authority of the Chaldaean Oracles. As said before, it is very difficult to mark off Proclus' originality with regard to his teacher Syrianus, the only predecessor he never criticizes. Of the literary production of the latter we have only his Commentary on Aristotle's Metaphysics. It is possible that most of Syrianus' courses on Plato were never published, but were continued and further worked out by Proclus himself. We have, however, the commentary on the Phaedrus by Hermeias, who sat together with Proclus in Syrianus' course. One gets the impression that Syrianus was very interested in Orphic theogony, whereas for Proclus the Chaldaean Oracles are more authoritative when developing a Platonic theology. But here again it is difficult to compare, as we do not possess Proclus' own commentary. Is Proclus, then, after all not so original, but only an excellent teacher and wonderful systematizer of the new Platonic doctrines that had become dominant in the school since Iamblichus? We shall never know, and it is after all not so important when assessing the philosophical merits of his works. To praise Proclus' philosophical achievements, Marinus devotes one chapter of the Life of Proclus (§ 23) to the discussion of the doctrines we owe to him. Surprisingly, for all his admiration for the master, he can only enumerate a few innovative doctrines; and they are of such minor importance that we shall not even discuss them in this article.

In late antiquity, Aristotle's Metaphysics was considered to be a theological work, because Aristotle investigates in this treatise the first principles of all being. 'This discipline may be called theology, because the principles of beings and the first and most perfect causes of things are what is most of all divine' (Asclepius, In Metaph. 4.1–3). Indeed, there is precedent for this in Aristotle himself, for in Metaphysics VI 1, 1026a15ff., he classifies 'first philosophy,' or metaphysics, as theology. Proclus himself often uses the term 'theology' in this metaphysical sense for the study of the first ('divine') principles of all things. His Elements of Theology can in fact be considered an introduction to his metaphysics. The work is a concatenated demonstration of 211 propositions, which may be divided into two halves: the first 112 propositions establish the One, unity without any multiplicity, as the ultimate cause of reality and lay down basic metaphysical concepts and structures such as causality, participation, the relation of wholes to parts, infinity, and eternity. The second half deals with the three kinds of true causes within reality recognized by Proclus: gods (which he calls henads or 'unities,' see below), intellects, and souls. This elaborate metaphysical framework makes it possible for Proclus to develop a scientific theology, i.e., a demonstration of the procession and properties of the different classes of gods. In what follows we will only discuss some characteristic features of Proclus' metaphysics (see further Steel 2011). On the whole, Proclus' doctrine of first principles is a further development of Plotinus' innovative interpretation of Platonic philosophy. With Plotinus, Proclus recognizes three fundamental levels of reality, called 'hypostases' (or self-subsistent entities): One, Intellect, and Soul.
However, following a concern of his predecessor Iamblichus for greater precision about the relationship and distinction between the One and Intellect, Proclus distinguishes between intelligible Being (to noêton: what is the object of intellectual intuition) and the intellective (to noeron: what is intelligizing), and introduces between both, as an intermediary level, the noêton-noeron (what is being intelligized and intelligizing). These three ontological levels thus correspond to the triad of Being, Life, and Intellect, which already played an important role in Plotinus' and Porphyry's speculations about the procession or 'emanation' of the intelligible world from the One, without, however, being hypostasized there. Since Zeller (influenced by Hegel) the application of the triadic structure to reality has been seen as the characteristic feature of Proclus' system, but see Dodds 1963², pp. xxii and 220, on possible sources of the doctrine. Although the distinction of aspects of reality as distinct hypostases and the multiplication of triads might suggest a loss of Plotinus' intuition of the unity of reality, it is important to stress that each member of the triad of Being, Life, and Intellect mirrors within itself their triadic relationship. This redoubled triadic structure must be understood as expressing an intrinsic and essential relation between successive levels of being. The intimate relation between Being, Life, and Intellect is the origin of the basic structure uniting all causes to their effects, namely the relation of immanence, procession, and reversion (monê-prohodos-epistrophê, see Elem. Theol. § 35). This triad has been called the 'triad of triads,' the underlying principle of all triadic structures:

Every effect remains in its cause, proceeds from it, and reverts upon it. For if it should remain without procession or reversion, it would be without distinction from, and therefore identical with, its cause, since distinction implies procession. And if it should proceed without reversion or immanence [sc. in the cause], it would be without conjunction or sympathy with its cause, since it would have no communication with it. And if it should revert without immanence [sc. in the cause] or procession, how can that which has not received its being from the higher revert existentially upon a principle thus alien? (Elem. Theol. § 35, transl. E.R. Dodds)

Another fundamental triad is the triad Unparticipated-Participated-Participating (amethekton-metechomenon-metechon). Plato's theory of participation, which explains the relation between the intelligible world and the sensible reality it grounds, raised many problems, several of which Plato himself brings up in the first part of his Parmenides. Most pressing was the puzzle: how can a Form be at the same time one and the same and yet exist as a whole in many participants (see Plato, Parmenides 131a-b)? The basic idea of the triad of participation, which can also be seen as responding to Aristotle's criticism of participation, is to maintain the transcendence, and hence the unity, of the Form, while allowing for its presence in the participants. The universal nature of the Form can be safeguarded thanks to the existence of an 'unparticipated' principle, that is, one that is not such as to be participated in by anything, to which the 'participated' entities, those that are participated in by something, are connected by means of 'the triad of triads' (Elem. Theol. § 23).
Proclus, however, also applies this principle to explain the most difficult problem facing Neoplatonic metaphysics, namely how to understand the procession of the manifold from the One. How can the One be wholly without multiplicity, when it must somehow be the cause of any and all multiplicity? The One remains in itself absolutely unparticipated; the many different beings proceeding from it participate in a series of participated henads or unities (gods). According to some scholars it was Iamblichus who introduced this innovative doctrine; others attribute it to Proclus' teacher Syrianus. Even if the doctrine does not originate as such from Iamblichus himself, the existence of the divine henads somehow follows from his law of mean terms. This law states that 'every producing cause brings into existence things like to itself before the unlike' (Elem. Theol. § 28). Thus there are no leaps in the chain of being; everything is linked together by similar terms. The henads fulfill this function, for as participated unities they bridge the gap between the transcendent One and everything that comes after it. The doctrine of the henads can thus be seen as a way of integrating the traditional gods of Greek polytheistic religion into the Neoplatonic metaphysics of the One.

a. Auxiliary and true causes. From Middle Platonism onwards, various attempts were made to integrate the Aristotelian doctrine of causes within the Platonic philosophy (see Steel 2003). In Plato's work, it was argued, one can find the four types of causality that Aristotle distinguishes, to wit the formal, material, efficient, and final causes, and, besides, the paradigmatic cause, which Aristotle wrongly rejected. This system of causes (with the addition of the instrumental cause as a sixth) became standard in later Neoplatonism. In his commentary on the Timaeus, Proclus observes that Aristotle never rises to the proper level of causality. For the four causes, as Aristotle understands them, can only be applied to the explanation of processes in the sublunary world. In the Platonic view, however, the material and formal causes are only subservient or instrumental causes. Those causes are in fact immanent in their effects and constitutive elements of the thing they produce. As Proclus asserts in prop. 75 of the Elements of Theology, 'that which exists in the effect is not so much a cause as an auxiliary cause (sunaition) or an instrument of the producer.' Causes in the proper sense must act upon their effects from outside, while transcending them. For a proper understanding of what the true causes of all things are, Proclus argues, one must follow Plato, who lifts us up to the level of the transcendent Forms and makes us discover the creative causality of the demiurge and the finality of the Good as the ultimate explanation of all aspirations. Although Aristotle also discusses efficient and final causes, he falls short of a true understanding of creative causality because he abandons the hypothesis of the Forms. Without the transcendent Forms, there can be no explanation of the being of things, only an explanation of their movement and change. Given Aristotle's narrow understanding of nature, it must come as no surprise, Proclus notes, that he admits cases of 'spontaneous generation' in the sublunary realm, which again restricts the scope of efficient causality. Moreover, because of his rejection of the demiurge (and of the One), Aristotle is also forced to limit efficient causality to the sublunary realm.
In fact, in his view there is no cause of the existence of the celestial bodies or of the sensible world as a whole: they exist necessarily in all eternity. But, as Proclus argues, such a position forces him to admit that the world has the capacity to constitute itself, which is absurd (see below). The Neoplatonic concept of causality is therefore quite different from that of the Peripatetics, even if both share the same terminology, such as final or efficient cause. Aristotle's causes are primarily intended to explain how things move and change, come to be and cease to be, though they also serve to explain what given things are. For the Neoplatonists, generalizing a principle formulated in the Philebus, namely 'that everything that comes to be comes to be through a cause' (26e, cf. Tim. 28a), causality has a much wider application than the explanation of change and motion: it concerns not only what things are, but what constitutes (hupostatikos) their being, and it can be used, analogously, to explain relations between all levels of being. Thus we can say of the One that it is the cause of Intellect, and of Intellect that it is the cause of Soul. In the Timaeus, however, the main interest is to understand what is the cause of the sensible world and all the cosmic beings: this is primarily the demiurge or creator of the world (the One is not the 'creator' of Intellect).

b. Corporeal and incorporeal causes. According to the Stoics, only bodies and powers or qualities of bodies are capable of acting and being acted upon (see Steel 2002). The Platonists often criticized the Stoic view and pointed to what they thought were the many contradictions involved, in particular, in the materialistic explanation of psychic activities or dispositions such as virtues. They defended the opposite view: all forms of causality must ultimately be explained as emanating from incorporeal entities. Proclus adopts Plotinus' view (IV 7, 8a) that only incorporeal beings can be causes in the strict sense, and includes it among the basic theorems of his metaphysics. See Elem. Theol. § 80 (cf. Theol. Plat. I 14, p. 61.23–62.1):

Every body has by its own nature the capacity to be acted upon, every incorporeal thing the capacity to act, the former being in itself inactive, the latter impassive; but through association with the body, the incorporeal too is acted upon, just as bodies too can act because of the participation in incorporeal entities.

In this proposition Proclus first sets apart the corporeal and the incorporeal as being, respectively, passive/inactive and active/impassive. However, the two realms are not absolutely separate from each other. The soul, which is an incorporeal substance, enters into association with the body and thus becomes itself, though only accidentally, subject to different passions. The body, on the contrary, may gain great profit from the association with the incorporeal. This is evident in the case of animated bodies, which owe all their vital activities to the presence of the soul in them. But inanimate natural bodies too acquire all their capacities and powers from nature and its inherent logoi or organizing rational principles (see Steel 2002).

c. The relation of cause to its effect. The relation between a cause and its effect is characterized by both similarity and dissimilarity. For every cause produces something that is similar to it, and every effect thus resembles its cause, though in a secondary and less perfect way.
But in so far as the effect is really distinguished from its cause, it acquires its own characteristic form of being, which was not yet developed on the level of its cause. For this reason each thing can be said to exist in three manners (Elem. Theol. § 65). First, it exists in itself, formally expressing its own character (kath' hyparxin). Second, it exists in a causal manner (kat' aitian), being anticipated in its cause. Finally, it exists as being participated (kata methexin) by the next level of being, which is its effect. Thus life exists in a living organism as participated by it; it characterizes the soul formally; and it also exists, qua Form, in the divine mind in a causal manner. Finally, Proclus stresses that the higher a cause is, the more comprehensive it is, and the further its effects reach (Elem. Theol. § 57). All things, including matter, which in itself, apart from the forms existing in it, has no 'being', participate in the One; all beings participate in Being; all plants and animals participate in Life; all rational souls participate in Intellect.

Proclus' epistemology is firmly rooted in his theory of the soul. For Proclus, souls as self-moving principles represent the lowest level of entities that are capable of reverting upon themselves (the so-called self-constituted beings [authypostata], see Elem. Theol. § 40–51). They are incorporeal, separable from bodies, and indestructible/immortal (Elem. Theol. § 186–7). Yet they are principles of life and of movement of bodies (Elem. Theol. § 188). In accordance with Proclus' general metaphysical principles (cf. above 3.1), from the unparticipated soul-monad proceed different kinds of participated soul: divine souls, daemonic souls, human souls, and souls of animals. Like other Platonists, Proclus frequently discusses the vexed question as to why a soul would descend into a body at all (the 'fall of the soul') (see Dörrie/Baltes 2002.2, 163–218). Moreover, the Neoplatonist distinguishes altogether three so-called vehicles (ochêmata) of the soul. The rational soul is permanently housed in the luminous vehicle, while the non-rational soul is located in the pneumatic vehicle. By being incarnated in a human body, the soul, or rather the vegetative soul, thus acquires a third, 'shell-like' vehicle. The theory of the different vehicles or the psychic 'astral body,' familiar nowadays from modern theosophic theories, fulfils several crucial functions in Neoplatonic psychology: it explains (a) how an incorporeal soul can be linked to a body, (b) how souls can move in space, (c) how souls can be punished after death (cf. Plato's myths), and (d) where certain faculties of the soul, such as imagination, are located. Proclus distinguishes between two kinds of vehicles, one mortal and the other immortal (In Tim. III 236.31 ff. and Elem. Theol. § 207–210). Proclus also adheres to the Platonic theory of transmigration, but argues that human souls never enter animal bodies as their constitutive forms. For only animal souls can be organizing principles of animal bodies. If some rational souls are 'degraded' in the next life and forced to live in an animal body because of their misdemeanour in this life, they are only 'relationally' (schesei) present to this animal body. Proclus distinguishes the following faculties of soul: sense perception, imagination (phantasia), opinion, discursive thought, and intellection. While sense perception and imagination belong to the non-rational soul, opinion forms the lowest level of rationality.
The aim of epistemological ascent is to free oneself eventually from the lower psychic faculties, including the lower rational ones, in order to enjoy a state of pure contemplation. As with many other Platonists, Proclus' epistemology is based on a theory of innate knowledge (in accordance with the Platonic dictum that 'all learning is recollection [anamnêsis]'). Proclus refers to the innate contents of the soul as its reason-principles (logoi) or Forms (eidê). These innate reason-principles constitute the essence of the soul. That is why they are called 'essential reason-principles' (logoi ousiôdeis) (Steel 1997). The traditional translation 'reason-principles' was chosen on purpose, because on an ontological level these same logoi serve as principles of all things. They are extended or unfolded images of the Forms that exist in intellect; and by means of them the world-soul, with the assistance of Nature, brings forth everything. In other words, the psychic logoi are instantiations of Platonic Forms on the level of soul, as are the logoi in Nature and the forms immanent in matter. According to the fundamental Neoplatonic axiom panta en pasin ('all things are in all things'), Forms exist on all levels of reality. But the logoi in soul also offer the principles of all knowledge and are the starting points of demonstration. At In Parm. IV 894.3–18 (ed. Steel) Proclus argues that predication is possible only with reference to these notions within the soul (see Helmig 2008), since they are universal in the true sense of the word. On the other hand, neither the transcendent Platonic Forms nor the forms in matter are taken to be universals proper by Proclus. The former are rather intelligible particulars, as it were, and cannot be defined (Steel 2004), while the latter are strictly speaking instantiated or individualised universals that are not shared by many particulars (see Helmig 2008, cf. above 3.1–2). For this reason, it does not make much sense to talk about 'the problem of universals' in Proclus. It is another crucial assumption of Proclus' epistemology that all souls share the same logoi (Elem. Theol. § 194–195). In terms of concept-formation this entails that psychic concepts, once they are grasped correctly, are universal, objective, and shareable (see Helmig 2011). Moreover, if all souls share the same logoi, and these logoi are the principles of reality (see above), then by grasping the logoi souls come to know the true principles or causes of reality. Aristotle had already written that to know something is to know its cause (Met. A 3, 983a25–26 and An. Post. I 2, 71b9–12).

In his Commentary on Plato's Timaeus, Proclus introduces an interesting distinction. Taking his start from the problem of how we can recognise certain objects, he considers the example of an apple. The different senses tell us that there is something sweet, red, smooth, and with a nice smell. And while common sense (koinê aisthêsis) can distinguish the different impressions of the special senses, only opinion (doxa) is capable of saying that the object there on the table is an apple. Doxa is able to do this because it has access to the innate logoi of the soul. However, as Proclus explains (In Tim. I 248.11 ff.), opinion only knows the 'that' (hoti), that is, it can recognize objects. Discursive thought (dianoia), on the other hand, also knows the 'why' (dihoti), that is, the causes of something.
This distinction can also be rephrased in terms of concepts, implying a distinction between factual concepts that allow us to identify or recognise certain objects, and concepts that fulfil an explanatory role. On the whole, Proclus' reading and systematisation of Plato's doctrine of learning as recollection extends Platonic recollection beyond higher learning, since already on the level of object recognition we employ concepts that originate from the innate logoi of the soul (Helmig 2011). Proclus argues at length that the human soul has to contain innate knowledge. Therefore, one should not consider it an empty writing tablet, as Aristotle does (Aristotle, De anima III 4). Aristotle is wrong in asserting that the soul contains all things only potentially. According to Proclus, the soul contains all things (i.e., all logoi) in actuality, though due to the 'shock of birth' it may seem as if the soul has fallen into potentiality. At In Crat. § 61, Proclus asserts that the soul does not resemble an empty writing tablet (agraphon grammateion) and does not possess all things in potentiality, but in act. The same idea is expressed at In Eucl. 16.8–13: 'the soul is not a writing tablet void of logoi, but it is always written upon and always writing itself and being written on by the intellect.' As in his philosophy of mathematics, Proclus presents a detailed criticism of the view that universal concepts are derived from sensible objects (by abstraction, induction, or collection). In the fourth book of his Commentary on Plato's Parmenides and in the two prologues of the Commentary on Euclid we find the most comprehensive criticism of abstractionism in antiquity (see Helmig 2010 and 2011).

Proclus devoted three entire books or 'monographs' (monobiblia) to problems of providence, fate, free choice, and evil. The first treatise (Ten Problems Concerning Providence) examines ten different problems about providence that were commonly discussed in the Platonic school. For Proclus providence (pronoia) is the beneficent activity of the first principle (the 'source of goods') and the gods (henads), who have their existence before intellect (pro-nou). One of the problems discussed is the question of how divine foreknowledge and human free choice can be reconciled. For if god knows not only past and present but also future events, the outcome of future events is already pre-determined (as god has a determinate knowledge of all things), and hence there is no free choice for humans. Proclus' answer, which ultimately goes back to Iamblichus, consists in applying the principle that the mode of knowledge is conditioned not by the object known but by the knower. In the case of gods, this entails that they know the contingent event in a non-contingent manner, the mutable immutably. They have an undivided knowledge of things divided and a timeless knowledge of things temporal (Elem. Theol. § 124, cf. De decem dub. § 6–8). Proclus' answer was later taken up by Ammonius in his Commentary on the De Interpretatione IX, and by Boethius both in the Consolation of Philosophy V 6 and in his Commentary on the De Interpretatione IX.

The second treatise (On Providence, Fate and What Depends on Us) replies to a letter of Theodore, a former friend of Proclus. In this letter Theodore, an engineer, had defended with several arguments a radical determinism, entirely excluding free choice. Before refuting Theodore's arguments, Proclus introduces some fundamental distinctions in order to solve the problems raised by his old friend.
The first distinction is between providence and fate:

Providence is essentially a god, whereas fate is something divine, but not a god. This is because it depends upon providence and is as it were an image of it. (De prov. § 14)

The second distinction is that between two types of soul: the rational soul is separable from the body, whereas the irrational soul resides in the body and is inseparable from its substrate; 'the latter depends in its being upon fate, the former upon providence' (De prov. § 15 ff.). The third distinction concerns knowledge and truth:

One type of knowledge exists in souls that are bound to the process of generation; […] another type is present in souls that have escaped from this place. (De prov. § 3.1–4.3)

These three distinctions taken together make it possible for Proclus to ultimately reconcile providence, fate, and free choice. In so far as we are rational agents and let ourselves be determined in our choices only by intelligible principles, we may transcend the determinism of fate, to which we belong as corporeal beings. Yet our actions are integrated into the providential order, as we willingly obey the divine principles.

The third treatise (On the Existence of Evils) asks why and how evil can exist if the world is governed by divine providence. Proclus argues that evil does not have an existence of its own, but only a derivative or parasitic existence (parhupostasis, sc. upon the good) (De mal. § 50).

In order to exist in a proper sense, an effect must result from a cause which proceeds according to its nature towards a goal that is intended. […] Whenever an effect is produced that was not intended or is not related by nature or per se to the agent, it is said to exist besides (para) the intended effect, parasitically upon it, as it were. (Opsomer-Steel 2003, 25)

This is precisely the case with evils, which are shortcomings and mistakes. As a failure is never intended qua failure by an agent, but is an unfortunate by-effect of its action, so evil qua evil is never caused by a cause. Therefore, Proclus continues, it is better to call its mode of existence a parhupostasis rather than a hupostasis, a term that belongs to those beings 'that proceed from causes towards a goal.' Parhupostasis or 'parasitic existence,' on the contrary, is the mode of existence of 'beings that neither appear through causes in accordance with nature nor result in a definite end.'

Evils are not the outcome of goal-directed processes, but happen per accidens, as incidental by-products which fall outside the intention of the agents. […] Therefore it is appropriate to call such generation a parasitic existence (parhupostasis), in that it is without end and unintended, uncaused in a way (anaition pôs) and indefinite. (De mal. § 50.3–9, 29–31, transl. by Opsomer-Steel 2003)

Dionysius the Areopagite adopted Proclus' views on evil in his work On the Divine Names. Thanks to this adaptation, Proclus' doctrine of evil had an enormous influence on later medieval discussions of evil both in Byzantium and in the Latin West, and dominated the philosophical debates on evil up to the 19th century.

A theological physics

Although Proclus composed a short (presumably early) treatise in which he summarises Aristotle's theory of movement (the Elements of Physics), he does not understand physics primarily as the study of the movement and change of natural phenomena, but rather seeks to connect these phenomena to their intelligible and divine causes (physics as a kind of theology, In Tim. I 217.25).
In the preface to his commentary on Plato's Timaeus Proclus sets out to prove why Plato's physics, as developed in the Timaeus, is superior to natural science in the Aristotelian sense (see Steel 2003). In Proclus' view Plato's Timaeus not only offers a physiologia, a science of nature in its many aspects, but also presents an explanation of the whole of nature by paying due attention to its incorporeal, divine causes: the natural world proceeds from the demiurge as the expression of an ideal paradigm and aims at the ultimate Good. Therefore, Plato's physio-logy is also a sort of theo-logy:

The purpose of Timaeus will be to consider the universe, insofar as it is produced by the gods. In fact, one may consider the world from different perspectives: insofar as it is corporeal or insofar as it participates in souls, both particular and universal, or insofar as it is endowed with intellect. But Timaeus will examine the nature of the universe not only along all those aspects, but in particular insofar as it proceeds from the demiurge. In that respect the physiology seems also to be a sort of theology, since natural things too have somehow a divine existence insofar as they are produced by the gods. (In Tim. I 217.18–27)

Before offering an explanation of the generation of the world, Timaeus sets out the fundamental principles that will govern his whole explanation of the physical world (Tim. 27d5–28b5). As Proclus observes, it is the task of a scientist to formulate at the start of his project the principles proper to the science in question, and not just to assume some general axioms. The science of nature too is based on specific axioms and assumptions, which must be clarified before we can move to the demonstration. In order to make physiologia a real science, the philosopher must deduce his explanation, as the geometer does, from a set of fundamental propositions or axioms.

If I may say what I think, it seems to me that Plato proceeds here in the manner of the geometers, assuming before the demonstrations the definitions and hypotheses through which he will make his demonstrations, thus laying the foundations of the whole science of nature. (In Tim. I 217.18–27)

Starting from these fundamental propositions, Proclus argues, Plato deduces the different types of causality that are required for a truly scientific understanding of nature (efficient, exemplary, and final cause; see Steel 2003 and above 3.2).

Time and eternity

Proclus discusses eternity and time in his commentary on the Timaeus and in propositions 53–55 of the Elements of Theology (see Steel 2001). Aristotle had defined time as a 'measure of movement according to the before and after.' Therefore, anything measured by time must have a form of existence or activity in which a past and a future state can be distinguished. In fact, an entity in time is never wholly and simultaneously what it is, but has an existence extended in a process of before and after. Opposed to it stands the eternal, which exists as a simultaneous whole and admits of no composition or change. 'There is no part of it,' writes Proclus, 'which has already subsisted and another that will subsist later but as yet is not. All that it is capable of being it already possesses in its entirety, without losing it and without accumulating' (Elem. Theol. § 52). One must distinguish the temporality of things in process from the time by which they are measured. Temporal things participate in time, without being time. 'Time exists prior to all things in time' (Elem. Theol.
§ 53). With Iamblichus, Proclus distinguishes absolute time, which is not participated in and exists 'prior' to all temporal things, from participated time, or rather the many participated times. The same distinctions must also be made regarding eternity. For Eternity precedes as cause, and measures, the multiple eternal beings that participate in it. 'Every Eternity is a measure of things eternal, every Time of things in time; and these two are the only measures of life and movement in things' (Elem. Theol. § 54). To conclude, there are two measures of the duration of things. First there is eternity, which measures at once the whole duration of a being. Second, there is time, which measures piecemeal the extension of a being that continually passes from one state to another. Eternity can be seen as the prefiguration of time; time as the image of eternity. Each of them governs a separate sphere of reality: eternity the intelligible being, time the temporal (corporeal and psychic) world of change. Notwithstanding the sharp distinction between the temporal and the eternal realm, there are beings that share in both eternity and time. As Proclus notes in the corollary to Elem. Theol. § 55, 'of the things which exist in time, some have a perpetual duration.' Thus the universe as a whole and the celestial spheres in it are both eternal and temporal. They are eternal because they never came into existence in time and will never cease to exist. But they are temporal because they possess their being only through a process of change in a sequence of moments. The same holds true for the psychic realm: all souls are immortal and indestructible; nevertheless, they are continually undergoing change. Therefore, as Proclus says, "'perpetuity' (aidiotês) is of two kinds, the one eternal (aiônion), the other in time; […] the one having its being concentrated in a simultaneous whole, the other diffused and unfolded in temporal extension (paratasis); the one entire in itself, the other composed of parts, each of which exists separately in a sequence of prior and posterior" (Elem. Theol. § 55, trans. Dodds, modified).

The eternity of the world

Against Aristotle's critique in De Caelo I 10, Proclus defends the view that the cosmos is 'both eternal and generated (genêtos).' As a corporeal being, the universe cannot produce itself and maintain itself in being. It depends for its existence upon a superior cause, and it is for that reason 'generated.' This does not prevent it, however, from existing for ever, in infinite time. As we just saw, Proclus distinguishes between what is eternal in an absolute sense (the intelligible realm) and what is eternal because it continues to exist for the whole of time, what Boethius later called 'aevum' in distinction from 'aeternum.' As Proclus notes, at the end of the Physics (8.10, 266a27–28) Aristotle himself establishes that no body can possess from itself an unlimited power to exist. If the world exists eternally, it must have this power from an incorporeal principle. Therefore, Aristotle too is forced to admit that the world is somehow generated, though it continues to exist for eternity. For it always receives its infinite power from its cause and never possesses it at once as a whole, because it is limited. The world is eternal because it has an infinite power of coming to be, not because it exists with infinite power (In Tim. I 252.11–254.18). This disagreement between Plato and Aristotle is ultimately due to a different view about the first principles of all things.
Aristotle denies the existence of Platonic Forms and therefore cannot admit an efficient or creative cause of the universe in the true sense of the word. Efficient causality only concerns the sublunary world. The celestial bodies and the world as a whole have no efficient cause of their being, but only a final cause. From this misunderstanding about the first principles follow all the other views that distinguish Aristotle from Plato. One gets the impression, Proclus says, that Aristotle, because he could not grasp the first principle of all things, the One, always has to find an explanation of things on a lower level:

Whatever Plato attributes to the One, Aristotle attributes to the intellect: that it is without multiplicity, that it is object of desire, that it does not think of secondary things. Whatever Plato attributes to the demiurgic intellect, Aristotle attributes to the heaven and the celestial gods. For, in his view, creation and providence come from them. Whatever Plato attributes to the substance of the heavens [sc. time], Aristotle attributes to their circular motion. In all these issues he departs from the theological principles and dwells upon the physical explanations beyond what is needed. (In Tim. I 295.20–27)

The celestial bodies and the place of the universe

Related to the eternity of the world is the question of the nature of the celestial bodies. Aristotle argues in De Caelo I 2 that the celestial bodies, which move with a natural circular motion, must be made of a simple substance different from the four sublunary simple bodies (whose natural movements are in a straight line: up or down). This 'fifth element,' which is by nature imperishable, is the ether. With this explanation Aristotle seems to oppose the view Plato defends in the Timaeus, where it is said that the demiurge made the divine celestial bodies 'mostly out of fire' (40a2–4). Proclus admits that the heaven is composed out of the four elements, with a preponderance of fire, but he insists that the elements are not present in the celestial bodies in the same mode as they exist in the sublunary bodies. Therefore Aristotle is right when he considers the heavens to constitute a fifth nature besides the four elements. 'For in the heavens the elements are not the same as they are here, but are rather the summits of them' (In Tim. II 49.27–29). If one counts the whole heaven, composed out of the best of the elements, as one nature and adds to it the four sublunary elements, we may speak of five natures altogether.

Contrary to Aristotle, Proclus argues that the whole universe (to pan) is in a place (topos). He can do this because his conception of place differs in many respects from Aristotle's own. The latter defined place as 'the unmoved limit of the surrounding body' (Physics IV 4, 212a21–22). From this it follows as a necessary corollary that the universe as a whole cannot be in a place, because there is simply nothing outside it. Aristotle's definition, as we learn from Simplicius' and Philoponus' Corollaries on Place, had been criticized by all later Neoplatonists (Syrianus, Proclus, Damascius, Simplicius, and Philoponus). It is notable that Proclus' own theory of place, as reported by Simplicius, differs considerably from other Neoplatonic theories in that he considered place an immaterial 'body', namely a special kind of immobile light. As emerges from Proclus' Commentary on Plato's Republic, his theory took inspiration from the column of light mentioned at Republic X, 616b.
Since the heavenly bodies were considered divine, because they are eternal living beings, the study of the heavens was of special importance to the Neoplatonists. In the preface to his treatise On Astronomical Hypotheses (a summary and evaluation of the astronomical views of his time), Proclus makes it clear that his approach is based on Plato's remarks on astronomy (especially in the Republic and in the Laws). He feels the need to go through the different theories because one can observe a great disagreement among ancient astronomers on how to explain the different phenomena (Hyp. I § 33). Fundamental to Proclus' approach is the distinction between two kinds of astronomy (Hyp. I § 1–3). The first kind contents itself with observing the heavenly phenomena and formulating mathematical hypotheses to explain them and make calculations and predictions possible. This is astronomy as practiced by the most famous astronomers before Proclus' time (Aristarchus, Hipparchus, and Ptolemy). The second kind, which is developed by Plato in the Timaeus and is confirmed by the tradition of the 'Chaldaeans and Egyptians,' investigates the intelligible causes of the heavenly movements. An example of this approach can be found in his Commentary on Plato's Republic (In Remp. II 227.23–235.3). There, Proclus explains that the seemingly irregular movements of the planets ought not to be explained by means of Ptolemy's complicated theory of eccentric spheres and epicycles, but are rather due to the fact that the planets are moved by intelligent souls which express in the movements of their bodies 'the invisible powers of the Forms' (232.1–4). Yet Proclus appreciates Ptolemy's astronomy as long as it is seen only as a mathematical-mechanical construction making it possible to calculate and predict the positions of the planets, and as long as it does not claim to have any real explanatory value. For the history of astronomy Proclus' Astronomical Hypotheses remains a most valuable document, since it represents one of the best introductions to Ptolemy's Almagest extant from antiquity and explains the most important ancient astronomical theories, before finally evaluating them critically (in chapter seven of the work). Proclus' arguments also played an important role in the scientific discussion of the Ptolemaic hypotheses in the 16th and 17th centuries.
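To make concrete what such a 'mathematical-mechanical construction' amounts to, here is a minimal modern sketch of a deferent-and-epicycle calculation. It is only an illustration of the general scheme, not Ptolemy's actual model: the function name and all parameter values are invented for the example, and real Ptolemaic astronomy adds eccentrics, equants, and carefully fitted periods.

```python
import math

def apparent_position(t, R, w_def, r, w_epi):
    """Geocentric position of a planet at time t in a toy
    deferent-and-epicycle model: the planet rides on a small circle
    (epicycle, radius r, angular speed w_epi) whose centre moves
    uniformly on a large circle (deferent, radius R, angular speed
    w_def) around the earth at the origin."""
    # Centre of the epicycle on the deferent.
    cx, cy = R * math.cos(w_def * t), R * math.sin(w_def * t)
    # The planet on the epicycle around that moving centre.
    return cx + r * math.cos(w_epi * t), cy + r * math.sin(w_epi * t)

# With a sufficiently fast epicycle, the apparent longitude as seen
# from the earth occasionally decreases: the retrograde motion of the
# planets is "saved" by two uniform circular motions, without any
# claim about real causes.
for t in range(12):
    x, y = apparent_position(t, R=10.0, w_def=0.2, r=3.0, w_epi=-1.5)
    print(f"t={t:2d}  longitude = {math.degrees(math.atan2(y, x)):8.2f} deg")
```

Such a construction predicts where a planet will appear without saying anything about why it moves, which is exactly the limited value Proclus grants to Ptolemaic astronomy.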
Proclus' distinctively non-empirical approach to physics and astronomy also influences his philosophy of mathematics, which is set out in the two prologues to his commentary on the first book of Euclid's Elements. The first prologue deals with the mathematical sciences in general, while the second prologue focuses on geometry proper. Proclus argues in great detail that the objects of the mathematical sciences cannot be derived from sensible particulars by means of abstraction. Because of the imperfect and deficient character of sensible objects, one cannot derive from them objects that are as perfect and as precise as mathematical objects are. Therefore, mathematical objects reside primarily in intellect and secondarily in souls (as logoi). We can grasp mathematical objects, as universal concepts (cf. 3.2), by means of recollection (anamnêsis). Since geometrical objects are not universals but particulars, and since by definition they possess extension, Proclus argues that their place is the human imagination (phantasia). Imagination acts as a mirror: it provides the mathematical objects that the soul projects into it with intelligible matter, by means of which geometrical objects gain extension and particularity. As with physics and astronomy, the ultimate aim of geometry is not the study of these extended, material objects. Rather, geometry serves an anagogical task (just as in Plato's Republic), leading the soul upwards to a study of the true and unextended causes of geometrical objects in the divine mind (In Eucl. 54.14–56.22).

Relying on Plato (Theaetetus 176a-b), late Platonists saw assimilation to god (homoiôsis theôi) as the goal (telos) of philosophy. Proclus was faithful to this ideal, as is attested by his biographer Marinus (Life of Proclus § 25). There was a fundamental discussion in late Neoplatonism on how this assimilation to the divine was possible for humans. Damascius (In Phaed. I § 172 Westerink) distinguishes two tendencies: Plotinus and Porphyry preferred philosophy, which makes us understand the divine principles of reality through rational explication, while others, like Iamblichus and his followers Syrianus and Proclus, gave priority to hieratic practice or theurgy (theourgia, hieratikê [sc. technê]). Their differing evaluations of theory and theurgy as means of salvation may be explained by their different views on the human soul and its possibilities of ascent to the divine realm. While Plotinus and Porphyry claimed that the superior part of the human soul always remains within the intelligible realm, in touch with the divine principles, and never completely descends into the body, Iamblichus, followed by Proclus, criticised such a view: the soul does indeed wholly descend into the body (Steel 1976, 34–51). Hence the importance of theurgic rites, established by the gods themselves, to make it possible for the human soul to overcome the distance between the mortal and the divine, which cannot be done through increasing philosophical understanding. In Theol. Plat. I 25, Proclus expresses his great admiration for the power of theurgy, which surpasses all human knowledge.

Allegedly, Neoplatonic theurgy originated with Julian the Theurgist, who lived in the time of the emperor Marcus Aurelius. At first sight, theurgy seems to share many characteristics with magic (the theory of cosmic sympathy, invocations, the animation of statues of gods and demons), but it is, as far as we can judge from the extant sources, clearly different from it. In his De Mysteriis Iamblichus developed a theology of the hieratic rituals from Platonic principles, which clearly sets them apart from vulgar magical practices. While magic assumes that the gods can be rendered subservient to the magicians, Platonic philosophers consider this impossible. According to Plato's principles of theology (Republic II and Laws X), the gods are immutable and cannot be bribed by means of sacrifices. Proclus' views on theurgy (of his treatise On Hieratic Art [i.e., on theurgy] only some fragments survive) are fully in line with these fundamental Platonic axioms. But how, then, does theurgy work? The theurgists take up an old belief, shared also by many philosophers, namely the natural and cosmic 'sympathy' (sumpatheia) pervading all reality. As in an organism, all parts of reality are somehow linked together as one. Another way of expressing this idea is the Neoplatonic principle, going back at least to Iamblichus, that everything is in everything (panta en pasin). According to Proclus, all reality, including its most inferior level, matter, is directed upwards towards the origin from which it proceeds.
In the words of Theodorus of Asine, whom Proclus quotes in his Commentary on the Timaeus (I 213.2–3): 'All things pray except the First.' As stated before (cf. 3.3), the human soul contains the principles (logoi) of all reality within itself. The soul, however, also carries sumbola or sunthêmata which correspond to the divine principles of reality. The same symbols also establish the secret correspondences between sensible things (stones, plants, and animals) and celestial and divine realities. Thanks to these symbols, things on different levels (stones, plants, animals, souls) are linked in a 'chain' (seira) to the divine principle on which they depend, as in the chain of the sun with its many solar beings, or the chain of the moon. Of great importance in the rituals was also the evocation of the secret divine names. In his Commentary on the Cratylus, Proclus compares divine names to the statues of the gods used in theurgy (In Crat. § 46), pointing to the fact that language too is an important means in the ascent to the divine. Proclus evokes the Platonic background of his theurgical beliefs, namely Plato's theory of love (erôs) as expressed in the Symposium and the Phaedrus, in his treatise On Hieratic Art:

Just as lovers move on from the beauty perceived by the senses until they reach the sole cause of all beautiful and intelligible beings, so too the theurgists (hieratikoi), starting with the sympathy connecting visible things both to one another and to the invisible powers, and having understood that all things are to be found in all things, established the hieratic science. (trans. Ronan, modified)

In the wake of an article by Anne Sheppard (1982), scholars usually distinguish three kinds of theurgy in Proclus. The first kind, as described in the treatise On Hieratic Art quoted above, was mainly concerned with animating statues (in order to obtain oracles or to evoke divine apparitions) or, in general, with activities related to physical phenomena or human affairs (influencing the weather, healing illnesses, etc.) (see Life of Proclus § 28–29). As emerges from our sources, it is this kind of theurgy that involved much ritualistic practice, including hymns and prayers. The second kind of theurgy makes the soul capable of ascending to the level of the hypercosmic gods and the divine intellect. This second kind too operates by means of prayers and invocations, and it seems especially characteristic of Proclus' Hymns. Finally, the third kind of theurgy establishes unity with the first principles, that is, with the One itself. This third kind corresponds to the level of the highest virtues (the 'theurgic virtues') in the scale of virtues. It is not clear whether any form of ritual is involved here at all. For this last stage of the Platonic homoiôsis theôi the following elements are of major importance: negative theology (culminating in the negation of the negation), mystic silence, and the intriguing notion of faith (pistis), which thus enters with a non-Platonic meaning, though even for this notion Proclus will search for confirmation in the Platonic dialogues.

Those who hasten to be conjoined with the Good no longer need knowledge and activity, but need to be established in a stable state and quietness. What then is it which unites us to the Good? What is it which causes in us a cessation of activity and motion? What is it which establishes all divine natures in the first and ineffable unity of goodness?
[…] It is, in short, the faith (pistis) of the Gods, which ineffably unites all the classes of Gods, of daemons, and of blessed souls to the Good. For we should investigate the Good not through knowledge (gnôstikôs) and in an imperfect manner, but giving ourselves up to the divine light, and closing the eyes, to become thus established in the unknown and occult unity of beings. For such a kind of faith is more venerable than cognitive activity, not in us only, but with the Gods themselves. (Proclus, Platonic Theology I 25, trans. Th. Taylor, modified)

In his Lectures on the History of Philosophy, in the chapter on Alexandrian Philosophy, Hegel said that 'in Proclus we have the culminating point of the Neo-Platonic philosophy; this method in philosophy is carried into later times, continuing even through the whole of the Middle Ages. […] Although the Neo-Platonic school ceased to exist outwardly, ideas of the Neo-Platonists, and specially the philosophy of Proclus, were long maintained and preserved in the Church.' That Proclus, who set up his elaborate Platonic Theology in an attempt to justify rationally a pagan religious tradition whose existence was threatened by the rising Christian civilization, would have had such an influence on Christian medieval thought might seem surprising. His influence, however, is mainly indirect, as his ideas circulated under the names of other philosophers.

There was, of course, a direct confrontation with the works of Proclus in the later Neoplatonic school (via Damascius and Ammonius, 5th–6th c.) and in Byzantium. In the 11th century, Michael Psellus studied Proclus intensively and even preserved fragments of his lost works. One of his disciples was the Georgian Ioanne Petritsi, who translated Proclus' Elements into Georgian and composed a commentary on it (Gigineishvili 2007). In the 12th century, bishop Nicolaus of Methone wrote a Christian reply to Proclus' Elements, thus showing indirectly that the work was still attracting interest. Moreover, Isaac Sebastocrator (11th–12th century) produced a Christian adaptation of the Tria opuscula. Around 1300 Proclus attracted the interest of the philosopher George Pachymeres, who prepared an edition of Proclus' Commentary on the Parmenides, which had been preserved only in a very corrupt tradition, and even composed a commentary on the last part of the dialogue, where Proclus' commentary was lacking. Cardinal Bessarion was an attentive reader of Proclus' works and possessed several manuscripts. We owe the preservation of the work of the pagan Proclus, who did not have a good reputation in theological circles in Byzantium, to the interest of scholars such as Psellus, Pachymeres, and Bessarion. And yet the number of direct readers of Proclus before the Renaissance was very limited. During the Middle Ages Proclus' influence was mainly indirect, above all through the writings of the Christian author Dionysius the Areopagite and the Arabic Liber de causis. Dionysius was a Christian author writing around 500 who was deeply fascinated by Proclus. He fully exploited Proclus' works, which he must have read intensively, to develop his own original Christian Platonic theology. He presented himself as a disciple of Saint Paul, a pretence which was generally accepted until the late 19th century, thus giving his works, and indirectly Proclus' theology, an almost apostolic authority.
As Dodds (²1969, xxviii) has nicely put it: “Proclus was […] conquering Europe in the guise of an early Christian.” The well-known Book of Causes is an Arabic adaptation of the Elements of Theology, made in the 9th century. Translated into Latin in the 12th century, the Liber de causis circulated in the Middle Ages under the name of Aristotle and was considered a complement to the Metaphysics, offering a treatise on the divine causes. The text entered the corpus of Aristotelian works and was intensively studied and commented on at the universities. Thomas Aquinas was the first to discover that this work derived in fact from Proclus' Elements of Theology, of which he had obtained a Latin translation made by his Dominican confrere William of Moerbeke in 1268 (see Thomas Aquinas, Commentary on the Liber de causis, introduction). Moerbeke also translated the Tria opuscula and the huge commentary on the Parmenides, but these works had almost no readers in the Middle Ages. In the 14th century, Berthold of Moosburg wrote a comprehensive commentary on the Latin Elements of Theology.

The real rediscovery of Proclus started in the Italian Renaissance, mainly thanks to Marsilio Ficino, who followed Proclus' influence in his Platonic commentaries and even composed, in imitation of Proclus, a Christian Platonic Theology on the immortality of the soul. Before Ficino, Nicolaus Cusanus had already studied Proclus intensively in translations. Proclus continued to enjoy wide interest at the turn of the 18th to the 19th century. Thomas Taylor (1758–1835) translated all of Proclus' works into English (reprinted by the Prometheus Trust [London]) and tried to reconstruct the lost seventh book of the Platonic Theology. Victor Cousin (1792–1867) aimed at a complete edition of Proclus' preserved works. At the beginning of the 20th century we have the great editions of the commentaries in the Teubner collection. Renewed philosophical interest in Proclus in the last century started with the edition of the Elements of Theology by Eric Robertson Dodds and carried on with the edition of the Platonic Theology by Henri Dominique Saffrey and Leendert Gerrit Westerink and, not least, in Germany with the works of Werner Beierwaltes.

Lists of Proclus' works are available in the two supplements.

1. Elements of Theology
- Dodds, E.R., 1933, 1963², The Elements of Theology, Oxford: Clarendon.
- Boese, H., 1987, Proclus: Elementatio theologica, translata a Guillelmo de Morbecca, (Series: KUL, Ancient and medieval philosophy, De Wulf-Mansion centre, Series 1, vol. 5), Leuven: Leuven University Press.

2. Platonic Theology
- Saffrey, H.D., and L.G. Westerink, 1968–1997, Proclus: Théologie platonicienne, 6 vol., (Series: Collection des Universités de France), Paris: Les Belles Lettres.
- Taylor, Th., 1816, Proclus' Theology of Plato, (Series: The Thomas Taylor Series, VIII), London: Prometheus Trust.

3.–5. Tria opuscula (Latin)
- Boese, H., 1960, Procli Diadochi tria opuscula (De providentia, libertate, malo) Latine Guilelmo de Moerbeka vertente et graece ex Isaacii Sebastocratoris aliorumque scriptis collecta, (Series: Quellen und Studien zur Geschichte der Philosophie, 1), Berlin: de Gruyter.

3. Ten Problems Concerning Providence
- Isaac, D., 1977, Proclus: Trois études sur la providence, I. Dix problèmes concernant la providence, (Series: Collection des Universités de France), Paris: Les Belles Lettres.
- Opsomer, J., and C. Steel, forthcoming, Proclus: Ten Doubts Concerning Providence, (Series: The Greek commentators on Aristotle), London: Duckworth.
4. On Providence, Fate and What Depends on Us
- Isaac, D., 1979, Proclus: Trois études sur la providence, II. Providence, fatalité, liberté, (Series: Collection des Universités de France), Paris: Les Belles Lettres.
- Steel, C., 2007, Proclus: On Providence, (Series: The Greek commentators on Aristotle), London: Duckworth.

5. On the Existence of Evils
- Isaac, D., 1982, Proclus: Trois études sur la providence, III. De l'existence du mal, (Series: Collection des Universités de France), Paris: Les Belles Lettres.
- Opsomer, J., and C. Steel, 2003, Proclus: On the Existence of Evils, (Series: The Greek commentators on Aristotle, 50), London: Duckworth.

6. Commentary on Plato's Alcibiades (up to 116b)
- Segonds, A.-Ph., 1985–1986, Proclus: Sur le premier Alcibiade de Platon, 2 vol., (Series: Collection des Universités de France), Paris: Les Belles Lettres.
- O'Neill, W., 1964, 1971², Proclus: Alcibiades I, The Hague: Martinus Nijhoff.

7. Commentary on Plato's Cratylus (up to 407c)
- Pasquali, G., 1908, Proclus Diadochus in Platonis Cratylum commentaria, (Series: Bibliotheca scriptorum Graecorum et Romanorum Teubneriana), Leipzig: Teubner [Reprint Stuttgart: Teubner, 1994].
- Duvick, B., 2007, Proclus. On Plato's Cratylus, (Series: The Ancient Commentators on Aristotle), London: Duckworth.

8. Commentary on Plato's Timaeus (up to 44d)
- Diehl, E., 1903–1906, Procli Diadochi In Platonis Timaeum commentaria, (Series: Bibliotheca scriptorum Graecorum et Romanorum Teubneriana), Leipzig: Teubner [Reprint Amsterdam: Hakkert, 1965].
- Tarrant, H., 2007, Proclus. Commentary on Plato's Timaeus, Vol. 1, Book I: Proclus on the Socratic State and Atlantis, Cambridge: Cambridge University Press.
- Runia, D.T., and M. Share, 2008, Proclus. Commentary on Plato's Timaeus, Vol. 2, Book II: Proclus on the Causes of the Cosmos and its Creation, Cambridge: Cambridge University Press.
- Baltzly, D., 2007, Proclus. Commentary on Plato's Timaeus, Vol. 3, Book III: Proclus on the World's Body, Cambridge: Cambridge University Press (see the review by C. Steel, Exemplaria Classica, 14 (2010), 425–433).
- Festugière, A.-J., 1966–1968, Commentaire sur le Timée, 5 vol., (Series: Bibliothèque des textes philosophiques), Paris: Vrin.

9. Commentary on Plato's Parmenides (up to 142a)
- Steel, C., 2007–2009, Procli in Platonis Parmenidem commentaria (edition prepared with the collaboration of P. d'Hoine, A. Gribomont, C. Macé and L. Van Campe), (Series: Oxford Classical Texts), 3 volumes, Oxford: Clarendon.
- Segonds, A.-Ph., and C. Luna, 2007, Proclus. Commentaire sur le Parménide de Platon, Tome 1, 1re partie: Introduction générale; 2e partie: Livre I, texte, (Series: Collection des Universités de France), Paris: Les Belles Lettres (on this edition see C. Steel, Mnemosyne, 63 (2010), 120–142).
- Morrow, G.R., and J.M. Dillon, 1987, Proclus' Commentary on Plato's Parmenides, Princeton (New Jersey): Princeton University Press.

10. Commentary on Plato's Republic (in different essays)
- Kroll, W., 1899–1901, Procli Diadochi in Platonis rem publicam commentarii, 2 vol., (Series: Bibliotheca scriptorum Graecorum et Romanorum Teubneriana), Leipzig: Teubner [Reprint Amsterdam: Hakkert, 1965].
- Festugière, A.-J., 1970, Proclus: Commentaire sur la république, 3 vol., (Series: Bibliothèque des textes philosophiques), Paris: Vrin.

11. Elements of Physics
- Ritzenfeld, A., 1912, Procli Diadochi Lycii institutio physica, (Series: Bibliotheca scriptorum Graecorum et Romanorum Teubneriana), Leipzig: Teubner.
- Boese, H., 1958, Die mittelalterliche Übersetzung der Stoicheiosis phusike des Proclus, (Series: Deutsche Akademie der Wissenschaften zu Berlin, Institut für griechisch-römische Altertumskunde, Veröffentlichungen 6), Berlin: Akademie Verlag.

12. Commentary on Euclid's Elements, Book I
- Friedlein, G., 1873, Procli Diadochi in primum Euclidis elementorum librum commentarii, (Series: Bibliotheca scriptorum Graecorum et Romanorum Teubneriana), Leipzig: Teubner [Reprint Hildesheim: Olms, 1967].
- Morrow, G.R., 1970, A Commentary on the First Book of Euclid's Elements, Princeton (N.J.): Princeton University Press [Reprinted 1992, with a new foreword by I. Mueller].
- New edition prepared by C. Steel, G. Van Riel and L. Van Campe, first volume forthcoming, Paris: Vrin.

13. Exposition of Astronomical Hypotheses
- Manitius, C., 1909, Procli Diadochi hypotyposis astronomicarum positionum, (Series: Bibliotheca scriptorum Graecorum et Romanorum Teubneriana), Leipzig: Teubner [Reprint Stuttgart: Teubner, 1974].

14. (frag.) On the Eternity of the World, against the Christians (18 arguments)
- Rabe, H., 1899, Ioannes Philoponus: De aeternitate mundi contra Proclum, Leipzig: Teubner [Reprint Hildesheim: Olms, 1963].
- Lang, H.S., A.D. Macro, and J. McGinnis, 2001, Proclus: On the Eternity of the World (De aeternitate mundi), Berkeley / Los Angeles / London: University of California Press.
- Gleede, B., 2009, Platon und Aristoteles in der Kosmologie des Proklos. Ein Kommentar zu den 18 Argumenten für die Ewigkeit der Welt bei Johannes Philoponos, (Series: Studien und Texte zu Antike und Christentum), Tübingen: Mohr.

15. (frag.) Commentary on Hesiod, Works and Days
- Marzillo, P., 2010, Der Kommentar des Proklos zu Hesiods ‘Werken und Tagen’. Edition, Übersetzung und Erläuterung der Fragmente, Tübingen: Narr.

16. Hymns
- Vogt, E., 1957, Procli hymni accedunt hymnorum fragmenta; epigrammata, scholia, fontium et locorum similium apparatus, indices, Wiesbaden: Harrassowitz.
- Van Den Berg, R.M., 2001, Proclus' Hymns: Essays, Translations, Commentary, Leiden – Boston – Köln: Brill.

17. Life of Proclus
- Saffrey, H.D., and A.-P. Segonds (together with C. Luna), 2001, Proclus ou Sur le bonheur, (Series: Collection des universités de France), Paris: Les Belles Lettres.
- Edwards, M., 2000, Neoplatonic Saints. The Lives of Plotinus and Proclus by their Students, Liverpool: Liverpool University Press, pp. 58–115.

- Scotti Muth, N., 1993, Proclo negli ultimi quarant'anni. Bibliografia ragionata della letteratura primaria e secondaria riguardante il pensiero procliano e i suoi influssi storici (anni 1949–1992), (Series: Pubblicazioni del Centro di ricerche di metafisica. Temi metafisici e problemi del pensiero antico. Studi e testi, 27), Milano: Vita e Pensiero.
- d'Hoine, P., Chr. Helmig, C. Macé, and L. Van Campe, under the direction of C. Steel, 2002 (immo 2005), Proclus: Fifteen Years of Research (1990–2004). An Annotated Bibliography, (Series: Lustrum, 44).
- An online bibliography of Proclus, including a list of editions and translations of his works, can be found on the website of the Leuven project “Plato Transformed” (see below, internet resources).
- Beutler, R., 1957, “Proklos, 4) Neuplatoniker,” in Realencyclopädie der classischen Altertumswissenschaft, 23.1, Stuttgart: Alfred Druckenmüller, coll. 186–247.
- Zeller, E., and R. Mondolfo, 1961, La filosofia dei Greci nel suo sviluppo storico, Parte III: La filosofia post-aristotelica, vol. VI: Giamblico e la Scuola di Atene, Firenze: La Nuova Italia, pp. 118–196.
- Bastid, P., 1969, Proclus et le crépuscule de la pensée grecque, (Series: Bibliothèque d'histoire de la philosophie), Paris: Vrin.
- Le Néoplatonisme, 1971, Actes du colloque international organisé à Royaumont, 9–13 juin 1969, (Series: Colloques Internationaux du CNRS), Paris: Éditions du CNRS.
- Trouillard, J., 1972, L'Un et l'âme selon Proclos, (Series: Collection d'études anciennes), Paris: Les Belles Lettres.
- De Jamblique à Proclus, 1975, Neuf exposés suivis de discussions, (Series: Entretiens sur l'Antiquité Classique, 21), Vandoeuvres-Genève: Fondation Hardt.
- Beierwaltes, W., 1965, 1979², Proklos. Grundzüge seiner Metaphysik, Frankfurt am Main: Vittorio Klostermann.
- Pépin, J., and H.D. Saffrey (eds.), 1987, Proclus lecteur et interprète des anciens, actes du colloque international du CNRS, Paris, 2–4 oct. 1985, (Series: Colloques Internationaux du CNRS), Paris: Éditions du CNRS.
- Boss, G., and G. Seel (eds.), 1987, Proclus et son influence, actes du colloque de Neuchâtel, juin 1985, Zürich: Éditions du Grand Midi.
- Duffy, J., and J. Peradotto (eds.), 1988, Gonimos. Neoplatonic and Byzantine Studies presented to Leendert G. Westerink at 75, Buffalo (New York): Arethusa.
- Reale, G., Introduzione a Proclo, (Series: I Filosofi, 51), Roma-Bari: Laterza.
- Bos, E.P., and P.A. Meijer (eds.), 1992, On Proclus and his Influence in Medieval Philosophy, (Series: Philosophia antiqua, 53), Leiden-Köln-New York: Brill.
- Siorvanes, L., 1996, Proclus. Neo-Platonic Philosophy and Science, New Haven: Yale University Press.
- Cleary, J. (ed.), 1997, The Perennial Tradition of Neoplatonism, (Series: Ancient and medieval philosophy, Series I, 24), Leuven: Leuven University Press.
- Segonds, A.-Ph., and C. Steel (eds.), 2000, Proclus et la Théologie platonicienne, actes du colloque international de Louvain (13–16 mai 1998) en l'honneur de H.D. Saffrey et L.G. Westerink, (Series: Ancient and medieval philosophy, Series I, 26), Leuven-Paris: Leuven University Press / Les Belles Lettres.
- Perkams, M., and R.M. Piccione (eds.), 2006, Proklos. Methode, Seelenlehre, Metaphysik, Akten der Konferenz in Jena am 18.–20. September 2003, (Series: Philosophia antiqua, 98), Leiden-Boston: Brill.
- Steel, C., 2006, “Neoplatonism” and “Proclus,” in Encyclopedia of Philosophy, D.M. Borchert (ed.), Detroit: Macmillan Reference USA, vol. 6, col. 546–557; vol. 8, col. 40–44.
- Beierwaltes, W., 2007, Procliana. Spätantikes Denken und seine Spuren, Frankfurt am Main: V. Klostermann.
- Steel, C., 2011, “Proclus,” in The Cambridge History of Philosophy in Late Antiquity, L. Gerson (ed.), Cambridge: Cambridge University Press, vol. 2, pp. 630–653.
- Adamson, P., H. Baltussen, and M.F.W. Stone (eds.), 2004, Philosophy, Science and Exegesis in Greek, Arabic and Latin Commentaries, vol. I, (Series: Bulletin of the Institute of Classical Studies, Supplement, 83.1), London: Institute of Classical Studies.
- Athanassiadi, P., 1999, “The Chaldean Oracles: Theology and theurgy,” in Pagan Monotheism in Late Antiquity, P. Athanassiadi, and M. Frede (eds.), Oxford: Clarendon, pp. 149–183.
- Baltes, M., 1976 & 1978, Die Weltentstehung des platonischen Timaios nach den antiken Interpreten, (Series: Philosophia Antiqua, 30 & 35), Leiden: Brill.
- Baltzly, D., 2002, “What goes up: Proclus against Aristotle on the fifth element,” Australasian Journal of Philosophy, 80: 261–287.
- –––, 2004, “The virtues and ‘becoming like god’: Alcinous to Proclus,” Oxford Studies in Ancient Philosophy, 26: 297–321.
- Barbanti, M., and F. Romano (eds.), 2002, Il Parmenide di Platone e la sua tradizione. Atti del III Colloquio Internazionale del Centro di Ricerca sul Neoplatonismo, Università degli Studi di Catania, 31 maggio – 2 giugno 2001, (Series: Symbolon. Studi e testi di filosofia antica e medievale, 24), Catania: CUECM.
- Beierwaltes, W., 1985, Denken des Einen, Frankfurt am Main: Vittorio Klostermann.
- –––, 1998, 2001², Platonismus im Christentum, (Series: Philosophische Abhandlungen, 73), Frankfurt am Main: Klostermann.
- Breton, S., 1969, Philosophie et mathématique chez Proclus, suivi de Principes philosophiques des mathématiques d'après le commentaire de Proclus aux deux premiers livres des Éléments d'Euclide par N. Hartmann, traduit par G. de Pesloüan, Paris: Beauchesne.
- Brisson, L., 1995, “Proclus et l'Orphisme,” in Orphée et l'Orphisme dans l'Antiquité gréco-romaine, (Series: Variorum Collected Studies Series), Aldershot: Ashgate, pp. 43–103.
- Charles-Saget, A., 1982, L'architecture du divin. Mathématique et philosophie chez Plotin et Proclus, (Series: Collection d'études anciennes), Paris: Les Belles Lettres.
- Coulter, J.A., 1976, The Literary Microcosm. Theories of interpretation of the later Neoplatonism, Leiden: Brill.
- Cürsgen, D., 2002, Die Rationalität des Mythischen: Der philosophische Mythos bei Platon und seine Exegese im Neuplatonismus, (Series: Quellen und Studien zur Philosophie, 55), Berlin-New York: de Gruyter.
- D'Ancona, C., 2005a, “Greek into Arabic: Neoplatonism in translation,” in The Cambridge Companion to Arabic Philosophy, P. Adamson, and R.C. Taylor (eds.), (Series: Cambridge Companions to Philosophy), Cambridge: Cambridge University Press, pp. 10–31.
- –––, 2005b, “Les Sentences de Porphyre entre les Ennéades de Plotin et les Eléments de Théologie de Proclus,” in Porphyre. Sentences, L. Brisson (ed.), 2 vol., Paris: Vrin, I, pp. 139–274.
- –––, 2007, “The libraries of the Neoplatonists. An introduction,” in The Libraries of the Neoplatonists, Proceedings of the meeting of the European Science Foundation ‘Late Antiquity and Arabic thought: Patterns in the constitution of European thought’ held in Strasbourg, March 12–14, 2004, C. D'Ancona (ed.), (Series: Philosophia antiqua, 107), Leiden-Boston: Brill, pp. xiii–xxxvi.
- D'Ancona, C., and R.C. Taylor, 2003, “Liber de Causis,” in Dictionnaire des philosophes antiques, R. Goulet, J.-M. Flamand, and M. Aouad (eds.), Paris: CNRS, pp. 599–647.
- D'Hoine, P., 2004, “Four problems concerning the theory of ideas: Proclus, Syrianus and the ancient commentaries on the Parmenides,” in Platonic Ideas and Concept Formation in Ancient and Medieval Thought, G. van Riel, and C. Macé (eds.), (Series: Ancient and Medieval Philosophy, Series I, 32), Leuven: Leuven University Press, pp. 9–29.
- –––, 2006, “The status of the arts. Proclus' theory of artefacts,” Elenchos, 27: 305–344.
- De Haas, F.A.J., 1997, John Philoponus' New Definition of Prime Matter: Aspects of its Background in Neoplatonism and the Ancient Commentary Tradition, (Series: Philosophia antiqua, 69), Leiden-Boston-Köln: Brill.
- Di Pasquale Barbanti, M., 1983, 1993², Proclo tra filosofia e teurgia, Catania: Bonanno.
- Dillon, J.M., 1972, “Iamblichus and the origin of the doctrine of Henads,” Phronesis, 17: 102–106.
- –––, 1986, “Proclus and the forty Logoi of Zeno,” Illinois Classical Studies, 11: 35–41.
- Dillon, J.M., and S. Klitenic, 2007, Dionysius the Areopagite and the Neoplatonist Tradition: Despoiling the Hellenes, (Series: Ashgate Studies in Philosophy and Theology in Late Antiquity), Aldershot: Ashgate.
- Dörrie, H., 1973, “La doctrine de l'âme dans le Néoplatonisme de Plotin à Proclus,” Revue de Théologie et de Philosophie, 23: 116–134.
- –––, 1975, De Jamblique à Proclus, (Series: Fondation Hardt, Entretiens, 21), Genève: Vandœuvres.
- Dörrie, H., M. Baltes, and Chr. Pietsch, 1987ff., Der Platonismus in der Antike, Band 1–7.1, Stuttgart-Bad Cannstatt: Frommann-Holzboog (three more volumes will be published, 7.2 and 8.1–2).
- Endress, G., 1973, Proclus Arabus. Zwanzig Abschnitte aus der Institutio theologica in arabischer Übersetzung, Wiesbaden: Steiner.
- Esser, H.P., 1967, Untersuchungen zu Gebet und Gottesverehrung der Neuplatoniker, Köln: Dissertation der Universität Köln.
- Festugière, A.-J., 1971, Études de philosophie grecque, Paris: Vrin [contains reprints of the following papers: “Modes de composition des Commentaires de Proclus” (pp. 551–574); “Contemplation philosophique et art théurgique chez Proclus” (pp. 585–596); “L'ordre de lecture des dialogues de Platon aux Ve/VIe siècles” (pp. 535–550)].
- Gersh, S., 1973, Κίνησις ἀκίνητος. A study of spiritual motion in the philosophy of Proclus, Leiden: Brill.
- –––, 1978, From Iamblichus to Eriugena. An investigation of the prehistory and evolution of the Pseudo-Dionysian tradition, (Series: Studien zur Problemgeschichte der antiken und mittelalterlichen Philosophie, 8), Leiden: Brill.
- Gerson, L.P., 1997, “Epistrophe eis heauton: History and Meaning,” Documenti e studi sulla tradizione filosofica medievale, 8: 1–32.
- –––, 2005, Aristotle and Other Platonists, Ithaca – London: Cornell University Press.
- Gritti, E., 2008, Proclo. Dialettica, Anima, Esegesi, (Series: Il Filarete, Collana di studi e testi), Milano: LED.
- Günther, H.-Chr., 2007, Die Übersetzungen der Elementatio Theologica des Proklos und ihre Bedeutung für den Proklostext, Leiden: Brill.
- Hankins, J., and W. Bowen (eds.), 2001–2006, Marsilio Ficino. Platonic Theology, 6 vol., (Series: I Tatti Renaissance Library), Cambridge (Mass.): Harvard University Press.
- Hankinson, R.J., 1998, Cause and Explanation in Ancient Greek Thought, Oxford: Clarendon.
- Halfwassen, J., 1999, Hegel und der spätantike Neuplatonismus. Untersuchungen zur Metaphysik des Einen und des Nous in Hegels spekulativer und geschichtlicher Deutung, (Series: Hegel-Studien, 40), Bonn: Bouvier.
- Harari, O., 2006, “Methexis and geometrical reasoning in Proclus' Commentary on Euclid's Elements,” Oxford Studies in Ancient Philosophy, 30: 361–389.
- Helmig, C., 2004, “What is the systematic place of abstraction and concept formation in Plato's philosophy? Ancient and modern readings of Phaedrus 249b–c,” in Platonic Ideas and Concept Formation in Ancient and Medieval Thought, G. van Riel, and C. Macé (eds.), (Series: Ancient and Medieval Philosophy, Series I, 32), Leuven: Leuven University Press, pp. 83–97.
- –––, 2008, “Proclus and other Neoplatonists on universals and predication,” Documenti e Studi sulla Tradizione Filosofica Medievale, 19: 31–52.
- –––, 2009, “‘The truth can never be refuted’ – Syrianus' view(s) on Aristotle reconsidered,” in Syrianus et la Métaphysique de l'Antiquité tardive, A. Longo (ed.), (Series: Elenchos, 51), Rome: Bibliopolis, pp. 347–380.
- –––, 2010, “Proclus' Criticism of Aristotle's Theory of Abstraction and Concept Formation in Analytica Posteriora II 19 and elsewhere,” in Interpreting Aristotle's Posterior Analytics in Late Antiquity and Beyond, F.A.J. de Haas, M.E.M.P.J. Leunissen, and M. Martijn (eds.), (Series: Philosophia Antiqua, 124), Leiden – Boston – Köln: Brill, pp. 27–54.
- –––, 2011, Forms and Concepts. Concept Formation in the Platonic Tradition. A Study on Proclus and his Predecessors, (Series: Philosophia Antiqua), Leiden – Boston – Köln: Brill [forthcoming].
- Klibansky, R., 1981, The Continuity of the Platonic Tradition during the Middle Ages, with a new preface and four supplementary chapters, together with Plato's Parmenides in the Middle Ages and the Renaissance, with a new introductory preface, München: Kraus.
- Kremer, K., 1966, 1971², Die neuplatonische Seinsphilosophie und ihre Wirkung auf Thomas von Aquin, (Series: Studien zur Problemgeschichte der antiken und mittelalterlichen Philosophie, 1), Leiden: Brill.
- Kuisma, O., 1996, Proclus' Defense of Homer, (Series: Commentationes Humanarum Litterarum, 109), Helsinki: Societas Scientiarum Fennica.
- Kutash, E., 2011, Ten Gifts of the Demiurge: Proclus on Plato's Timaeus, London/New York: Bristol Classical Press.
- Lang, H.S., 2005, “Perpetuity, eternity, and time in Proclus' Cosmos,” Phronesis, 50: 150–169.
- Lernould, A., 1987, “La dialectique comme science première chez Proclus,” Revue des Sciences Philosophiques et Théologiques, 71: 509–535.
- –––, 2001, Physique et Théologie: Lecture du Timée de Platon par Proclus, Villeneuve d'Ascq (Nord): Presses Universitaires du Septentrion.
- Linguiti, A., 1990, L'ultimo platonismo greco: principi e conoscenza, (Series: Accademia toscana di scienze e lettere La Colombaria. Studi, 112), Firenze: Olschki.
- Lloyd, A.C., 1990, The Anatomy of Neoplatonism, Oxford: Clarendon.
- Mansfeld, J., 1994, Prolegomena: Questions to be Settled Before the Study of an Author, or a Text, (Series: Philosophia antiqua, 61), Leiden: Brill.
- Martijn, M., 2010, Proclus on Nature. Philosophy of Nature and its Methods in Proclus' Commentary on Plato's Timaeus, (Series: Philosophia antiqua, 121), Leiden: Brill.
- O'Meara, D.J., 1986, “Le problème de la métaphysique dans l'Antiquité tardive,” Freiburger Zeitschrift für Philosophie und Theologie, 33: 3–22.
- –––, 1989, Pythagoras Revived. Mathematics and Philosophy in Late Antiquity, Oxford: Clarendon.
- Opsomer, J., 2000, “Proclus on demiurgy and procession. A neoplatonic reading of the Timaeus,” in Reason and Necessity. Essays on Plato's ‘Timaeus’, M.R. Wright (ed.), London: Duckworth, pp. 113–143.
- –––, 2001, “Proclus vs Plotinus on matter (De mal. subs. 30–7),” Phronesis, 46: 154–188.
- –––, 2003, “La démiurgie des jeunes dieux selon Proclus,” Les Études Classiques, 71: 5–49.
- –––, 2006, “To find the Maker and Father. Proclus' exegesis of Plato Tim. 28C3–5,” Études Platoniciennes, 2: 261–283.
- Opsomer, J., and C. Steel, 1999, “Evil without a cause: Proclus' doctrine on the origin of evil, and its antecedents in Hellenistic philosophy,” in Zur Rezeption der hellenistischen Philosophie in der Spätantike, Akten der 1. Tagung der Karl-und-Gertrud-Abel-Stiftung vom 22.–25. September 1997 in Trier, Th. Fuhrer, and M. Erler (eds.), (Series: Philosophie der Antike, 9), Stuttgart: Steiner, pp. 229–260.
- Phillips, J., 2007, Order from Disorder. Proclus' Doctrine of Evil and its Roots in Ancient Platonism, (Series: Ancient Mediterranean and Medieval Texts and Contexts; Studies in Platonism, Neoplatonism, and the Platonic Tradition, 5), Leiden: Brill.
- Pichler, R., 2006, Allegorese und Ethik bei Proklos. Untersuchungen zum Kommentar zu Platons Politeia, (Series: Klassische Philologie, 2), Berlin: Frank & Timme.
- Podskalsky, G., 1976, “Nikolaos von Methone und die Proklosrenaissance in Byzanz (11.–12. Jh.),” Orientalia Christiana Periodica, 42: 509–523.
- Praechter, K., 1973, Kleine Schriften, H. Dörrie (ed.), Hildesheim: Olms.
- Radke, G., 2006, Das Lächeln des Parmenides. Proklos' Interpretationen zur Platonischen Dialogform, (Series: Untersuchungen zur antiken Literatur und Geschichte, 78), Berlin-New York: de Gruyter.
- Roth, V.M., 2008, Das ewige Nun. Ein Paradoxon in der Philosophie des Proklos, (Series: Philosophische Schriften, 72), Berlin: Duncker & Humblot.
- Saffrey, H.D., 1987, Recherches sur la Tradition Platonicienne au Moyen Âge et à la Renaissance, Paris: Vrin.
- –––, 1990, Recherches sur le Néoplatonisme après Plotin, (Series: Histoire des doctrines de l'Antiquité classique, 14), Paris: Vrin.
- –––, 2000, Le Néoplatonisme après Plotin, (Series: Histoire des doctrines de l'Antiquité classique, 24), Paris: Vrin.
- –––, 2002, L'Héritage des anciens au Moyen Âge et à la Renaissance, (Series: Histoire des doctrines de l'Antiquité classique, 28), Paris: Vrin.
- Sezgin, F., 2000, Proclus Arabus and the Liber de Causis (Burûklûs ‘inda l-‘Arab wa-kitâb al-îdâh fî l-khayr al-mahd), Frankfurt am Main: Institute for the History of Arabic-Islamic Science at the Johann Wolfgang Goethe University.
- Sheppard, A.D.R., 1980, Studies on the 5th and 6th Essays of Proclus' Commentary on the Republic, Göttingen: Vandenhoeck & Ruprecht.
- –––, 1982, “Proclus' attitude to theurgy,” Classical Quarterly, 32: 212–224.
- Steel, C., 1978, The Changing Self. A Study on the Soul in Later Neoplatonism: Iamblichus, Damascius and Priscianus, Brussel: Paleis der Academiën.
- –––, 1991, “The One and the Good: Some Reflections on a Neoplatonic Identification,” in The Neoplatonic Tradition. Jewish, Christian and Islamic Themes, A. Vanderjagt, and D. Pätzold (eds.), (Series: Dialectica Minora, 3), Köln: Dinter, pp. 9–25.
- –––, 1997, “Breathing thought. Proclus on the innate knowledge of the soul,” in The Perennial Tradition of Neoplatonism, J. Cleary (ed.), (Series: Ancient and Medieval Philosophy, Series I, 24), Leuven: Leuven University Press, pp. 293–309.
- –––, 1999, “Proclus on the existence of evil,” and S. Menn, “Commentary on Steel,” in Proceedings of the Boston Area Colloquium in Ancient Philosophy, vol. 14, J.J. Cleary (ed.), Leiden–New York–Köln: Brill, pp. 83–109.
- –––, 2001, “The Neoplatonic doctrine of Eternity and Time and its influence on Medieval Philosophy,” in The Medieval Concept of Time. Studies on the Scholastic Debate and its Reception in Early Modern Philosophy, P. Porro (ed.), (Series: Studien und Texte zur Geistesgeschichte des Mittelalters, 75), Leiden–New York–Köln: Brill, pp. 3–31.
- –––, 2002, “Neoplatonic versus Stoic causality: the case of the sustaining cause (‘sunektikon’),” in Quaestio 2: Causality, C. Esposito, and P. Porro (eds.), (Series: Yearbook of the History of Metaphysics), Turnhout: Brepols, pp. 77–93.
- –––, 2003, “Why should we prefer Plato's Timaeus to Aristotle's Physics? Proclus' critique of Aristotle's causal explanation of the physical world,” in Ancient Approaches to Plato's Timaeus, R.W. Sharples, and A. Sheppard (eds.), (Series: Bulletin of the Institute of Classical Studies, Supplement, 78), London: Institute of Classical Studies, pp. 175–187.
- –––, 2004, “Definitions and ideas,” in Proceedings of the Boston Area Colloquium in Ancient Philosophy, vol. 19, J.J. Cleary, and G.M. Gurtler (eds.), Leiden: Brill, pp. 103–121.
- –––, 2005a, “Theology as first philosophy. The Neoplatonic concept of Metaphysics,” in Quaestio 5: Metaphysica, sapientia, scientia divina. Soggetto e statuto della filosofia prima nel Medioevo, Atti del Convegno della Società Italiana per lo Studio del Pensiero Medievale, Bari, 9–12 giugno 2004, P. Porro (ed.), Turnhout: Brepols, pp. 3–21.
- –––, 2005b, “Proclus' Defence of the Timaeus Against Aristotle's Objections. A Reconstruction of a Lost Polemical Treatise,” in Plato's Timaeus and the Foundations of Cosmology in Late Antiquity, the Middle Ages and Renaissance, Th. Leinkauf, and C. Steel (eds.), (Series: Ancient and Medieval Philosophy, I 34), Leuven: Leuven University Press, pp. 163–193.
- –––, 2008, “Proclus on the Mirror as Metaphor of Participation,” in Miroir et savoir. La transmission d'un thème platonicien, des Alexandrins à la philosophie arabo-musulmane, D. De Smet, M. Sebti, and G. de Callataÿ (eds.), Leuven: Leuven University Press, pp. 79–96.
- –––, 2010, “Proclus,” in The Cambridge History of Late Ancient Philosophy, L.P. Gerson (ed.), Cambridge: Cambridge University Press [forthcoming].
- Tarrant, H., and D. Baltzly (eds.), 2006, Reading Plato in Antiquity, London: Duckworth.
- Trouillard, J., 1982, La mystagogie de Proclos, Paris: Les Belles Lettres.
- Van Den Berg, R.M., 2008, Proclus' Commentary on the Cratylus in Context. Ancient Theories of Language and Naming, (Series: Philosophia antiqua, 112), Leiden–Boston: Brill.
- Van Liefferinge, C., 1999, La théurgie: des ‘Oracles chaldaïques’ à Proclus, (Series: Kernos, Supplément, 9), Liège: Centre International d'Étude de la Religion Grecque Antique.
- Van Riel, G., 2000, Pleasure and the Good Life. Plato, Aristotle and the Neoplatonists, (Series: Philosophia antiqua, 85), Leiden–New York–Köln: Brill.
- Watts, E.J., 2006, City and School in Late Antique Athens and Alexandria, (Series: The Transformation of the Classical Heritage, 41), Berkeley: University of California Press.
- Westerink, L.G., J. Trouillard, and A.P. Segonds, 1990, Prolégomènes à la philosophie de Platon, (Series: Collection des Universités de France), Paris: Les Belles Lettres.
- Whittaker, J., 1975, “The Historical Background of Proclus' Doctrine of the authypostata,” in De Jamblique à Proclus, H. Dörrie (ed.), (Series: Fondation Hardt, Entretiens, 21), Genève: Vandœuvres, pp. 193–230 [Reprint in: Studies in Platonism and Patristic Thought, (Series: Variorum reprints), London: Aldershot, 1984, XVI].

Other Internet Resources
- Bibliography Proclus – DWMC, University of Leuven.
- Editions and Translations Proclus – DWMC, University of Leuven.
- Répertoire des sources philosophiques antiques (CNRS – Paris).
- Search on Proclus at the Open Library.
- W.J. Hankey, French Neoplatonism in the 20th Century, in Animus 4 (1999).

The authors would like to thank Radek Chlup (Prague), Antonio Luis Costa Vargas (Berlin), and Sabrina Lange (Berlin) for comments.
fwe2-CC-MAIN-2013-20-44287000
June 22, 1976. North Atlantic. At 21:13 GMT a pale orange glow was observed behind a bank of towering cumulus to the west. Two minutes later a white disc was observed while the glow behind the cloud persisted. High probability that this may have been caused by interferometry using 3-dimensional artificial scalar-wave Fourier expansions as the interferers. Marine Observer, 47(256), Apr. 1977, p. 66–68.

"Unidentified phenomenon, off Barbados, West Indies." August 22, 1969. West Indies. A luminous area bearing 310 degrees grew in size and rose in altitude, then turned into an arch or crescent. High probability that this may have been caused by interferometry using artificial scalar-wave Fourier expansions as the interferers. Marine Observer, 40(229), July 1970, p. 107–108.

"Optical phenomenon: Caribbean Sea; Western North Atlantic." Mar. 20, 1969. Caribbean Sea and Western North Atlantic. At 23:15 GMT, a semicircle of bright, milky-white light became visible in the western sky and rapidly expanded upward and outward during the next 10 minutes, dimming as it expanded. High probability that this may have been caused by interferometry using artificial scalar-wave Fourier expansions as the interferers. Marine Observer, 40(227), Jan. 1970, p. 17–18.

See Also
7B.21 - Electricity
13.06 - Triple Currents of Electricity
14.35 - Teslas 3 6 and 9
16.04 - Nikola Tesla describing what electricity is
16.07 - Electricity is a Polar Exchange
16.10 - Positive Electricity
16.16 - Negative Electricity - Russell
16.17 - Negative Electricity - Tesla
16.29 - Triple Currents of Electricity
Figure 16.04.05 and Figure 16.04.06 - Nikola Tesla and Lord Kelvin
Part 16 - Electricity and Magnetism
Tesla - Electricity from Space
What Electricity Is - Bloomfield Moore
fwe2-CC-MAIN-2013-20-44290000
“Sanitation: A Global Estimate of Sewerage Connections without Treatment and the Resulting Impact on MDG Progress” Environmental Science & Technology It may be the 21st century, with all its technological marvels, but 6 out of every 10 people on Earth still do not have access to flush toilets or other adequate sanitation that protects the user and the surrounding community from harmful health effects, a new study has found. The research, published in ACS’ journal Environmental Science & Technology, says the number of people without access to improved sanitation is almost double the previous estimate. Jamie Bartram and colleagues explain that the current definition of “improved sanitation” focuses on separating humans from human excrement, but does not include treating that sewage or other measures to prevent it from contaminating rivers, lakes and oceans. Using that definition, 2010 United Nations estimates concluded that 4.3 billion people had access to improved sanitation and 2.6 billion did not. The new estimates used what the authors regarded as a more realistic definition from the standpoint of global health, since untreated sewage is a major cause of disease. They refined the definition of “improved sanitation” by discounting sewage systems lacking access to sewage treatment. They concluded that about 60 percent of the world’s population does not have access to improved sanitation, up from the previous estimate of 38 percent.
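The revised share is easy to sanity-check with simple arithmetic. The sketch below is not part of the study; the 2010 world population of roughly 6.9 billion is an assumption introduced here for illustration:

    # Back-of-the-envelope check of the sanitation figures quoted above.
    # The ~6.9 billion world population for 2010 is an assumed figure,
    # not a number taken from the study itself.
    WORLD_POP = 6.9e9
    WITHOUT_OLD = 2.6e9  # UN estimate of people without improved sanitation

    share_old = WITHOUT_OLD / WORLD_POP
    print(f"Old definition: {share_old:.0%} without improved sanitation")
    # -> about 38%, matching the previous estimate cited above

    share_new = 0.60  # revised share after discounting untreated sewerage
    without_new = share_new * WORLD_POP
    print(f"New definition: {without_new / 1e9:.1f} billion people without")
    # -> about 4.1 billion, i.e. almost double the old 2.6 billion figure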
fwe2-CC-MAIN-2013-20-44291000
Agriculture has always been absolutely necessary for the very survival of humankind. For centuries, it has provided people with food, clothing, and heating, and it has employed most of the total active population. Nowadays, we dress mainly in artificial and synthetic fibers and heat ourselves with fossil fuels, but the primary sector still supplies all the food we need. The available projections suggest that the world population will grow further in the next decades, while the nutritional status of the world's poor must improve. Thus, agricultural production has to rise, and it has to rise with little or no further environmental damage: modern agriculture has, in fact, the reputation, largely deserved, of being environment-unfriendly. The challenges ahead, however, should not let people forget the past achievements.

From 1800 to 2000, the world population has risen about six- to sevenfold, from less than one billion to six billion. Yet, world agricultural production has increased substantially faster--at the very least, tenfold in the same period. Nowadays, people are better fed than in the past: each person in the world has, in theory, 2,800 calories available, with a minimum of some 2,200 in sub-Saharan Africa. Famines, which haunted preindustrial times, have disappeared from most of the world. The latest survey by the Food and Agriculture Organization (FAO) estimates that 800 million people (i.e., some 10-15% of the world population) are still undernourished--but this may be an overestimation, and the proportion has fallen drastically, by about a quarter since 1970. Furthermore, undernourishment and famine are caused much more by the skewed distribution of income (poor entitlements, in Sen's definition) and by political events (international wars, civil wars, terrorism) than by a sheer lack of food. Actually, many OECD countries have, since the 1950s, been struggling with an overproduction of food.

The achievements of agriculture appear even more remarkable if one looks at employment. Agriculture employed more than 75 percent of the total workforce in traditional agrarian societies, and, as late as 1950, about two-thirds throughout the world. Nowadays, in the advanced countries, the share is about 2.5 percent--eleven million people out of 430 million. In the rest of the world, agricultural workers still account for almost half the labor force, with a world total of some 1.3 billion workers (775 million in China and India alone). Such a massive transfer of labor, one of the key features of modern economic growth in the past two centuries, was made possible by a dramatic increase in product per worker. In short, agriculture is an outstanding success story. Its achievements have been outshone by the even faster growth of industry and services, but the latter would have been almost impossible if workers had not had sufficient food to eat.

The aim of this book is to describe this success, and to understand its causes. Chapter 2 illustrates the peculiarities of agriculture. Its production depends on the environment: soil, climate, and the availability of water have always determined what peasants could grow, how much they had to work, and how much they could obtain from their efforts. These constraints have been relaxed in recent times, without totally disappearing. The factor endowment, and notably the amount of land per agricultural worker, determines the intensity of cultivation.
The combined effects of the environment and the factor endowment have created long-lasting and area-specific patterns of land use, crop mix, and techniques ("agricultural systems"). The next three chapters present the main statistical evidence, loosely arranged in a production-function framework. Chapter 3 deals with the long-term trends in output (which has always been growing), relative prices (increasing in the first half of the nineteenth century, then roughly constant or slowly declining), and world trade in agricultural products (increasing quite fast before 1913 and again after 1950). The focus then shifts to the proximate causes of this growth: the increase in the use of factors (chapter 4) and productivity growth (chapter 5). Historians have a fairly clear idea about the long-run change in factors. The total agricultural workforce remained roughly constant all over the world, with the notable exception of Western settlement countries (North America, Australia, Argentina, and so on) during the settlement process--that is, until the beginning of the twentieth century. The stock of capital grew fast beginning in the late nineteenth century, as machines were substituted for labor. Although this conventional wisdom is not exactly wrong, it is inspired a bit too much by the experience of the Western world. The growth of the land stock has been much more geographically widespread and has lasted longer than is commonly assumed. Agricultural capital consists mainly of buildings, irrigation works, and the like, and thus it increased slowly but steadily throughout the period. The real process of mechanization started only in the 1950s, and the agricultural workforce has gone on growing in absolute terms. Thus, the growth of inputs (extensive growth) was the major cause of worldwide growth in agricultural production until the 1930s, but after World War II, it slowed down. Consequently, most of the big increase in total output in the past half-century has been achieved thanks to the growth in total factor productivity. The available estimates, surveyed in chapter 5, suggest that its growth has been accelerating over time and that it has been faster in "advanced" countries than in LDCs. In the "advanced" countries, productivity growth has accounted for the whole of the increase in agricultural output. Contrary to a common view, productivity growth has been faster in agriculture than in the rest of the economy, including manufacturing.

Chapter 6 focuses on the main source of this great achievement, technical progress. It starts by describing the main innovations, and then focuses on the process of their adoption. As in the rest of the economy, innovations are adopted when profitable, and profitability ultimately depends on the expected productivity gains and on factor endowment and factor prices. However, as the chapter argues, a standard neoclassical model cannot explain all the features of technical progress in agriculture. Agricultural innovations depend on the environment and entail a high level of risk, and many of them yield little or no financial reward to the inventor. These features call for a greater role of the state, both in the production and in the diffusion of innovations.

Chapters 7, 8, and 9 deal with the institutional framework of agricultural production. "Institutions" is a fairly vague word, which resists all attempts at a general definition.
Chapters 7 and 8 deal with property rights on labor and land, markets for goods and inputs (labor, land, capital), and agricultural co-operatives. Chapter 7 is, to some extent, a general introduction to these issues and to the approaches of economists and historians to institutions. It discusses how institutions work and how they might affect the performance of agriculture. Chapter 8 describes the main changes--the creation of property rights on labor and land, the trends in the average size of farms, in landownership, and in contracts, and the development of markets for goods and factors. It also puts forward some tentative hypotheses on the likely causes of these changes and on their effects on agricultural performance--although, it is fair to say, the discussion on these issues is surprisingly thin when compared to the attention they have received in the theoretical literature.

Chapter 9 focuses on the effects of agricultural policies. It argues that state intervention has only really affected agricultural development since the 1930s, and that, by and large, it has reduced the aggregate welfare of the whole population.

The tenth, and last, chapter shifts the focus from agriculture to the whole economy. How did the growth of agricultural output and the change in input use affect modern economic growth? This issue has been the subject of much discussion in historical perspective, and it still looms large in the debates about the optimal development strategy for less developed countries. The chapter has no ambition to solve such a controversial issue. It sketches out the prevailing theories and deals very briefly with three case studies. The book closes with some very general remarks about the future of agriculture.

The summary makes it clear that this is quite an ambitious book. It deals with many issues, and covers two centuries of agricultural history in the whole world, from Monsoon Asia to Midwest prairies. Any attempt to be comprehensive would be foolish. The potentially relevant literature spans dozens of languages, and many disciplines, from "traditional" agricultural economics and history to more "trendy" social and environmental history. Just to quote an example, the fourth volume of A Survey of Agricultural Economics Literature, Agriculture in Economic Development, contains more than two hundred pages of references. Assuming (conservatively) that there are twenty entries per page, the total sums up to almost four thousand entries. Some of these works may be purely theoretical, and thus outside the scope of this book, but the majority should still be considered. The survey refers only to the less developed countries, deals (almost) exclusively with the post-World War II period, lists only works in English, French, Spanish, and Portuguese published before 1990, and is probably, as with all surveys, not complete. A simple proportion suggests that there are thousands of potentially relevant references.

Clearly, no one in the world (certainly not this author) can reasonably claim to master all the literature. And even if this miracle were possible, it would be impossible to review it thoroughly and keep the book to a reasonable size. Selective reading is an imperative. Thus, I have decided to focus on more general contributions, and to favor works that frame their views in economic theory and buttress their statements with data. This approach has some clear and often rehearsed shortcomings. Mainstream economic theory may appear too abstract to be relevant.
Agriculture is a highly local activity, and specialists in agrarian history always warn against broad generalizations, which, they claim, cannot capture the peculiarities of the area that they are dealing with. Many data are missing, unreliable, or sometimes plainly wrong. Reliable "historical" (pre-1950) data are available only for some "advanced" countries (those of Western Europe, the USA, Japan, etc.). International organizations such as the UN, FAO, World Bank, and the OECD have made a magnificent effort to extract comparable data for all countries from the information provided by national statistical offices, which are sometimes incomplete and/or of dubious quality. However, there are some reasons for hope. Modern development economics, with its emphasis on institutions, transaction costs, information, and so on, provides powerful tools for understanding rural societies, which can also be employed to explore societies of the past. Economic historians have unearthed a great deal of new data, which, in spite of all their shortcomings, do throw light on many key issues. And, last but not least, I feel that there is no real alternative. A history of agriculture based on anecdotal evidence from local case studies would be a boundless and largely meaningless list of details. But details are sometimes fascinating and are useful for illustrating general points--to put some flesh on the bare bones of quantitative analysis, so to speak. The reader may find the selection of these examples somewhat haphazard (why--for example--discuss tenure in China during the 1930s instead of that in Guatemala during the 1970s?). It is, however, guided, whenever possible, by two principles: first, to deal with "large" countries (China, India, Russia, the USA) and, second, to focus on controversial cases. The interest of "big" countries is self-evident, while focusing on controversial issues makes it possible to give the reader a flavor of the current research and debates.
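A quick back-of-the-envelope calculation ties together the magnitudes quoted at the opening of this description. The sketch below is purely illustrative and uses only the figures given above:

    # Rough per-capita arithmetic behind the opening claims: population up
    # six- to sevenfold from 1800 to 2000, agricultural output up at least
    # tenfold over the same period.
    OUTPUT_MULTIPLIER = 10.0  # stated minimum growth of world farm output

    for pop_multiplier in (6.0, 7.0):
        per_capita = OUTPUT_MULTIPLIER / pop_multiplier
        print(f"Population x{pop_multiplier:.0f}: "
              f"output per person up at least x{per_capita:.2f}")
    # -> roughly 1.4x to 1.7x more food per person, consistent with the
    #    claim that people today are, on average, better fed than in 1800.

    # Labor share in the advanced countries: eleven million agricultural
    # workers out of a 430-million workforce.
    print(f"Agricultural share of the workforce: {11 / 430:.1%}")  # ~2.6%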
fwe2-CC-MAIN-2013-20-44295000
Saturn's largest moon, Titan, pictured to the right of the gas giant in a Cassini spacecraft view.

Scientists have discovered methane lakes in the tropical areas of Saturn's moon Titan, one of which is about half the size of Utah's Great Salt Lake, with a depth of at least one meter. The long-standing bodies of liquid were detected by NASA's Cassini spacecraft, which has been orbiting Saturn since its arrival at the ringed planet in 2004. It was previously believed that such bodies of liquid existed only in the polar regions of Titan.

According to a report published in the journal Nature, the liquid for the lakes could come from an underground aquifer. "An aquifer could explain one of the puzzling questions about the existence of methane, which is continually depleted," said the lead author, Caitlin Griffith. "Methane is a progenitor of Titan's organic chemistry, which likely produces interesting molecules like amino acids, the building blocks of life," Griffith noted.

The lakes have persisted since they were detected by Cassini's visual and infrared mapping spectrometer in 2004. Only one rainfall has been recorded there, which suggests that the lakes could not have been replenished by rain. According to circulation models of Titan, liquid methane in the moon's equatorial region evaporates and is then carried by wind to the polar regions. The methane then condenses in the colder temperatures and forms the polar lakes after falling to the surface.

"We had thought that Titan simply had extensive dunes at the equator and lakes at the poles, but now we know that Titan is more complex than we previously thought," said Linda Spilker, a Cassini project scientist. She further added, "Cassini still has multiple opportunities to fly by this moon going forward, so we can't wait to see how the details of this story fill out."

NASA launched the Cassini spacecraft in 1997, and its mission has been extended several times, most recently until 2017.
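For a sense of scale, the minimum volume implied by the description can be estimated. The sketch below assumes a surface area of roughly 4,400 square kilometers for the Great Salt Lake; that figure is an outside assumption (the lake's area varies considerably), not a number from the article:

    # Minimum volume of the largest tropical lake described above:
    # about half the area of Utah's Great Salt Lake, at least 1 m deep.
    GSL_AREA_KM2 = 4400.0  # assumed area of the Great Salt Lake

    lake_area_km2 = 0.5 * GSL_AREA_KM2
    min_depth_km = 1.0 / 1000.0  # at least one meter, expressed in km

    volume_km3 = lake_area_km2 * min_depth_km
    print(f"Area ~{lake_area_km2:.0f} km^2, volume >= {volume_km3:.1f} km^3")
    # -> roughly 2,200 km^2 of surface and at least ~2 km^3 of liquid methane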
fwe2-CC-MAIN-2013-20-44296000
WEST LAFAYETTE, Ind. - Parts of the human brain think about the same word differently, at least when it comes to prepositions, according to new language research in stroke patients conducted by scientists at Purdue University and the University of Iowa. People who speak English often use the same prepositions, words such as "on," "in," "around" and "through," to indicate time as well as location. For example, compare "I will meet you 'at' the store," to "I will meet you 'at' 3 p.m." These examples show how time may be thought of metaphorically in terms of space. Just because it's the same word, however, doesn't mean the brain thinks about it the same way, said David Kemmerer, an assistant professor of psychological sciences and linguistics at Purdue's College of Liberal Arts. "There has been a lot of cognitive neuroscience research about how the brain processes language pertaining to concrete things, such as animals or tools," said Kemmerer, who also is an adjunct faculty member at the University of Iowa's Department of Neurology, where this research was conducted. "This is the first cognitive neuroscience study to investigate brain regions for spatial and temporal relations - those involving time - used in language. "I was interested in whether these spatial or temporal prepositions can be dissociated in individuals with brain damage. One might think that if a person's knowledge of the word 'at' to describe location is impaired, then his or her ability to use that same preposition to describe time would be disrupted. But we found the words implying time are processed independently." This research was conducted at the Benton Neuropsychology Laboratory in Iowa's Carver College of Medicine and was funded by the Purdue Research Foundation and the National Institute for Neurological Disease and Stroke. Kemmerer's paper is available online at Neuropsychologia. "This study has potential implications for neurology," Kemmerer said. "A clinician could use information about how brain injuries in stroke patients affect specific speech components to develop therapies to help their patients." The four patients in Kemmerer's study were used because of similar brain injuries, such as lesions from stroke, in the perisylvian region, which is responsible for language processing. Kemmerer found the stroke subjects who passed the language tests asking about prepositions relevant to time subsequently failed when these same words reflected spatial meanings. For example, the subjects were asked to choose the correct preposition for scenarios such as, "The baseball is 'on/in/against' the glove." Two subjects did not select "in" as the correct answer. However, they did select "in" as the correct preposition for "It happened 'through/on/in' 1859." The other two subjects' test performances were the opposite. Kemmerer's earlier research with Daniel Tranel, professor of neurology at Iowa's Carver College of Medicine, had confirmed that the left inferior prefrontal and left inferior parietal regions of the brain play a crucial role in processing spatial prepositions. The previous research with Tranel was published in October's Cognitive Neuropsychology. This work, which has explored how different types of words are retrieved by different parts of the brain, is part of a larger-scale investigation being carried out by Tranel and his colleagues at the University of Iowa. 
"For example, we have identified the anterior left temporal lobe as being critical for proper nouns, whereas the left inferior prefrontal/premotor region is important for verbs," Tranel said. "The collaboration between myself, a neuropsychologist, and professor Kemmerer, a neurolinguist, has yielded important breakthroughs in understanding how the brain operates language, due to the unique perspectives that these researchers bring to a common research agenda." Three of the patients in Kemmerer's recent study also had damage to their brains' left hemispheres, in an area known as the parietal lobe, which houses the supramarginal gyrus. This area is involved in spatial meaning, and it is the part of the brain that guides action. For example, the supramarginal gyrus coordinates how a person moves his or her hand toward a glass of water. Previous research with normal brains identifies this area as important also in the knowledge and meaning of prepositions. The patients with damage to the supramarginal gyrus did not score high on the tasks that evaluated their knowledge of prepositions that dealt with space. In comparison, the fourth patient, who did not have similar damage to this region of the brain, was able to demonstrate complete knowledge of spatial prepositions. Kemmerer's next step will be looking at how the brain processes these prepositions in other languages. "If this is true in English, then what about the 6,000 other known languages in the world? This time-and-space metaphor is used from language to language, but how the metaphor is used does vary," he said. In English, months of the year are treated as containers. People say "in January" or "in February." Other languages treat months as surfaces. For example, "on January" or "on February." Despite the difference, there is a metaphor at work, Kemmerer said. "The cross-linguistic ubiquity of the metaphor suggests that people are naturally inclined to conceptualize time in terms of space," he said. "Nevertheless, the neuropsychological data suggest that people don't need to invoke the metaphor every time they use prepositions to talk about time. Just as the word 'breakfast' doesn't require one to think of a morning meal in terms of breaking a fast, so the sentence 'She arrived at 1:30' doesn't require one to think of time as a series of points on a line." Source: Eurekalert & othersLast reviewed: By John M. Grohol, Psy.D. on 21 Feb 2009 Published on PsychCentral.com. All rights reserved. Great things are not done by impulse, but by a series of small things brought together. -- Vincent Van Gogh
fwe2-CC-MAIN-2013-20-44304000
DURHAM, N.C. – The amount of exercise may be more important than its intensity for improving cardiovascular health, according to a new analysis of the first randomized clinical trial evaluating the effects of exercise amount and intensity in sedentary overweight men and women. This finding of the value of moderate exercise should be encouraging news for those who mistakenly believe only intense exercise can improve health, said the researchers who conducted the trial. The trial, led by researchers at Duke University Medical Center, found that a moderate exercise regimen, such as 12 miles of brisk walking each week, can provide significant improvements in fitness levels while reducing the risks of developing cardiovascular disease. Furthermore, the researchers found that any additional increase in amount or intensity can yield even more health benefits. The results of the analysis were published in the October 2005 issue of the journal Chest. "People only need to walk up to 12 miles per week or for about 125 to 200 minutes per week to improve their heart health," said the lead author Brian Duscha. "Our data suggest that if you walk briskly for 12 miles per week you will significantly increase your cardiovascular fitness levels compared to baseline. If you increase either your mileage or intensity, by going up an incline or jogging, you will achieve even greater gains." The researchers said that their findings should inspire those couch potatoes who have been hesitant to begin exercising regularly -- especially since an earlier analysis of the same participants by the same Duke team found that people who do not exercise and maintain the same diet will gain up to four pounds each year. "The participants in our study received the fitness benefits without losing any weight," Duscha said. "Many people exercise to lose weight, and when that doesn't occur, they stop exercising. However, the truth is that you can improve cardiovascular fitness and reduce the risk of heart disease by exercising without losing weight." To better understand the effects of differing amounts of exercise, the researchers studied 133 overweight sedentary men and women whose blood lipid levels were beginning to become high enough to affect their health. They were randomized into one of four groups: no exercise, low amount/moderate intensity (the equivalent of 12 miles of walking per week), low amount/vigorous intensity (12 miles of jogging per week) or high amount/vigorous intensity (20 miles of jogging per week). Since the trial was designed solely to better understand the role of exercise, patients were told not to alter their diet during the course of the trial, which lasted six months for the group that did not exercise and eight months for the exercise groups. The additional two months for the exercise groups came at the beginning of the trial, when participants slowly ramped up their exercise to their designated levels. The exercise was carried out on treadmills. For their analysis, the team compared two measurements of fitness – peak VO2 and time to exhaustion (TTE) – before and after the trial. Peak VO2 is a calculation that measures the maximum amount of oxygen that can be delivered by circulating blood to tissues in a given period of time while exercising. While all the exercise groups saw improvements in peak VO2 and TTE after completing their exercise regimens, the researchers noticed some interesting trends.
"We found that when we compared the low amount/moderate intensity group to the low amount/vigorous intensity group, we did not see a significant improvement in peak oxygen consumption," Duscha said. "However, when we increased the amount of exercise from 12 to 20 miles – at the same intensity – we did see an improvement in peak oxygen consumption." Also, although no statistically significant difference was detected between the low amount/moderate intensity group and the low amount/high intensity group, the researchers did see a trend toward both a separate and combined effect of exercise intensity and amount on increased peak VO2 levels. The Duke team was led by cardiologist William Kraus, M.D., who received a $4.3 million grant from the National Heart, Lung and Blood Institute in 1998 to investigate the effects of exercise on sedentary overweight adults at risk for developing heart disease and/or diabetes. The results of that five-year trial, known as STRRIDE (Studies of Targeted Risk Reduction Interventions through Defined Exercise), and other analyses of the data collected, began to be published in 2002. The Duke team is currently enrolling patients in STRRIDE II, in which researchers are seeking to determine the effects of weight training, alone and in combination with aerobic training, on cardiovascular health. Joining Duscha were Duke colleagues Cris Slentz, Ph.D., Johanna Johnson, Daniel Bensimhon, M.D., and Kenneth Knetzger. Joseph Houmard, Ph.D., East Carolina University, was also a member of the team. Source: Eurekalert & othersLast reviewed: By John M. Grohol, Psy.D. on 21 Feb 2009 Published on PsychCentral.com. All rights reserved. A psychiatrist asks a lot of expensive questions that your wife will ask for free. -- Joey Adams
Yifu Deng of QUT's School of Public Health studied the interplay between genetics, smoking and the development of Parkinson's disease in 400 people who had Parkinson's disease and 400 people without it.

Dr Deng examined the genetic background of individuals in each group for the presence of the CYP2D6 gene, which had previously been suggested to metabolise the chemical compounds found in cigarette smoke. He found that smokers with the gene who metabolised the cigarette smoke compounds quickly were less likely to be protected than those who metabolised the compounds more slowly.

"It seems that if the chemical compounds stay in the body longer they are more likely to have a preventative effect," Dr Deng said. "It also seems that if you have the gene but you are not a smoker, the gene may have no use in preventing Parkinson's."

Dr Deng said it was not known how the cigarette smoke compounds protected against Parkinson's. He warned that there were still many smokers who suffered from Parkinson's, and that smoking was notorious for causing cancers.

Parkinson's disease is a common degenerative neurological disease in the elderly, affecting up to 4.9 percent of Australians aged 55 and over.

"Our study findings aid in further understanding of the causes of Parkinson's disease and may help identify people who are at higher risk of the disease," he said.

The study is the first to examine the genetic epidemiology of Parkinson's disease by addressing individual genetic types in relation to cigarette smoke metabolism. Dr Deng's study may reveal new targets for strategies to alter Parkinson's disease risk.
- Main article: Tomography

Computed tomography (CT), originally known as computed axial tomography (CAT scan) or body section roentgenography, and also called X-ray computed tomography, is a medical imaging method employing tomography, in which digital geometry processing is used to generate a three-dimensional image of the internals of an object from a large series of two-dimensional X-ray images taken around a single axis of rotation. The word "tomography" is derived from the Greek tomos (slice) and graphia (describing).

CT produces a volume of data which can be manipulated, through a process known as windowing, in order to demonstrate various structures based on their ability to block the X-ray beam. Although historically (see below) the images generated were in the axial or transverse plane (orthogonal to the long axis of the body), modern scanners allow this volume of data to be reformatted in various planes or even as volumetric (3D) representations of structures.

Since its introduction in the 1970s, CT has become an important tool in medical imaging and neuroimaging, supplementing X-rays and medical ultrasonography. Although it is still quite expensive, it is the gold standard in the diagnosis of a large number of different disease entities.

Head
Diagnosis of cerebrovascular accidents and intracranial hemorrhage is the most frequent reason for a "head CT" or "CT brain". Scanning is done with or without intravenous contrast agents. CT generally does not exclude infarct in the acute stage of a stroke, but it is useful for excluding a bleed (so that anticoagulant medication can be commenced safely). CT is also useful in the setting of trauma for evaluating facial and skull fractures. In the head/neck/mouth area, CT scanning is used for surgical planning for craniofacial and dentofacial deformities, evaluation of cysts and some tumors of the jaws/paranasal sinuses/nasal cavity/orbits, diagnosis of the causes of chronic sinusitis, and planning of dental implant reconstruction.

Chest
For evaluation of chronic interstitial processes (emphysema, fibrosis, and so forth), thin sections with high-spatial-frequency reconstructions are used. For evaluation of the mediastinum and hilar regions for lymphadenopathy, IV contrast is administered. CT angiography of the chest (CTPA) is also becoming the primary method for detecting pulmonary embolism (PE) and aortic dissection; it requires accurately timed rapid injections of contrast and high-speed helical scanners. CT is the standard method of evaluating abnormalities seen on chest X-ray and of following findings of uncertain acute significance.

Cardiac
With the advent of subsecond rotation combined with multi-slice CT (up to 64 slices), high resolution and high speed can be obtained at the same time, allowing excellent imaging of the coronary arteries. Images with a high temporal resolution are formed by updating a proportion of the data set used for image reconstruction as it is scanned; in this way, individual frames in a cardiac CT investigation are significantly shorter than the shortest tube rotation time. It is uncertain whether this modality will replace invasive coronary catheterization. Cardiac MSCT carries very real risks, since it exposes the subject to the equivalent of 500 chest X-rays in terms of radiation, and the relationship of this radiation exposure to increased risk of breast cancer has yet to be definitively explored. The positive predictive value of cardiac CT is approximately 82%, while the negative predictive value is in the range of 93%; sensitivity is about 81% and specificity about 94%. The real benefit of the test is its high negative predictive value: when the coronary arteries are free of disease by CT, patients can be worked up for other causes of chest symptoms.
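These post-test figures follow from the quoted sensitivity and specificity combined with the pretest likelihood of disease, via Bayes' rule. A minimal sketch in Python; the function name and the 25% pretest prevalence are illustrative assumptions chosen to reproduce the quoted numbers, not values given in the text:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule: turn test characteristics plus pretest prevalence
    into post-test predictive values (PPV, NPV)."""
    tp = sensitivity * prevalence               # true positives
    fp = (1 - specificity) * (1 - prevalence)   # false positives
    tn = specificity * (1 - prevalence)         # true negatives
    fn = (1 - sensitivity) * prevalence         # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Sensitivity 81% and specificity 94% (as quoted above), with an
# assumed pretest prevalence of 25%, yield PPV ~0.82 and NPV ~0.94,
# close to the figures in the text.
ppv, npv = predictive_values(0.81, 0.94, 0.25)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

Note that the predictive values, unlike sensitivity and specificity, shift with the pretest prevalence, which is why the test is most useful for ruling out disease rather than confirming it.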
Much of the software is based on data from Caucasian study groups, and the assumptions made may not hold for other populations.

Dual-source CT scanners, introduced in 2005, allow higher temporal resolution, reducing motion blurring at high heart rates and potentially requiring a shorter breath-hold time. This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take heart-rate-lowering medication.

Abdominal and pelvic CT
CT is a sensitive method for diagnosis of abdominal diseases. It is used frequently to determine the stage of a cancer and to follow progress, and it is also a useful test for investigating acute abdominal pain. Renal/urinary stones, appendicitis, pancreatitis, diverticulitis, abdominal aortic aneurysm, and bowel obstruction are conditions that are readily diagnosed and assessed with CT. CT is also the first line for detecting solid organ injury after trauma.

Oral and/or rectal contrast may be used depending on the indications for the scan. A dilute (2% w/v) suspension of barium sulfate is most commonly used; the concentrated barium sulfate preparations used for fluoroscopy (e.g. barium enema) are too dense and cause severe artifacts on CT. Iodinated contrast agents may be used if barium is contraindicated (e.g. where there is suspicion of bowel injury). Other agents may be required to optimize the imaging of specific organs, such as rectally administered gas (air or carbon dioxide) for a colon study, or oral water for a stomach study.

CT has limited application in the evaluation of the pelvis; for the female pelvis in particular, ultrasound is the imaging modality of choice. Nevertheless, it may be part of abdominal scanning (e.g. for tumors), and it has uses in assessing fractures.

CT is also used in osteoporosis studies and research alongside DXA scanning. Both CT and DXA can be used to assess bone mineral density (BMD), which is used to indicate bone strength, although CT results do not correlate exactly with DXA (the gold standard of BMD measurement). CT is far more expensive than DXA and subjects patients to much higher levels of ionizing radiation, so it is used infrequently for this purpose.

Advantages and hazards

Advantages over projection radiography (see Radiography)
First, CT completely eliminates the superimposition of images of structures outside the area of interest. Second, because of the inherent high-contrast resolution of CT, differences between tissues that differ in physical density by less than 1% can be distinguished. Third, data from a single CT imaging procedure, consisting of either multiple contiguous scans or one helical scan, can be viewed as images in the axial, coronal, or sagittal planes, depending on the diagnostic task. This is referred to as multiplanar reformatted imaging.

Radiation exposure
CT is regarded as a moderate-to-high-radiation diagnostic technique.
While technical advances have improved radiation efficiency, there has been simultaneous pressure to obtain higher-resolution imaging and to use more complex scan techniques, both of which require higher doses of radiation. The improved resolution of CT has permitted the development of new investigations, which may have advantages: compared to conventional angiography, for example, CT angiography avoids the invasive insertion of an arterial catheter and guidewire, and CT colonography may be as good as barium enema for the detection of tumors while using a lower radiation dose.

The greatly increased availability of CT, together with its value for an increasing number of conditions, has been responsible for a large rise in popularity. So large has been this rise that, in the most recent comprehensive survey in the UK, CT scans constituted 7% of all radiologic examinations but contributed 47% of the total collective dose from medical X-ray examinations in 2000/2001 (Hart & Wall, European Journal of Radiology 2004;50:285-291). Increased CT usage has led to an overall rise in the total amount of medical radiation used, despite reductions in other areas.

The radiation dose for a particular study depends on multiple factors: volume scanned, patient build, number and type of scan sequences, and desired resolution and image quality.

Typical scan doses
|Examination|Typical effective dose (mSv)|
|Chest, abdomen and pelvis|9.9|
|Cardiac CT angiogram|6.7-13|
|CT colonography (virtual colonoscopy)|3.6-8.8|

Adverse reactions to contrast agents
Because CT scans rely on intravenously administered contrast agents in order to provide superior image quality, there is a low but non-negligible level of risk associated with the contrast agents themselves. Certain patients may experience severe and potentially life-threatening allergic reactions to the contrast dye.

The contrast agent may also induce kidney damage. The risk of this is increased in patients who have preexisting renal insufficiency, preexisting diabetes, or reduced intravascular volume. In general, if a patient has normal kidney function, the risk of contrast nephropathy is negligible. Patients with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT (e.g. MRI). Perhaps paradoxically, patients with severe renal failure requiring dialysis do not require special precautions, as their kidneys have so little function remaining that any further damage would not be noticeable, and the dialysis will remove the contrast agent.

X-ray slice data is generated using an X-ray source that rotates around the object, with X-ray sensors positioned on the opposite side of the circle from the source. Many data scans are taken progressively as the object is gradually passed through the gantry, and they are combined by the mathematical procedure known as tomographic reconstruction. Newer machines with faster computer systems and newer software strategies can process not only individual cross-sections but continuously changing cross-sections as the gantry, with the object to be imaged, is slowly and smoothly slid through the X-ray circle; these are called helical or spiral CT machines. Their computer systems integrate the data of the moving individual slices to generate three-dimensional volumetric information (a 3D-CT scan), in turn viewable from multiple perspectives on attached CT workstation monitors.

In conventional CT machines, an X-ray tube and detector are physically rotated behind a circular shroud; in electron beam tomography (EBT), the tube is far larger and higher-powered to support the high temporal resolution. The electron beam is deflected in a hollow, funnel-shaped vacuum chamber, X-rays are generated when the beam hits a stationary target, and the detector is also stationary.

The data stream representing the varying radiographic intensity reaching the detectors on the opposite side of the circle during each sweep is then computer-processed to calculate cross-sectional estimations of the radiographic density, expressed in Hounsfield units. Sweeps cover 360 degrees, or just over 180 degrees, in conventional machines, and 220 degrees in EBT.
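Tomographic reconstruction itself can be sketched compactly. Below is a minimal filtered back-projection in Python with numpy and scipy, purely for illustration: the function names and conventions are assumptions, and production scanners use far more sophisticated (increasingly iterative) algorithms.

```python
import numpy as np
from scipy.ndimage import rotate

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct a 2-D slice from a sinogram of shape
    (n_angles, n_detectors) using a Ram-Lak (ramp) filter."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))  # ramp filter in frequency space
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, angles_deg):
        smear = np.tile(proj, (n_det, 1))  # smear each projection across the plane
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

# Toy check: forward-project a square phantom, then reconstruct it.
n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = np.array([rotate(phantom, -t, reshape=False, order=1).sum(axis=0)
                     for t in angles])
recon = filtered_back_projection(sinogram, angles)
```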
CT is used in medicine as a diagnostic tool and as a guide for interventional procedures. Sometimes contrast materials, such as intravenous iodinated contrast, are used; these are useful for highlighting structures such as blood vessels that would otherwise be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues.

Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. Each pixel is displayed according to the mean attenuation of the tissue(s) it corresponds to, on a scale from -1024 to +3071 on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view; when the CT slice thickness is also factored in, the unit becomes three-dimensional and is known as a voxel.

The phenomenon whereby one part of the detector cannot distinguish between different tissues is called the partial volume effect: a large amount of cartilage and a thin layer of compact bone, for example, can cause the same attenuation in a voxel as hyperdense cartilage alone.

Water has an attenuation of 0 Hounsfield units (HU), while air is -1000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium usually measures about +1000 HU, while iron or steel can completely extinguish the X-ray beam and is therefore responsible for the well-known line artifacts in computed tomograms.

Windowing is the process of using the calculated Hounsfield units to make an image. The various radiodensity amplitudes are mapped to 256 shades of gray. These shades of gray can be distributed over a wide range of HU values to get an overview of structures that attenuate the beam to widely varying degrees. Alternatively, they can be distributed over a narrow range of HU values (called a narrow window) centered on the average HU value of the particular structure to be evaluated; in this way, subtle variations in the internal makeup of the structure can be discerned. This is a commonly used image-processing technique known as contrast compression. For example, to evaluate the abdomen in order to find subtle masses in the liver, one might use liver windows: choosing 70 HU as an average value for liver and a window width of 170 HU (85 HU above the 70 HU average and 85 HU below it), the liver window extends from -15 HU to +155 HU. All the shades of gray for the image are distributed across this range of Hounsfield values; any HU value below -15 would be pure black, and any HU value above +155 would be pure white in this example. Using the same logic, bone windows would use a wide window (to evaluate everything from the fat-containing medullary bone that holds the marrow to the dense cortical bone), with the center, or level, set at a value in the hundreds of Hounsfield units.
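The Hounsfield definition and the liver-window example above translate directly into code. A minimal numpy sketch, with illustrative function names:

```python
import numpy as np

def mu_to_hu(mu, mu_water, mu_air):
    """Convert linear attenuation coefficients to Hounsfield units:
    water maps to 0 HU and air to -1000 HU, matching the values above."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

def apply_window(hu_image, center=70.0, width=170.0):
    """Map a slice in HU to 8-bit gray levels. The defaults are the
    liver window from the text: center 70 HU, width 170 HU, so the
    displayed range runs from -15 HU (black) to +155 HU (white)."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Toy usage on a synthetic slice of random HU values:
hu_slice = np.random.uniform(-200.0, 300.0, size=(512, 512))
liver_view = apply_window(hu_slice)                          # narrow liver window
bone_view = apply_window(hu_slice, center=300, width=1500)   # a wide, bone-style window
```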
Three dimensional (3D) reconstruction
Because contemporary CT scanners offer isotropic, or near-isotropic, resolution, the display of images does not need to be restricted to the conventional axial images. Instead, it is possible for a software program to build a volume by "stacking" the individual slices one on top of the other; the program may then display the volume in an alternative manner.

Multiplanar reconstruction (MPR) is the simplest method of reconstruction. A volume is built by stacking the axial slices; the software then cuts slices through the volume in a different plane (usually an orthogonal one). Optionally, a special projection method, such as maximum-intensity projection (MIP) or minimum-intensity projection (mIP), can be used to build the reconstructed slices.

MPR is frequently used for examining the spine. Axial images through the spine will only show one vertebral body at a time and cannot reliably show the intervertebral discs; by reformatting the volume, it becomes much easier to visualise the position of one vertebral body in relation to the others.

Modern software allows reconstruction in non-orthogonal (oblique) planes, so that the optimal plane can be chosen to display an anatomical structure. This may be particularly useful for visualising the structure of the bronchi, as these do not lie orthogonal to the direction of the scan. For vascular imaging, curved-plane reconstruction can be performed: this allows bends in a vessel to be "straightened" so that the entire length can be visualised on one image or a short series of images. Once a vessel has been "straightened" in this way, quantitative measurements of length and cross-sectional area can be made, so that surgery or interventional treatment can be planned.

MIP reconstructions enhance areas of high radiodensity, and so are useful for angiographic studies. mIP reconstructions tend to enhance air spaces, so they are useful for assessing lung structure.

3D rendering techniques
Surface rendering: A threshold value of radiodensity is chosen by the operator (e.g. a level that corresponds to bone), and a threshold level is set using edge-detection image-processing algorithms. From this, a three-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from different thresholds, allowing different colors to represent each anatomical component, such as bone, muscle, and cartilage. However, the interior structure of each element is not visible in this mode of operation.

Volume rendering: Surface rendering is limited in that it will only display surfaces which meet a threshold density, and it will only display the surface closest to the imaginary viewer. In volume rendering, transparency and colors are used to allow a better representation of the volume to be shown in a single image; for example, the bones of the pelvis could be displayed as semi-transparent, so that, even at an oblique angle, one part of the image does not conceal another.
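The reformatting and projection operations described above reduce to simple array manipulations once the scan is held as a volume. A minimal numpy sketch, assuming the volume is an array of HU values indexed [axial, coronal, sagittal]; the function names and the +400 HU bone threshold (taken from the HU values quoted earlier) are illustrative:

```python
import numpy as np

def coronal_slices(volume):
    """Multiplanar reformat: view a stack of axial slices as coronal
    ones by swapping the first two axes."""
    return volume.transpose(1, 0, 2)

def mip(volume, axis=0):
    """Maximum-intensity projection: keep the brightest voxel along an
    axis; enhances high-density structures such as contrast-filled vessels."""
    return volume.max(axis=axis)

def minip(volume, axis=0):
    """Minimum-intensity projection: enhances air spaces such as airways."""
    return volume.min(axis=axis)

def surface_mask(volume, threshold=400.0):
    """Binary mask for surface rendering; +400 HU is roughly cancellous bone."""
    return volume >= threshold

# Toy usage on a synthetic volume of 40 axial slices:
vol = np.random.uniform(-1000.0, 1000.0, size=(40, 256, 256))
coronal = coronal_slices(vol)   # shape (256, 40, 256)
vessels = mip(vol)              # a single 2-D projection image
bone = surface_mask(vol)        # input to a meshing / rendering step
```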
Where different structures have similar radiodensity, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation: a manual or automatic procedure that can remove the unwanted structures from the image.

[Figure: slices of a cranial CT scan, in which the bones appear whiter than the surrounding area (whiter means higher radiodensity) and the blood vessels (arrowed) show brightly due to an injected iodine-based contrast agent. A volume rendering clearly shows the high-density bones; after a segmentation tool removes the bone, the previously concealed vessels can be demonstrated.]

History
The first commercially viable CT system was invented by Godfrey Newbold Hounsfield in Hayes, England, at THORN EMI Central Research Laboratories, using X-rays. Hounsfield conceived his idea in 1967, and it was publicly announced in 1972. It is claimed that the CT scanner was "the greatest legacy" of the Beatles: the massive profits from their record sales enabled EMI to fund scientific research. Allan McLeod Cormack of Tufts University independently invented a similar process at the University of Cape Town/Groote Schuur Hospital, and the two shared the 1979 Nobel Prize in medicine.

The original 1971 prototype took 160 parallel readings through 180 angles, each 1° apart, with each scan taking a little over five minutes. The images from these scans took 2.5 hours to be processed by algebraic reconstruction techniques on a large computer. The first production X-ray CT machine (called the EMI-Scanner) was limited to making tomographic sections of the brain, but it acquired the image data in about 4 minutes (scanning two adjacent slices), and the computation time (using a Data General Nova minicomputer) was about 7 minutes per picture. This scanner required the use of a water-filled Perspex tank with a pre-shaped rubber "head-cap" at the front, which enclosed the patient's head; the water tank was used to reduce the dynamic range of the radiation reaching the detectors (between scanning outside the head and scanning through the bone of the skull). The images were relatively low resolution, being composed of a matrix of only 80 x 80 pixels.

The first EMI-Scanner was installed in Atkinson Morley's Hospital in Wimbledon, England, and the first patient brain scan was made with it in 1972. In the US, the machine sold for about $390,000, with the first installations at the Lahey Clinic, then Massachusetts General Hospital and George Washington University, in 1973. The first CT system that could make images of any part of the body, and did not require the "water tank", was the ACTA scanner designed by Robert S. Ledley, DDS, at Georgetown University.

CT technology generations
- First generation: These CT scanners used a pencil-thin beam of radiation directed at one or two detectors. The images were acquired by a "translate-rotate" method, in which the X-ray source and the detector, in a fixed relative position, move across the patient, followed by a rotation of the X-ray source/detector combination (gantry) by one degree. In the EMI-Scanner, a pair of images was acquired in about 4 minutes, with the gantry rotating a total of 180 degrees. Three detectors were used (one of these being an X-ray source reference), each detector comprising a sodium iodide scintillator and a photomultiplier tube. Some patients had unpleasant experiences within these early scanners, due to the loud sounds and vibrations from the equipment.
- Second generation: This design increased the number of detectors and changed the shape of the radiation beam from pencil-thin to fan-shaped. The "translate-rotate" method was still used, but there was a significant decrease in scanning time; the rotation step was increased from one degree to thirty degrees.
- Third generation: These scanners made a dramatic change in the speed at which images could be obtained. In the third generation, a fan-shaped beam of X-rays is directed at an array of detectors that are fixed in position relative to the X-ray source. This eliminated the time-consuming translation stage, allowing scan time to be reduced, initially, to 10 seconds per slice. This advance dramatically improved the practicality of CT: scan times became short enough to image the lungs or the abdomen, whereas previous generations had been limited to the head or the limbs. Patients have reported more pleasant experiences with third- and fourth-generation CT scanners because of greatly reduced noise and vibration compared to earlier models.
- Fourth generation: This design was introduced roughly simultaneously with the third generation and gave approximately equal performance. Instead of a row of detectors which moved with the X-ray source, fourth-generation scanners used a stationary 360-degree ring of detectors; the fan-shaped X-ray beam rotated around the patient, directed at detectors in a non-fixed relationship.

Bulky, expensive and fragile photomultiplier tubes gradually gave way to improved detectors. A xenon gas ionization chamber detector array was developed for third-generation scanners, which provided greater resolution and sensitivity. Eventually, both of these technologies were replaced with solid-state detectors: rectangular, solid-state photodiodes coated with a fluorescent rare-earth phosphor. Solid-state detectors were smaller, more sensitive and more stable, and were suitable for both third- and fourth-generation designs. On an early fourth-generation scanner, 600 photomultiplier tubes, ½ in. (12 mm) in diameter, could fit in the detector ring; three photodiode units could replace one photomultiplier tube. This change increased both acquisition speed and image resolution. The method of scanning was still slow, however, because the X-ray tube and control components interfaced by cable, limiting the scan frame rotation.

Initially, fourth-generation scanners carried a significant advantage: the detectors could be automatically calibrated on every scan. The fixed geometry of third-generation scanners was especially sensitive to detector mis-calibration (causing ring artifacts), and because the detectors were subject to movement and vibration, their calibration could drift significantly. All modern medical scanners are of third-generation design; modern solid-state detectors are sufficiently stable that calibration for each image is no longer required. The fourth-generation scanners' inefficient use of detectors made them considerably more expensive than third-generation scanners, and they were more sensitive to artifacts, because the non-fixed relationship to the X-ray source made it impossible to reject scattered radiation.
Another limiting factor in image acquisition was the X-ray tube. The need for long, high-intensity exposures and very stable output placed enormous demands on both the tube and the generator (power supply). Very high performance rotating-anode tubes were developed to keep up with the demand for faster imaging, as were the regulated 150 kV switched-mode power supplies needed to drive them; modern systems have power ratings up to 100 kW.

Slip-ring technology replaced the spooled-cable technology of older CT scanners, allowing the X-ray tube and detectors to spin continuously. Combined with the ability to move the patient continuously through the scanner, this refinement is called helical CT or, more commonly, spiral CT.

Multi-detector-row CT systems further accelerated scans by allowing several images to be acquired simultaneously. Modern scanners are available with up to 64 detector rows or output channels (depending on the technology used by the manufacturer), making it possible to complete a scan of the chest in a few seconds: an examination that once required 10 separate breath-holds of 10 seconds each can now be completed in a single 10-second breath-hold. Multi-detector CT can also provide isotropic resolution, permitting cross-sectional images to be reconstructed in arbitrary planes, an ability similar to MRI. Greater anatomical coverage in less time is one of the key features of the latest generation of MDCT scanners, although better spatial resolution matters more than coverage alone for the quality of reconstructed images; the latest MDCT scanners, which use a flying X-ray tube focal spot in the z-axis direction, show better image resolution.

A different approach was used for a particular type of dedicated cardiac CT technique called electron-beam CT (also known as ultrafast CT, and occasionally fifth-generation CT). With a temporal resolution of approximately 50 ms, these scanners could freeze cardiac and pulmonary motion, providing high-quality images. Only one manufacturer offered these scanners (Imatron, later GE Healthcare), and few were ever installed, primarily due to the very high cost of the equipment and its single-purpose design. The rapid development of MDCT has significantly reduced the advantage of EBCT over conventional systems: contemporary MDCT systems have temporal resolution approaching that of EBCT, but at lower cost and with much higher flexibility, so MDCT is usually the preferred choice for new installations.

Improved computer technology and reconstruction algorithms have permitted faster and more accurate reconstruction. On early scanners, reconstruction could take several minutes per image; a modern scanner can reconstruct a 1,000-image study in under 30 seconds. Refinements to the algorithms have also reduced artifacts.

Dual-source CT uses two X-ray sources and two detector arrays offset by 90 degrees. This reduces the time needed to acquire each image to about 0.1 seconds, making it possible to obtain high-quality images of the heart without heart-rate-lowering drugs such as beta blockers; a dual-source multi-detector-row scanner can complete an entire cardiac study within a single 10-second breath-hold.

Volumetric CT is an extension of multi-detector CT, currently at the research stage. Current MDCT scanners sample a 4 cm wide volume in one rotation; volumetric CT aims to increase the scan width to 10-20 cm, with current prototypes using 256 detector rows.
Potential applications include cardiac imaging (a complete 3D dataset could be acquired in the time between two successive heartbeats) and 3D cine-angiography.

In recent years, tomography has also been introduced at the micrometer level, where it is known as microtomography; these machines are currently suitable only for smaller objects or animals and cannot yet be used on humans.

See also
- Cardiology diagnostic tests and procedures
- Computed Tomography Laser Mammography (CTLM)
- Medical ultrasonography
- Magnetic resonance imaging (MRI)
- Positron emission tomography (PET)
- Single photon emission computed tomography (SPECT)
- Electron-beam computed tomography (EBCT)
- Digitally Reconstructed Radiograph
- Synchrotron X-ray tomographic microscopy

This page uses Creative Commons Licensed content from Wikipedia (view authors).