| content (string) | score (float64) | source (string) |
|---|---|---|
It contains four printable activities (9 pages in total). In this mini pack you will find:
(Use mini marshmallows to count how many snowballs the children are throwing!)
(Use snowflake counters to cover the correct letter.)
(An emergent reader that focuses on the sight words YOU and CAN.)
(It's a battle of the North and South Poles! Who will win, the penguin or the polar bear? Only the roll of the die can decide!)
Hope you enjoy and PLEASE check back for my other upcoming units, Animals in Winter, North Pole and South Pole!
| 0.8291 | FineWeb |
“When should I start teaching piano scales?”
Good question! Unfortunately, there’s no one answer that works for all students or all teachers. When to start scale work depends on a number of things, including the student’s age and playing level, as well as the teacher’s chosen curriculum. Sometimes teachers will simply assign scales as they come up in a method book or are required in an exam syllabus.
An understanding of scales, including the piano scale chart, and the ability to construct them is crucial to a student’s ability to read music, to “hear” music in the inner ear, and to play “by ear.” More importantly, the earlier these concepts are learned, the more anchored they will be in the student’s mind and ear.
A logical way of teaching scales so that they are easily comprehended and easily remembered uses the circle of 5ths. This method also gives the student a visual reminder of many different points of music theory, including scale construction.
Before using the circle of 5ths, there are a few important concepts each student should master. These theory concepts are a necessary part of teaching scales to any student.
Know the music alphabet forwards (ABCDEFG) and backward (GFEDCBA). This is one of the first things that all students should learn when they begin to read music. Being able to move in both directions is crucial, as the music moves upward and downward, a student should know the name of the next pitch in either direction. If students know the tune to “Twinkle, Twinkle, Little Star,” the seven letters of the music alphabet going both directions can be sung with the first phrase!
Know what sharp, flat, and natural notes are. Since only a few scales, like C Major or A natural minor, use only natural pitches (the white keys of the piano), the student should know what sharps and flats signify. If you teach an instrument other than the piano, you may find a visual aid like a keyboard helpful for explaining that sharp pitches move up and flat pitches move down. Teaching piano scales is always more visual and tactile in nature because the black keys create “landmarks” for each scale, making it easier to see a “shape” of the scale.
Know what keys and key signatures are. The concepts of keys and key signatures are both a means of and a result of using the circle of 5ths in scale construction. Although there are ways to form scales without using a key signature, having the key signatures on the student’s copy of the circle not only reinforces the identification of the key with the key signature but also accelerates the student’s learning of the scale notes.
Know the order of sharps or flats that appear in a key signature. This is an important part of quickly and easily constructing major and natural minor scales using the circle of 5ths. Teaching major scales should be primary for all instruments since all other scales can be formed by changing specific notes of the major or natural minor scale. A simple mnemonic for the order of sharps (FCGDAEB) is Father Charles Goes Down And Ends Battle. The fun part is that the order of flats (BEADGCF) is the same sentence in reverse word order: Battle Ends And Down Goes Charles’ Father!
Choose a key and check the key signature. Ask the student: “How many sharps or flats are in the key signature?” Then use the order of sharps or flats to determine which notes are sharped or flatted.
Say the key’s alphabet (all the letters from the first note of the scale to its repeat an octave higher, e.g. A to A). Add sharps or flats as indicated by the order of sharps or flats. Be sure the student actually spells the scale aloud! The more ways the notes are expressed (read, spoken, written, played), the more ingrained they become in the student’s mind.
Write the scale (ascending and descending) with accidentals first. Then write the scale using the key signature, as that is the way the scale may appear in a repertoire piece, without accidentals.
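To make the procedure above concrete, here is a minimal Python sketch (not from the article) that spells a major scale by following exactly these steps: look up the key signature, take its sharps or flats from the standard orders, then say the key's alphabet and add accidentals. The key names and the key-signature table are standard music theory, but the code itself is only an illustration.

```python
# A minimal sketch (not from the article) of the "key alphabet + key signature"
# method described above: look up the key on the circle of 5ths, take its sharps
# or flats from the standard orders, then spell the key's alphabet with accidentals.

ORDER_OF_SHARPS = list("FCGDAEB")   # Father Charles Goes Down And Ends Battle
ORDER_OF_FLATS = list("BEADGCF")    # Battle Ends And Down Goes Charles' Father

# Number of sharps (positive) or flats (negative) in each major key signature.
KEY_SIGNATURES = {
    "C": 0, "G": 1, "D": 2, "A": 3, "E": 4, "B": 5, "F#": 6, "C#": 7,
    "F": -1, "Bb": -2, "Eb": -3, "Ab": -4, "Db": -5, "Gb": -6, "Cb": -7,
}

def major_scale(key):
    """Spell a major scale, tonic to tonic, using the key signature."""
    count = KEY_SIGNATURES[key]
    sharps = set(ORDER_OF_SHARPS[:count]) if count > 0 else set()
    flats = set(ORDER_OF_FLATS[:-count]) if count < 0 else set()

    alphabet = "ABCDEFG"
    start = alphabet.index(key[0])
    letters = [alphabet[(start + i) % 7] for i in range(8)]  # tonic to tonic

    def spell(letter):
        if letter in sharps:
            return letter + "#"
        if letter in flats:
            return letter + "b"
        return letter

    return [spell(letter) for letter in letters]

print(major_scale("D"))   # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#', 'D']
print(major_scale("Eb"))  # ['Eb', 'F', 'G', 'Ab', 'Bb', 'C', 'D', 'Eb']
```

Having the student check their written, spoken, and played scales against a chart built this way reinforces the same "say the alphabet, add the accidentals" habit described above.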
1. Start with C major, as there are no sharps or flats in the key. This scale could be used to teach a beginner about scales even before the student plays any scales. Playing scales requires specific instrumental techniques, but learning about scales can start much earlier.
2. Singing scales can be a wonderful ear training tool! Start at the very beginning of the study by having the student sing the letter names of scales (also helps with note identification when paired with written scales). Then, as you progress in teaching intervals, have the student sing the numbers 1-7 (8 is merely a repeat of the first step). This will help with teaching scale degrees and the concepts of tonic (I/1) and dominant (V/5). The solfege syllables (Do, Re, Mi, Fa, Sol, La, Ti) could also be used at this point.
3. Move quickly around the circle. We suggest moving clockwise, moving through the sharp keys first, as those keys start on natural notes, not sharps or flats. Don’t wait until one scale is perfect before moving on to another (about 1-2 scales every 2-4 weeks). Keep reviewing the scales as a matter of course in the lesson.
4. Don’t spend a lot of lesson time on the scale itself. Move into the repertoire music and identify the scale patterns as soon as possible.
5. Review scales and scale construction at every lesson. Set up an Excel or Google Sheets spreadsheet to keep track of what you review to make sure all keys are being covered equally.
6. Have the student create their own “reference book” of scales. For instance, a piano student could create a set of piano scale charts that contain a staff with the key signature and the written scale as well as the fingering for each hand. They could create a separate chart for each scale or for each key (with multiple scales on one page).
Making Scales Fun to Learn
1. Play and sing the scales with different rhythms. This can also be a way to review rhythmic notation when paired with rhythm flashcards.
2. Teach segments of melodies that use scale patterns. Well-known examples are “Joy to the World” and “Do-Re-Mi”.
3. Use simple manipulatives to reinforce scale notes and patterns. You can create letter blocks or flashcards and have students place them in order for the scale you’re working on.
4. MusiClock is also a fun way to work on scale note names. It has fun recorded rhythms that the student can play along with. Students can also improvise on the notes of a scale as they play with the rhythms!
Teaching scales is an integral part of teaching the language of music. Students from beginners through intermediate levels and beyond will benefit from studying both the theory behind scales as well as the technique required to play them. Repertoire becomes easier to understand, learn, and play with a better knowledge of how each piece is put together, starting with the scale of the piece.
When do I start teaching piano scales?
| 0.9866
|
FineWeb
|
Image management takes a lot of time when uploading and downloading large disk images to and from Glance. By using rich features such as copy-on-write, backup, etc. provided by storage arrays, we can make image management more efficient.
We are proposing an improvement of the Cinder volume backend for Glance. This realizes:
- Storing image data in Cinder volumes (upload/download).
- Sharing volume data among tenants.
- Quick volume-boot of virtual and bare metal machines using copy-less volume creation from images.
- Copy-less image creation from volumes for backups, image sharing etc.
To fully enable auto-scaling for applications, we need to boot instances as quickly as possible to respond to changes in demand. By leveraging copy-on-write based volume creation, instances can be booted immediately without copying image data. We also introduce quick boot for bare metal servers without copying images by combining this with Ironic-Cinder support.
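For illustration only (this is not part of the proposal text): a Glance deployment that uses the existing Cinder store driver is configured roughly as below, and the copy-less volume-boot workflow can then be exercised with the standard OpenStack CLI. Exact option names vary by release, and the image, flavor and network names are placeholders.

```ini
# glance-api.conf (illustrative): keep image data in Cinder volumes
[glance_store]
stores = cinder,http
default_store = cinder
```

```console
# Copy-on-write boot volume created from an image, then boot from that volume.
# "cirros", "m1.small" and "private" are placeholder names.
$ openstack volume create --image cirros --size 10 boot-vol
$ openstack server create --volume boot-vol --flavor m1.small --network private vm1
```

Whether the volume creation is truly copy-less depends on the backing storage array supporting copy-on-write clones, which is the optimization this proposal relies on.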
| 0.9478 | FineWeb |
These guys are bats*it crazy, but it's Spooky Season after all!
Spooky season is upon us, and that means that it's time for us to pay respect to the bands and artists that genuinely terrify us.
The world of music is such a diverse and creatively open environment, which is both a gift and a curse. It's a gift in that self-expression, no matter how horrid, is (usually) welcomed with open arms, and it's a curse because self-expression, no matter how horrid, is (usually) welcomed with open arms. Let's take a look at the world's spookiest musical acts and pay homage to those that have scarred us forever!
You can't talk about scary musicians without discussing the antics of Corey Taylor's 17-piece metal ensemble: Slipknot. Those spooky masks aside, the guys have all come clean about the absolutely bats*it things they've done as a band. From getting pissed on by two girls to huffing the scent of a jarred bird's corpse to get high on stage, these guys have a gauntlet of horror stories seemingly with no end. Also, let's not forget that they got into a fight using their own feces. Rock on guys, I guess.
| 0.7124 | FineWeb |
Power is found in the ability to control and master all the energies of the universe. The path of the novice begins in controlling the energy of the mind, body and progresses to mastering the energies of the earth, sun, moon and larger universe. As ability extends, so too does the power that is realized. Cycles of ability spiral through a grid of sacred geometry and in time the novice becomes the adept. The forms of power dissolve in on themselves and as deeper mastery is realized internal and external energies lose their form. The consciousness that was once mind, body and spirit becomes one, yet at the same time expands outwards. The ability to control energies in all their forms is realized, yet the separateness, the uniqueness and the finite forms are perceived in even more depth and exquisite detail. This is the mountain peak where all religions, spiritual traditions and Esoteric schools find that their true essence meets.
The ability to command the universe is the ability to command the consciousness in full. What is full is expanded to its outer limit as the portals to further realization are found, expanded once more and illuminated.
There is no longer any separation between the desire to create and the creation, as the space in between has been mastered. There is no longer space between ignorance and Enlightenment because the illusion between these has been bridged.
The training of technique gives way to the understanding of the self and of truth. The understanding of form gives way to the understanding of creation. This understanding is what opens the consciousness to full Enlightenment.
| 0.8045 | FineWeb |
After losing his sight to smallpox in 1759 at the age of 2, John Gough developed a heightened sense of touch. The budding naturalist soon learned to identify plants by feel, touching their hairs with his lower lip and their stamens and pistils with his tongue. So when as an adult he quickly stretched a piece of natural rubber and felt its sudden warmth on his lip — and its subsequent coolness as it relaxed — he gained what he considered the most direct and convincing proof of a curious phenomenon.
He described his observations in 1802, providing the first record, in English at least, of what’s now known as the elastocaloric effect. It’s part of a broader category of caloric effects, in which some external trigger — a force, pressure, a magnetic or electric field — induces a change in a material’s temperature.
But caloric effects have become more than a curiosity.
Over the past couple of decades, researchers have identified increasingly mighty caloric materials. The ultimate goal is to build environmentally friendly refrigerators and air conditioners — caloric cooling devices won’t leak harmful refrigerants, which can be thousands of times more potent than carbon dioxide as a greenhouse gas. But better cooling devices require better materials.
The more a material can change its temperature, the more efficient it can be. And in the last year, researchers have identified two unique types of materials that can change by an unprecedented amount. One responds to an applied force, the other to pressure. They are both capable of temperature changes — “delta T” for short — of a dramatic 30 degrees Celsius or more.
“Who would’ve thought you would get a material to give you a delta T of 30 by itself?” said Ichiro Takeuchi, a materials scientist at the University of Maryland, College Park, who wasn’t part of the new research. “That’s enormous.”
Gough didn’t know it, but when he stretched his piece of rubber more than two centuries ago, he lined up the long molecules inside. The alignment reduced the disorder in the system — disorder measured by a quantity called entropy.
According to the second law of thermodynamics, the total entropy of a closed system must increase, or at least remain constant. If the entropy of the rubber’s molecular configuration decreases, then the entropy must increase elsewhere.
In a piece of rubber like Gough’s, the increase in entropy happens in the vibrational motion of the molecules. The molecules shake, and this boost in molecular movement manifests itself as heat — a seemingly hidden heat called latent heat. If the rubber is stretched quickly enough, the latent heat stays in the material and its temperature goes up.
Many materials have at least a slight elastocaloric effect, warming up a bit when squeezed or stretched. But to reach temperature changes large enough to be useful in a cooling system, the material would need a much larger corresponding change in entropy.
The best elastocaloric materials so far are shape memory alloys. They work because of a phase change, akin to liquid water freezing into ice. In one phase, the material can warp and stay warped. But if you crank up the heat, the alloy’s crystal structure transitions into a more rigid phase and reverts to whichever shape it had before (hence the name shape memory alloy).
The shift in the crystal structure between these two phases causes an entropy change. While entropy is related to a system’s disorder, it’s more precisely described as a measure of the number of configurations a system can have. The fewer the configurations, the less entropy there is. Think about a shelf of books: There’s only one way for the books to be alphabetized, but many ways for them to be un-alphabetized. Thus, a shelf of alphabetized books is more orderly and has less entropy.
In a shape memory alloy like nickel-titanium —which has shown one of the biggest elastocaloric effects — the crystal structure of the rigid phase is cubic. The pliable phase forms rhomboids, which are diamondlike elongated cubes.
These rhomboids have fewer possible configurations than the cubes. Consider that a square will remain unchanged if rotated through four possible angles: 90, 180, 270 or 360 degrees. A rhombus, on the other hand, will look the same only after two such rotations: 180 and 360 degrees.
Since the pliable phase has fewer possible configurations, it has less entropy. When an external force pushes on the alloy while it’s in its rigid phase, the metal transitions to its pliable, lower-entropy phase. As with Gough’s rubber, an entropy drop in the metal’s structure requires a rise in the entropy of its atomic vibrations, which heats the material.
In an air conditioner or refrigerator, you would then have to quickly remove this heat while keeping the alloy in its pliable, low-entropy phase. Once the force is removed, the alloy returns to its rigid, higher-entropy phase. But for that to happen, the atomic structure must acquire entropy from the alloy’s vibrating atoms. The atoms vibrate less, and because such vibrations are simply heat, the alloy’s temperature drops. The cold metal can then cool its surroundings.
Progress on these materials has been steady. In 2012, Takeuchi and colleagues measured a temperature change of 17 degrees Celsius in nickel-titanium wires. Three years later, Jaka Tušek of the University of Ljubljana and others observed a change of 25 degrees in similar wires.
Then last year, a group based at the University of Science and Technology Beijing discovered a new shape memory alloy of nickel-manganese-titanium, which boasts what they called a “colossal” temperature change of 31.5 degrees. “So far, this material is the best,” said Antoni Planes, a solid-state physicist at the University of Barcelona who was part of the team.
What makes it so good? During a phase transition, nickel-manganese alloys shrink. Because volume corresponds to the number of possible atomic configurations of the material, a reduction in volume leads to a further reduction in entropy. “This extra contribution is what makes this material interesting,” Planes said.
Cool Under Pressure
Shape memory alloys have limitations, though. Notably, if you squeeze a piece of metal over and over, the material is going to fatigue.
Partly for this reason, researchers have also pursued “barocaloric” materials, which heat up when you apply pressure. It’s the same basic principle: Pressure induces a phase change, lowering entropy and heating the material.
One intriguing material is neopentylglycol, a type of plastic crystal. This material is soft and deformable, consisting of molecules loosely bound in a crystal structure.
Neopentylglycol’s molecules are round and arranged in a three-dimensional lattice. They interact with one another only weakly and can swivel into roughly 60 different orientations. But apply enough pressure and the molecules become stuck. With fewer possible configurations, the material’s entropy drops.
A plastic crystal’s squishiness means that squeezing it reduces its volume, decreasing entropy even more. “Because they are, in a way, between solid and liquid, they can display larger changes in entropy when you apply pressure,” said Xavier Moya, a solid-state physicist at the University of Cambridge.
Last year, two teams achieved the largest barocaloric effects on record. Neither team directly measured a temperature change, but a European team that included Planes and Moya reported an entropy change of 500 joules per kilogram per kelvin — the largest ever for a solid, on a par with entropy changes in commercial fluid refrigerants. They calculated a corresponding temperature change of at least 40 degrees. Another team based at the Shenyang National Laboratory for Materials Science in China reported an entropy change of 389 J/kg/K.
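As a rough sanity check on those numbers (this is not a calculation from either paper), the adiabatic temperature change implied by an isothermal entropy change can be estimated as delta T roughly equal to T times delta S divided by the specific heat. The specific heat below is an assumed placeholder for an organic solid, so the result should only be read as landing in the same ballpark as the reported figure of at least 40 degrees.

```python
# Back-of-envelope estimate: adiabatic temperature change from an isothermal
# entropy change, delta_T ~= T * delta_S / c_p.
T = 300.0        # operating temperature in kelvin (roughly room temperature)
delta_S = 500.0  # reported entropy change, J/(kg*K)
c_p = 2500.0     # ASSUMED specific heat, J/(kg*K); a placeholder, not a measured value

delta_T = T * delta_S / c_p
print(f"Estimated adiabatic temperature change: {delta_T:.0f} K")  # ~60 K with these inputs
```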
But many practical challenges remain. While barocaloric materials are less susceptible to fatigue than elastocaloric materials, the new milestones required colossal pressures of thousands of atmospheres. Such pressures also require the material to be sealed. “It’s difficult to exchange heat between this material and the surroundings if you seal the whole system,” Tušek said.
Indeed, heat exchange isn’t straightforward, Moya said. But he’s working on some proprietary systems for a barocaloric refrigeration company he co-founded called Barocal, which is a finalist for the Global Cooling Prize, an international competition to find sustainable cooling technologies. Takeuchi, meanwhile, founded Maryland Energy and Sensor Technologies in 2009 to commercialize elastocaloric cooling. The commercial products are being developed with copper-based shape memory alloys, which are softer and don’t need as much force as nickel-titanium alloys.
By contrast, Planes and his longtime collaborator Lluís Mañosa are focusing on multicalorics, which respond to multiple stimuli, such as both force and a magnetic field. Multicaloric devices would likely be more complex, but multiple stimuli could drive even greater entropy and temperature changes with higher efficiency. “Prospects for the future are very good,” Planes said. “But for the moment we are at the beginning.”
This article was reprinted on Wired.com.
| 0.7788 | FineWeb |
Rasa Navickaitė is a Doctoral Candidate in Comparative Gender Studies with a Specialization in History at the Central European University, Budapest, Hungary.
She has a Research Master degree in Gender and Ethnicity studies from Utrecht University, the Netherlands, and a Bachelor degree in political science from Vilnius University, Lithuania.
Rasa is a board member of ATGENDER: The European Association for Gender Research, Education, and Documentation.
Rasa's dissertation is an intellectual biography of Marija Gimbutas (1921-1994) – a renowned Lithuanian-American archaeologist and an advocate of the theory of the peaceful, egalitarian, gynocentric and Goddess-centered pre-historic civilization of “Old Europe”. The utopian vision of Gimbutas became a source of inspiration for a variety of socio-political movements between the 1970s and 1990s, from the transnational Goddess spirituality movement to post-socialist ethno-nationalism. Her person and her writings also played a role in diverse national settings: from the United States, where she had lived since 1949, to her native Lithuania, with which she maintained close contact despite the Cold War.
Studying Gimbutas’ life as a transnational life provides an entrance into analyzing the cultural transfers between the West and the East in the twentieth century, beyond the metaphor of the Iron Curtain. Gimbutas’ life and work constitute a perfect site for interrogating the importance of gender in local, national and transnational power-knowledge relationships. This dissertation combines the insights from transnational history and gender studies in the investigation of the reception, interpretation and appropriation of Gimbutas’ persona and her scientific and ideological writings in the context of the Cold War and its aftermath.
Rasa is a recipient of the Dissertation Grant for Graduate Students 2017, awarded by the Association for the Advancement of Baltic Studies (AABS).
| 0.6261 | FineWeb |
How to Help
I feel like a different person, and it is as if I am just opening my eyes to life. Thank you for helping me, my goal is to learn well my Spanish and English language. Thank you CENAES!
My parents never let me go to school. My family and friends write to me on my cell phone and I can't answer, I feel so sorry for them. Today, I want to learn how to write my name and read and write.
My husband told me that I have to learn to read and write, because my 5-year-old daughter is going to need me to help her with her homework.
| 0.8067 | FineWeb |
The fast food of communication, today’s tweets, texts, and social media posts often tout alphabet letters standing in as imposters for whole words and intentionally misspelled words like thnx instead of thanks. Fond of shortcuts, today’s digital kids rarely text a complete sentence, such as “Are you okay?” Instead, they would use their thumbs to key in “r u k?” Sounds Neanderthal in a way, or at the minimum, similar to what the Greeks called Phonography— sound writing—where vowels were optional with only consonants needed to determine a word.
Speed is the name of the game: Instant messaging, speed dialing, and instant gratification. Pair this with the challenge of keying in words from micro-keyboards and it makes sense for how shorthand became the norm for texting.
And then along came Twitter, a networking platform to connect with others in bird chirping fashion. Enter the 140 characters.
Is this all bad?
No, not if it’s used properly.
Public posts on social media, such as Twitter, Facebook, and Pinterest, are permanent, remaining on the Internet forever. Wide networks of people, including employers and future employers, have access to these posts and can see one’s use of the English language. The argument could be made that this would encourage use of excellent grammar and give rise to posts exhibiting proper-grammar smorgasbords. On the contrary, grammar is on life support, relegated to the basement.
What does this brevity of expression so rampant in the mobile-messaging wave mean for the future of grammar? In the business world and social circles, people are still judged by how they speak and write. Literacy image still carries enormous weight. Yet, the “cool factor” of communicating fluently in hashtag speak, the fashionable mobile-device runway today’s kids pose and strut on everyday, cannot be ignored.
For self-esteem, it’s important for kids to fit in, at least somewhat, with this world-wide trend. What can parents do to enable their kids to be successful in future career and social ventures while enjoying the cool factor now?
1) Family Blog: Select a blog topic once a month, one that interests your family and has appeal to the appropriate age ranges within your family. One night a week, have each family member contribute two sentences or a short paragraph that exhibits perfect grammar. Encourage complex sentences that are punctuated properly.
2) Family Game Night: Host a “Spelling Bee” with prizes for winners.
3) Family Dinner Conversations: In the middle of the table, place a bowl filled with tweets or texts messages. Family members can choose one to read and then discuss examples by translating into a complete sentence with appropriate adjectives and adverbs.
4) Family Gratitude: “I am grateful for…” Whether electronic or handwritten, require kids to write two short gratitude messages a month to family members: parents, siblings, grandparents, etc. Messages should include abbreviated hashtag versions and translations written in complete sentences with correct spelling and punctuation. Example: #Thnx #Din #Yum. “Mom, I am grateful that you are such a good cook. The roasted chicken was my favorite dinner this week; it was delicious!”
5) Family Crest of Honor: Encourage your children to speak properly. Correct them immediately upon hearing mistakes, but not in a punitive way; strive to be pleasantly instructive. Avoid correcting them in front of peers after five years of age, but note the gaffe and take them aside later. Use your family name to instill pride: “The Greens speak properly. You are a Green; it’s important that you…”
Let’s not send grammar rules to the graveyard just yet. Do your part to keep grammar alive or the 140-Character Generation’s kids may see a return of hieroglyphic communication. #Grammar #Rules #Rock
By: Sherry Maysonave, Author of EggMania: Where’s the Egg in Exactly
| 0.8102 | FineWeb |
- The Butterfly Cut – Frosted facets highlight a butterfly in the pavilion.
- Brilliant Emerald Cuts – Two methods for adding brilliant facets to your emerald cuts.
- Frosted Stars – 5 and 6 pointed stars, outlined with frosted facets.
- The Hayek Cut – An excellent design for an awkwardly shaped piece of rough.
- Pentafan Design – Get great brilliance from dark rough.
- Fan Shield Cut – Another excellent design for dark rough.
Detailed faceting instructions by Jeff Graham available at The Rock Peddler
| 0.5082 | FineWeb |
1. A parking citation may be appealed WITHIN 15 DAYS OF THE DATE OF ISSUANCE.
2. The appeal process suspends payment of the monetary penalty until a decision is rendered by the Appeal Board.
3. ONE CITATION ONLY PER APPEAL FORM
4. You will receive an e-mail or mailed letter providing the appeal results, normally within 10 days. It is your responsibility to verify your appeal results. Contact the CWU Parking Office if you do not receive the decision.
| 0.7679 | FineWeb |
High-Speed robot cell Type HSR for Primary and secondary packaging
The HSR robot cell is designed for sorting and packaging unpacked and primary packaged products. Incoming products, whether arriving in series, grouped or chaotically, are detected by a camera system and passed to the following handling zones via a conveyor belt. Thanks to its design and high-contrast surface, the conveyor belt enables the highest precision to be achieved throughout the entire cell. The boxes / carriers are transported by a carrier belt which moves continuously or intermittently depending on the application.
Each handling zone is allocated to a robot that picks up the respective products by gripping or suction technology and inserts them into the boxes / carriers according to predefined patterns.
One cell can be equipped with up to 10 robots of the Type Delta-Picker in one or two rows, depending on the application.
Areas of application
- In the food, non-food and pharma branches
- Primary and secondary packaging
- Operation as a stand-alone-system to place products directly into the packaging unit, for example boxes, covering boxes, blisters, etc.
- Operation as a line component for feeding
- the carrier belt of a tubular bag machine
- the cassette infeed system of a cartoner
- the tray of a forming machine
- the infeed conveyor of other packaging machines
- Duties in the area of sorting and line distribution
- Compact design in the welded frame
- Self-explanatory operation for a simple production start
- Optimal view of the process workflow
- Execution according to branch-specific requirements for food, non-food, pharmaceuticals and medicine in the secondary packaging area.
- Rapid and simple change of format through settings with position displays
- Quick-change systems for format change or cleaning
- Design according to branch-specific requirements for primary packaging area
- Additional camera systems whenever the application requires it
- Additional product and box transport belts
- Additional storage as product memory
- Multivariety incoming products grouped in monovarieties
- Creation of mixed packages
Performance / Techn. Data
- Up to 150 picks / min. per robot
- Repeat accuracy: 0.1 mm
- Load capacity: up to 3 kg per robot
- Workspace: horizontal up to 1,300 mm / vertical up to 500 mm deep
- Single entity and group picks possible
- High position accuracy in the case of belt speeds of up to 60 m / min.
- Branch-specific and individually coordinated gripping systems
- Coordinated camera systems and lighting
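As a rough sizing illustration only (the product rate and utilisation factor below are assumptions, not vendor figures), the headline numbers above can be used to estimate how many Delta-Pickers a given line rate would need:

```python
# Rough sizing sketch using the figures above (150 picks/min per robot,
# up to 10 robots per cell). The product rate and utilisation are assumptions.
import math

PICKS_PER_MIN_PER_ROBOT = 150   # datasheet maximum per robot
MAX_ROBOTS_PER_CELL = 10        # one or two rows, per the description above

def robots_needed(products_per_min, utilisation=0.8):
    """Estimate how many Delta-Pickers a given product rate requires.
    `utilisation` derates the theoretical pick rate (an assumption, not a vendor figure)."""
    effective_rate = PICKS_PER_MIN_PER_ROBOT * utilisation
    robots = math.ceil(products_per_min / effective_rate)
    if robots > MAX_ROBOTS_PER_CELL:
        raise ValueError("Rate exceeds a single HSR cell; a second cell would be needed.")
    return robots

print(robots_needed(600))   # -> 5 robots at 80% utilisation
```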
| 0.8054 | FineWeb |
What I'm recommending is that if you are fortunate enough to be the one asked, there are ways to get behind the question and give them what they ask for.
And that's the goal: give them what they ask for, but do it on your terms.
Why? Because at the end of the day, it's their expectation vs. your expertise. Again, if trust has been established with my mechanic and he tells me a repair is most likely going to cost somewhere around $500, even if I was expecting a number like $300, I'm more than likely going to let him do the repair because he knows what he's talking about much more than I do. Mechanics, however, aren't necessarily salespeople. People need to have their cars fixed; mechanics are quicker to brush it off if someone balks at a price. They will not investigate further when you ask for a ballpark estimate. They will tell you: if you say yes, they will do the work; if you say no, they will not. Perhaps that's a lesson I can learn better from them...
Nevertheless, I think there are some questions to consider and I addressed three of them yesterday. I have two for today:
- Why are they asking? This is an issue of motivation. If it is in the first conversation, it is good to find out what the motivation is for asking. The best thing that can be done is to flip it back on them. Today I was talking with a colleague and she brought up the idea of asking about a "dream scenario." Ask them to paint this picture, which allows them to talk and explain what they're envisioning. You may need to help guide, but while they're doing this, you can capture both the simple and complex areas, allowing you to come up with a better idea. Additionally, you can also try to defer from there, asking if you could have at least one more conversation.
- Who is asking? Is it a salesperson? A CFO? A VP of IT? Regarding SaaS specifically, if you are talking with a salesperson or a CFO, chances are they are going to be expecting a quote much lower than what you will tell them. This is different than a VP of IT, who most likely will be more understanding of a ballpark. Why? Because IT people know the complexity involved in implementing a new system. They know it is so much more than just "bing, bang, boom" which is the salesperson's notion or "a specified budgetary amount" the CFO is considering among 1,000 other line items.
So when the ballpark question is asked, first make sure that trust is established. That gives you the leverage to use your expertise and handle their expectations appropriately. Then, with that trust, use the five questions to help you lead the prospect as effectively as possible to get a right size ballpark in their own mind.
Then hit a home run.
| 0.7431 | FineWeb |
MA Exam Lists and Sample Questions
MA Examination Lists
- Aesthetics and Theory
- American Literature before 1800
- British Romantic Writing (1785-1830)
- English Renaissance Literature
- Film Studies
- Media History and Media Studies
- Middle English
- Modern American Literature
- Modern British Literature
- Nineteenth-Century American Literature
- Restoration and Eighteenth-Century Literature
- Victorian Literature
A Sampling of Recent MA Exam Questions
During the MA Exam, you will be asked to select one from among two to three questions in each field. You are allowed a total of 2 hours—roughly 30 minutes for deliberation and 1.5 hours for writing—for each of the four fields you select.
Please note that some of the sample questions below might seem a bit more specific than is typical on a “general knowledge” exam because they are keyed to graduate seminars taught during the academic year. The students taking the exam had enrolled in these seminars.
- What constitutes a strong woman in Middle English literature? Use at least three different authors when constructing your discussion, with a couple of contrasting examples by one or two of the authors.
- Discuss the relationships of audience to author in medieval literature. Several of the most intricately crafted poems of British literature come from the 14th century (e.g., Chaucer’s Troilus and Criseyde and Pearl and Sir Gawain and the Green Knight from the Cotton manuscript), with marginal markers of various kinds to assist individual readers, all of which suggests that the writers are consciously concerned with the potentialities of a new, literate and well educated audience for vernacular literature. On the other hand, the literature is highly concerned with aurality and subtle voicing. One approach to these issues might be to begin your discussion with ideas of reading as a performative craft, whether the reading occurs primarily through the eyes or through the ears, with intellect and memory as the essential staging areas.
- Issues of ethics in medieval literature usually focus on matters of the will—choice, motive, and intention. Using the N-Town play “The Woman Taken in Adultery” as your starting point, discuss the medieval aptitude for moral literature, with a refined sense of interface between the high-minded and the comic.
- Select three plays from the reading list you have studied. Consider what you know more generally about Renaissance practices surrounding death and Renaissance literary depictions of death. How does your understanding of death in Renaissance society, culture, and/or literature help you to interpret the significance of death in each of these three plays? Please remember that the question focuses on interpretation. In other words, be sure to explain precisely why or how the scenes of death or references to death matter in your understanding of these three plays.
- Select, from the list you studied, three works (plays, poems or prose) that depict women who are verbally powerful (for instance, as writers, speech-makers, persuaders, cursers, gossipers). Describe with precision the nature of their verbal power and then explain how each of these works relies, presumably in quite different ways, on the figure of the verbally powerful woman. This too is a question that focuses on interpretation. For example, how does each author use the verbally powerful woman to—and these are just "for instances" and not meant to be prescriptive or to set limitations—establish their work's tragic or comic paradigms; examine the distinction between and overlapping of public and private spheres; sort through acceptable and/or effective versions of rhetoric, style, or authorship; mark or navigate through competing Renaissance authority relations?
- Select three sonnets (representing work by at least two authors) from the list you studied. How does each sonnet convey meaning by relying on the audience's knowledge of the formal and thematic traits associated with the English or Italian sonnet? Again, this is a question that focuses on interpretation. You will need to produce three careful close readings that explain precisely how each poem is trying to engage its audience.
Restoration and Eighteenth-Century Literature
- An overarching concern of much of eighteenth-century literature is the problem of how to read the body. The Country Wife is about a notorious rake whose challenge is to feign impotence to other men while making women understand that he is still very much – indeed, now more than ever—open for business. Swift’s Lady’s Dressing Room seeks to “expose” the armory of powders, paints, and pomades that women use to dissimulate the “reality” of their bodies. Likewise, Tristram Shandy evinces an obsession with gesture and posture—the doffing of a hat, the flourish of a walking stick, the angle of the orating body. Persuasion thematizes the challenges of discerning thoughts and feelings on the surfaces of the body when social circumstances render point-blank verbal declarations either improper or impossible. What forms does the eighteenth century’s investment in body-reading take, what significances does it seem to have (what “other” problems might it seem to stand for?), and how does it change over time or across different texts? Your discussion may focus on the works named above, or you may choose other examples; either way, your answer should engage with at least two different genres.
- The eighteenth-century novel has traditionally been studied separately from drama or poetry. Years’ worth of syllabi and conference-panel topics have cemented this division. As a result, the “uniqueness” of the Novel has perhaps been overemphasized: the Novel is “modern,” is “realistic,” is interested in “subjectivity,” in a way that other literature from the period is “not.” What would be the effects, contrastingly, of considering the eighteenth-century novel in the context of Augustan satire and/or Restoration drama? What insights into Pamela, Tom Jones, or Tristram Shandy, for example, might be gained by discussing them in light of the stylistic, thematic, and epistemological concerns of poetry and the theatre? Frame your answer in relation to one to two novels and one to two non-novels.
British Romantic Writing
- Literary-histories of romanticism tend to posit a shift, at the end of the eighteenth century, from “mimetic” to “expressive” models of poetry: the aim of literature was no longer to offer a “reflection” of the external world, but rather to provide a “revelation of personality.” In terms of genre, satire gave way to lyric; in terms of epistemological paradigms, empiricism gave way to psychology. But of course, the romantic poets are also famous for their interest in nature, and the power of many of their poems hinges upon a striking use of visual detail (think of Wordsworth’s description of daffodils, Shelley’s meditation on Mont Blanc, Keats’ ode to Autumn). How does romantic poetry bridge both inner and outer, mind and matter? How did the romantics reconcile their particularized attention to the natural world with their commitment to exploring memory, emotion, and the unconscious? Refer to several texts in your response.
- Many definitions of romanticism highlight the romantics’ intense privileging of the individual and individualism – their interest in how the poet becomes a poet (Wordsworth’s Prelude); their focus on the passionate rebel-hero, pitted against a small-minded and restrictive society from which he chooses to exile himself (Byron’s Childe Harold). Yet it is well known that these authors often collaborated, and consciously cultivated literary coteries: The Prelude was originally intended as part of a longer, epic poem called The Recluse, which Wordsworth and Coleridge planned to write together; according to Wordsworth, The Rime of the Ancient Mariner was inspired by a conversation the two poets had about a book Wordsworth was reading; and according to Shelley, it was with Shelley’s support and encouragement that Byron wrote Don Juan. Certainly, even without reference to particular biographical details, we can find in these authors’ poetry a multiplicity of overlapping concerns, motifs, settings, and characters. In a consideration of several texts, discuss the interaction of solitariness and sociability in romantic literature. In what ways do you see the two values in conflict? In what ways might they be seen to blend?
- Matthew Arnold’s Culture and Anarchy and Thomas Carlyle’s Past and Present both fault Victorians for idolizing the value of personal liberty. In your essay, discuss the problems Arnold and Carlyle perceive within a society of increasing democratization, industrialization, and capitalism. What remedies do they propose?
- Drawing on at least three different authors from your reading list, discuss how the novels/poems composed by these writers offer a critique of Victorian gender norms.
- John Stuart Mill once famously described the Victorian period as an “age of transition.” In your essay, discuss how either George Eliot’s Mill on the Floss or Middlemarch depict a society grappling with transition.
Nineteenth-Century American Literature
- How does Emerson's notion of self-reliance reflect a specific social orientation that is not easily applied to all people living in 19th-century America? Use at least three texts on your list to suggest the limitations of his universal claims.
- Compare Bartleby's infamous "I would prefer not to" to the type of resistance advocated by Thoreau in "Civil Disobedience." How do these differing approaches conceptualize human freedom, individuality and social change among other major 19th-century concerns? Supplement your response with at least one other text to argue for how American writers of this era understood the limits and possibilities of resistance.
- W. E. B. DuBois famously said that the problem of the 20th century would be the problem of the color line, but in many nineteenth-century works of literature the color line shows itself already to be an issue. Chose two works from early in the century and two from late and discuss the ways in which they reflect, interrogate, or critique the problem of race. Use specific examples from the works in developing your answer.
- Emerson has often been cited as the dominant influence on American writers at mid-century. Specify how Emerson may have influenced three writers from the following list: Melville, Hawthorne, Poe, Thoreau, Dickinson, Whitman, Fuller, Stowe. Remember, a given writer might manifest Emerson’s influence by resisting or even rejecting it. Be as precise as possible.
Modern American Literature
- Why, after nearly 100 years, does The Wasteland continue to play such a prominent role in the stories we tell about twentieth-century American poetry? Describe the relationship of three different poets (i.e., Williams, Crane, Lowell, Bishop, Oppen, etc.) to Eliot’s paradigm-shifting poem.
- People often speak of the “meta-fictional” nature of post-modern fiction writing, but the impulse is in many ways as old as the impulse to create fictions; think of Cervantes. Describe the ways in which three modern American novels (i.e., Faulkner, James, Hurston, Cather, Wharton, etc.) are themselves about novel-making or embody the impulse of novel-making in the formal procedures.
Modern British Literature
- One of the common claims about modern and postmodern fiction is that it is intensely self-reflexive—that it is writing about the subject of writing. In some cases, this takes the form of foregrounding the process by which the text we read is constructed—whether as a written text or an act of storytelling. Often, how the story is told (or written) becomes more important than what the story tells. Put another way, one could argue that many works of twentieth century British fiction are centrally concerned with the question of fiction-making, and these texts often focus on the permeable boundaries between the fictional and the real. Looking closely at three to four novels from the period, discuss their treatment of the process of writing, storytelling, or fiction-making.
- It has often been argued that World War I effected a decisive shift in modern consciousness, necessitating new literary forms to meet a radically changed understanding of the world. More recently, however, critics have suggested that “the rupture of 1914-18 was much less complete than previous scholars have suggested.” Jay Winter, for example, has argued that “The overlap of languages and approaches between the old and the new, the ‘traditional’ and the ‘modern,’ the conservative and the iconoclastic, was apparent both during and after the war. The ongoing dialogue and exchange among artists and their public, between those who self-consciously returned to nineteenth-century forms and themes and those who sought to supersede them, makes the history of modernism more complicated than a simple, linear divide between ‘old’ and ‘new’ might suggest.” Looking at four twentieth-century works from your list, discuss the extent to which you see the Great War as effecting a decisive break in aesthetic practice. How do you understand the history of modernism in terms of the “old” and the “new”?
| 0.9852 | FineWeb |
- a form of government in which God or a deity is recognized as the supreme civil ruler, the God's or deity's laws being interpreted by the ecclesiastical authorities.
- FISCAL REVENUE OF ORIENTAL THEOCRACIES
- FISCAL REVENUE OF ORIENTAL THEOCRACIES. D. Morgan Pierce.
- THE AMERICAN MILITIA PHENOMENON: A PSYCHOLOGICAL
- THE AMERICAN MILITIA PHENOMENON: A PSYCHOLOGICAL. PROFILE OF MILITANT THEOCRACIES. ______.
- 115 SPEECH RELIGION AND NON-STATE GOVERNANCE
- theocracy, where God is the sovereign law and controls all functions.
- Rick Perry and the Drought: What If God Hates You?
- And that's my problem with people who want to run our country like some kind of Christian theocracy. It's not just that I don't want to live in a Christian theocracy (theocracies tend not to be very good places for people like me).
- Will Hezbollah desert Assad before the end?
- important as Assad's support is to Hezbollah, the survival of his regime does not take precedence over Hezbollah's objectives: the defeat of Israel, the marginalisation of American influence and the creation of a regional arc of Shia theocracies.
- Libya: President Jacob Zuma Returns From African Union Peace And Security
- The World is getting more of Iran Islamic "revolutions" and theocracies. Sarkozi; Obama and Cameron are miserably short-sighted and greedy looters.
- WORLDVIEW: 'Things fall apart'
- Scenarios range from a new dawn of freedom and democracy to the rise of Islamist theocracies across the region. "Things fall apart; the center cannot hold," WB Yeats wrote in "The Second Coming," one of the most-quoted poems of modern times.
- Theocracy - Wikipedia, the free encyclopedia
- Theocracy is the rule by people in positions of political authority all of whom share the same religious beliefs and preferences.
- Category:Theocracies - Wikipedia, the free encyclopedia
- Category:Theocracies. From Wikipedia, the free encyclopedia.
- theocracies - definition of theocracies by the Free Online Dictionary
- the·oc·ra·cy (th - k r -s ). n. pl.
- CQ Press : Current Events In Context : Terrorism
- Theocracy, derived from two Greek words meaning "rule by the deity," is the name and the priestly roles are separate are not considered to be theocracies.
Theocracies are described in multiple online sources; in addition to our editors' articles, see the section below for printable documents, Theocracies books and related discussion.
Suggested Pdf Resources
Suggested News Resources
Suggested Web Resources
| 0.501 | FineWeb |
Fungus is one of the least understood members of the living world, even though fungi play such a vital role in the ecosystem. Without fungus, we wouldn’t have wine or bread, wood wouldn’t break down after it died, and we wouldn’t have anything to put on our salads! Today, we will look at not just the largest fungus in the world but also the largest organism by biomass.
The Largest Fungus (and Organism) in the World
The world’s largest fungus is a honey fungus species called Armillaria solidipes (formerly Armillaria ostoyae). It is also the largest organism by biomass in the world. This fungus is found in the Malheur National Forest in Oregon, covering an estimated 2,200 acres (8.9 square kilometers) of land. It is estimated to weigh around 7,567-35,000 tons and is thought to be between 2,400-8,650 years old. Its name is the Humongous Fungus, which is rather apt for such a massive bit of biology.
The fungus spreads underground by sending out rhizomorphs, thick, black, cord-like structures resembling roots. These rhizomorphs spread out from the original fungus and infect the roots of nearby trees, eventually killing them and growing larger. The fungus is parasitic, meaning it feeds off the host tree, but it also plays an essential role in the ecosystem by breaking down dead wood and returning nutrients to the soil. This massive system of webbing fungus isn’t a single mushroom cap but a network of invisible rot within trees in a massive forest.
How Armillaria Works
Armillaria solidipes is a parasitic fungus that infects and kills trees. The fungus spreads underground through its rhizomorphs, which are string-like structures that resemble roots, although it’s important to remember that fungi aren’t plants but separate organisms entirely. These rhizomorphs spread out from the original fungus and infect the roots of nearby trees. The fungus then colonizes the tree’s root system, penetrating the roots and eventually killing the tree, although it can take some time.
The fungus targets a wide range of trees, including Douglas fir, true firs, Ponderosa pine, and oak. Once the tree is infected, the fungus begins to colonize the root system, penetrating the roots and eventually killing the tree. The fungus also produces mushroom caps, which can be seen growing at the base of the infected tree.
The fungus also plays a vital role in the ecosystem by breaking down dead wood and returning nutrients to the soil. The fungus feeds off the tree and is one of the few living things that can effectively process and break down wood into nutrients.
Armillaria solidipes also has a bioluminescent feature, which means that it produces light. This feature is not well understood, but it is believed to be used for communication and to attract insects for spore dispersal. The mushroom caps that sprout up are the primary organs that use this feature.
Can You Eat Armillaria?
Not all species of Armillaria are edible, although Armillaria solidipes is! The mycelium and rhizomorphs produced by the fungus are found underground and aren’t eaten, but during certain seasons, the fungus will flower and grow mushroom caps above the dirt. These caps are essentially the reproductive organs of the fungus and the part that humans can eat.
The caps are defined as having gills, a campanulate or convex shape, and a decurrent hymenium. As for how to prepare these mushrooms, they are best cooked to be safe, although no mushroom should be eaten without absolute certainty of its identification on the part of the harvester.
How Does Armillaria Reproduce?
Armillaria ostoyae, also known as the honey mushroom, reproduces and fruits through a process known as mycelial growth. The mushroom’s mycelium, or the vegetative part of the fungus, spreads underground through the roots of trees and other woody plants. As the mycelium grows, it forms small, whitish clusters known as rhizomorphs, which resemble roots. These rhizomorphs are responsible for infecting and colonizing new host plants.
Once the mycelium has colonized a host, it begins to fruit, producing clusters of mushrooms found at the base of the host tree or on the ground nearby. The mushrooms typically appear in late summer to early fall and can continue to fruit for several years.
The photo featured at the top of this post is © LianeM/Shutterstock.com
| 0.8543 | FineWeb |
Coronavirus | 29 April 2020

Why is coronavirus hitting Britain’s ethnic minorities so hard?

Doctors and patients from the black, Asian and minority ethnic communities are falling severely ill and dying with Covid-19 in above average numbers.

By Anoosh Chakelian and Ben Walker

When the first reports of NHS staff losing their lives to Covid-19 appeared in newspapers, one aspect of the impact of the disease was evident from the pictures alone. All of the first ten doctors to be named as having died from coronavirus in the UK were from ethnic minority backgrounds.

As the death toll rises, it is becoming clear that this is not a coincidence. The Health Service Journal recently identified 119 deaths of NHS staff, and of the 106 of these people who could be verified as active health workers, 63 per cent were from black, Asian and minority ethnic (BAME) backgrounds. While 20 per cent of nursing and support staff in the NHS are BAME, this group accounts for 64 per cent of Covid-19 deaths. Among NHS medical staff, 95 per cent of those who died came from the 44 per cent of the workforce that has an ethnic minority background. Separate analysis by Sky News found on 22 April that 72 per cent of all health and social care staff who have died with Covid-19 are BAME.

Deaths among ethnic minorities in the general population appear to reflect this. BAME people account for 34 per cent of the patients admitted to UK intensive care units with Covid-19. This contrasts with the most recent estimate from the Annual Population Survey that 13.4 per cent of the UK's population is non-white. The most recent NHS England data shows that black people, who make up 3.1 per cent of the population, are dying from the virus at almost double the rate that would be expected, comprising 6 per cent of fatalities.

Why is this happening, and is there anything the government can do about it?

***

On 16 April, the government launched an inquiry, led by the UCL professor of epidemiology and public health Kevin Felton, into this disturbing trend. But it should have come as no surprise to the government or health authorities that those with ethnic minority backgrounds are being hit particularly hard by coronavirus. Data from the US going back to 1950 has shown that African Americans are more vulnerable to flu epidemics, and emerging data from the Centers for Disease Control and Prevention (CDC) suggests this is equally true for Covid-19.

In both countries, evidence suggests that the most important factors are not genetic, but socioeconomic. Persistent health inequality is at least partly to blame for the disproportionate toll of Covid-19 on the BAME population. Black, Asian and other ethnic minority communities are more likely to work in lower-paid jobs, to live in more densely populated areas and more crowded housing, and to have poorer access to healthcare and public health information.

People from BAME backgrounds also face a socioeconomic vulnerability to contracting the novel coronavirus, as they are likely to work in jobs that bring them into contact with other people. In London, 28 per cent of TfL's operational staff and 44 per cent of cleaners are BAME, while ethnic minorities are over-represented in the NHS at a national level.

“It is no surprise that a pandemic such as this is going to impact those on the sharp end of inequality... Unfortunately, Britain’s black and minority communities are at that sharp end,” says Jabeer Butt, chief executive of the Race Equality Foundation, who served on the NHS Equality and Diversity Council from its creation in 2010 until 18 months ago.

He adds: “Poorer quality work, poorer housing, these are all having a negative impact on people's ability to manage their health and wellbeing... we know over the last ten years, the quality of accommodation for black and minority ethnic communities has deteriorated, with overcrowding and so on”.

Butt says health inequality should be taken into account in the policy and advice on treating Covid-19 patients. “You would have thought clinical guidance would have highlighted this as a risk factor,” he says, so that ethnicity and the greater risk it correlates to “would be part of the decision-making” around a patient's care. He identifies the “lack of leadership” on the impact of Covid-19 on ethnic minorities as “institutional racism”.

***

Why, then, are doctors – whose jobs are typically well paid and secure – experiencing the same increases in risk? Certain chronic health conditions are more prevalent among some ethnic groups than others. Some of those are known risk factors with Covid-19, such as the higher incidences of high blood pressure and diabetes among black and Asian communities.

Scientists are also looking into how vitamin D levels interact with Covid-19, following the hypothesis that vitamin D helps regulate the sometimes fatal inflammatory response caused by the virus. Our bodies make vitamin D in response to sunlight, and higher levels of melanin in the skin lower the rate at which the body creates it, so vitamin D deficiency in some BAME individuals could be a potential factor.

Michael Barrett, a professor of biochemical parasitology who is working at the new Lighthouse Laboratory Covid-19 testing facility in Glasgow, notes that “the government started formally recommending vitamin D last week” as a way of mitigating the extra time spent indoors due to lockdown. While he warns that vitamin D’s role in fighting the disease is “just a hypothesis, and could be wrong”, Professor Barrett notes that there are “well-known links between vitamin D deficiency and a number of different diseases”. “Credible studies have looked at people with vitamin D deficiencies and found they are more vulnerable to a range of respiratory diseases,” he adds.

One such study was conducted by David Grimes, a former consultant physician in the North-West who has been looking at excess mortality among the BAME population for 30 years. Dr Grimes studied vitamin D levels in the town of Blackburn, and found more serious average vitamin D deficiency among British Asians than the white British population. “These tests could be done in any hospital,” Dr Grimes tells me. “It wouldn’t cost much. There is an inertia and ignorance surrounding these deaths.”

Roger Kline, a research fellow at Middlesex University and author of the 2014 paper “The Snowy White Peaks of the NHS”, specialises in workforce culture and racial discrimination in the health service. “Some groups of staff are more likely to be at risk, with long-term health conditions like hypertension [and] diabetes,” he says. These staff may at the same time face higher risk: “We also know that BAME staff are more likely to be on the front line, outside of very strict PPE [personal protective equipment] areas – BAME nurses are more likely to be lower-grade, for example,” he says.
“There are simple things that could have been done – where staff have additional risks, make sure they don’t go into hot Covid areas.” Kline's research has shown that BAME doctors are less likely to be in the most senior positions, and to feel safe speaking up about problems in the workplace. “BAME hospital staff are less likely to be listened to, [and] more likely to be bullied” he explains. "In some cases I’ve heard that BAME staff have been reorganised to Covid areas." Some hospital trusts have identified these specific workplace issues and changed their approach. The New Statesman has seen a letter sent to BAME staff by the chief executive of the Somerset NHS Foundation Trust, Peter Lewis, after the government’s review was announced. It includes “the decision to include BAME colleagues into the vulnerable and at-risk group". The letter also encourages BAME staff “to feel confident discussing any concerns you may have about Covid-19 and the impact on you and your family with your managers”, and reassures them that sick leave will not “affect your job role or future progress”. For too many people, however, this awareness has come too late. “This should not have been a surprise,” Kline says, of the number of ethnic minority health workers dying. “People are asleep at the wheel... Those who are deciding look different from those who are dying. This doesn’t prove they’re racist – it just shows the need for diversity.” *** The final element that contributes to higher risk for BAME people during the pandemic is geographical. The last census was in 2011, so we lack up-to-date population data at a local level. But viewing that census data against the NHS England ethnicity data on critically ill Covid-19 patients, a disproportionate effect on BAME people – though not at the same scale as the national disparity – is evident. Chart by Ben Walker Analysing data at a local level introduces areas that are suffering for other reasons. For example, while the London boroughs of Harrow and Brent (with ethnic minority populations of 58 per cent and 64 per cent respectively, according to 2011 census data) have the highest death rates in the UK, the third-highest rate is in South Lakeland, Cumbria – a largely rural area with a 98.2 per cent white population, but a median age that is nine years older than the UK as a whole. The areas with above-average ethnic minority populations closely correlate to urban areas. Of the ten worst-affected areas, eight are located in cities and seven have higher than average BAME population. And not all ethnically-diverse areas are being equally impacted. Local authorities categorised by the ONS as “Ethnically Diverse Metropolitan Living” outside of London do not show above average Covid-19 deaths. Meanwhile the “Urban Settlements” category – where 88 per cent of the population is white – join the “London Cosmopolitan” boroughs in showing more deaths than the average population. Despite making up just 7 per cent of local authorities, ethnically-diverse areas in London account for near 17 per cent of fatalities. Ethnically diverse areas outside of London, however, make up 10 per cent of local authorities but account for 4 per cent of fatalities so far. Table by Ben Walker So BAME people, who are more likely to live in cities in the UK, also face a series of environmental disadvantages – higher population density, air pollution, greater dependence on public transport – in a pandemic of infectious respiratory disease. 
Worse still, while the impact of these factors is obvious, little is being done to quantify their effects. Only 7 per cent of official reports into Covid-19 deaths and patterns globally record ethnicity, according to a study in the Lancet by Dr Manish Pareek, from the University of Leicester. “Given previous pandemic experience, it is imperative policy-makers urgently ensure ethnicity forms part of a minimum dataset,” he wrote. Dr Grimes agrees. “The government is failing in two ways – failing to identify ethnicity as a risk factor, and failing to tell us who is dying, and what their ethnicity is. They are dying every day, excessively. You could classify it as racism.” Anoosh Chakelian is the New Statesman’s Britain editor. She co-hosts the New Statesman podcast, discussing the latest in UK politics. Ben Walker is a data journalist at the New Statesman. Subscribe For more great writing from our award-winning journalists subscribe for just £1 per month!
| 0.5118
|
FineWeb
|
BACKGROUND: Arterial and thromboembolic pulmonary hypertension (PH) lead to arterial hypoxaemia.
OBJECTIVE: To investigate whether cerebral tissue oxygenation (CTO) in patients with PH is reduced and whether this is associated with reduced exercise tolerance.
METHODS: 16 patients with PH (mean pulmonary arterial pressure ≥25 mmHg, 14 arterial, 2 chronic thromboembolic) and 15 controls underwent right heart catheterisation with monitoring of CTO at rest, during maximal bicycle exercise and during inhalation of oxygen and NO. The 6 min walk distance (6MWD) was measured.
RESULTS: Median CTO in PH patients at rest was 62% (quartiles 53; 71) and during exercise 60% (53; 65); corresponding values in controls were 65% (73; 73) (p = NS) and 68% (66; 70) (p = .013 vs. PH). Inhalation of NO and oxygen improved CTO in PH. In multivariate regression analysis, CTO at maximal exercise predicted the work load achieved when controlled for age, pulmonary vascular resistance and mixed venous oxygen saturation (R² = .419, p < .000); in addition, the 6MWD was predicted by CTO (adjusted R² = .511, p < .000).
CONCLUSION: In PH patients, but not in controls, CTO decreased during exercise. Since CTO was an independent predictor of both the work load achieved and the 6MWD, cerebral hypoxia may contribute to exercise limitation in PH. Clinicaltrials.gov: NCT01463514.
| 0.8249
|
FineWeb
|
The JELQ device is a penis enlargement exercise device that simulates the jelqing method of natural male enhancement. The JELQ device has a proven success rate, with 82% of men reporting a permanent increase in penis size.
Jelqing Made Easy with Better Results
Although jelqing by hand is highly effective, the most common complaint is that it is difficult to perform the number of repetitions necessary to achieve results. With the JELQ device, penis enlargement does not have to be difficult anymore. The JELQ device makes it easy to safely and effectively exercise your penis.
The JELQ device provides constant pressure along the shaft of your penis, encouraging new tissue growth, cell multiplication, and improved circulation. Regular use of the JELQ device for penile exercise helps to achieve a larger and healthier penis.
Numerous Benefits to Using the Jelq Device
- Increased Penis Size
- Improved Sexual Stamina
- More Libido
- Harder Erections
- Improved Confidence
Proven Effective with Noticeable Results
To prove the effectiveness of the JELQ Device we contacted thousands of our past customers to study how the JELQ Device has worked for them.
We were overwhelmed with the positive response. In all, 380 men participated in the study. The results are outlined below:
- 380 men participated in the study
- 82% gained between 0.75 and 2+ inches in erect penis size
- Average erect length gain was 1.44 inches
- Average erect girth gain was 1.27 inches
| 0.6452
|
FineWeb
|
“I would never have drawn my sword in the cause of America if I could have conceived that thereby I was founding a land of slavery.”
Marquis de Lafayette; in a letter to
George Washington, 1784
In 1833, John Marshall, the fourth Chief Justice of the Supreme Court, and former President James Madison, two of the last remaining Founding Fathers, led a group to form the Washington National Monument Society. The group hoped to raise funds for the project of building a memorial in tribute to Washington’s military leadership.
Building monuments in honor of heroes was certainly a well-established practice, but this project, at least in its conception, was to be particularly ambitious and the result especially magnificent. The national pride and optimism represented by the effort was notable and emblematic of the time, with the country growing as new states joined the Union and new territories were settled. The difficulties the Society faced, right from the start, were also a reflection of the troubles that faced the young Nation, sectional discord and political intrigue were inseparably entwined with the expansion that was underway.
In the mid-1830’s, Andrew Jackson was consolidating the Executive Branch’s power to an unprecedented degree; his “war” on the Bank of the United States would earn him the dubious distinction of being the only President in the history of the United States (before or since) to be formally censured by the Senate. (Both Richard Nixon and Bill Clinton managed to avoid that mark).
The Missouri Compromise of 1820 allowed Maine to enter the Union as a free state in exchange for Missouri entering as a slave state to maintain an equal number of senators from each faction. The Compromise, which established the boundary of slaveholding in western territories at the line of 36°30’ North, “summoned the South into being” and began the States’ Rights outcry led by South Carolina. Thomas Jefferson called it “the knell of the Union.”
The Tariff Bills of the 1820’s were favored in the North but severely impacted the slave-based cotton industry whose main export market was Britain, the primary target of the tariffs. The issue further set the southern states against the North, and led to John Calhoun secretly authoring the South Carolina Exposition while he was Jackson’s Vice Presidential running mate. The Exposition introduced Calhoun’s provocative (and, many claimed, treasonous) idea of “Nullification”, the right of a state to nullify, or ignore, laws passed by congress if that state determined that the law was unconstitutional.
The Exposition was based on the Virginia and Kentucky Resolves of 1798 against Federal authority. Calhoun, after he was exposed as the author, claimed that his goal with his nullification theory was to save the Union from the threat of secession. That proved not to be the case. After South Carolina led the South in seceding from the Union following Abraham Lincoln’s election in 1860, the new Confederacy put Calhoun’s picture on their money and their postage stamps.
Still, the Civil War was a long way off when the Washington National Monument Society first met in 1833. The trials the Society would face in getting their monument built, just like the secession trials of the Nation itself, were still on the horizon, even as many could see the black clouds and sense the tension of the gathering storm.
Both Marshall and Madison passed away soon after they started the Society, but the group carried on with the memory of their leadership. In 1836, they held a competition for the design of the memorial. Architect Robert C. Mills won the competition with his ornate neo-classical plan of a flat-topped obelisk rising up from the center of a circular colonnade. The colonnade would support a statue of Washington in a chariot on its top and would contain inside the statues of thirty Revolutionary War heroes.
Robert Mills' original design for the Washington Monument
The choice of the winning design was by no means unanimous; protracted disagreements about the design of the monument, coupled with significant problems in obtaining funding (the Financial Panic of 1837, brought about in large part by Jackson’s dissolution of the Bank of the United States, greeted Martin Van Buren’s presidential inauguration), delayed the start of construction for twelve years.
Finally, it was decided that construction on the obelisk would start, with the (faint) hope that money for the colonnade would come later. This pragmatic compromise proved to be very significant, and directly led to later changes that would create the monument that we see today.
On July 4th, 1848, in an elaborate ceremony, the cornerstone was laid for the new national Monument, a memorial to honor George Washington. It was located near the bank of Tiber Creek in the federal city also named for Washington in the District of Columbia. The location was close to the spot recommended by Pierre L’Enfant, the French-born architect who designed the city, for an equestrian monument to the first President. The actual spot, the perpendicular intersection of the lines of sight from the Executive Mansion and the Capitol, couldn’t be used; it was too close to the creek and the ground was swampy and unstable.
Three days before that elaborate ceremony, a 17-year-old boy named Thomas Lincoln Casey had received an appointment from President Polk to the U.S. Military Academy at West Point, New York. Casey, the son of an Army general, would go on to have a superlative career in the Corps of Engineers and would retire as a general himself, but perhaps his most important and enduring accomplishment would be the completion of that very same national monument in December 1884, over 36 years after that elaborate ceremony took place.
| 0.5386
|
FineWeb
|
In the previous post I explained why it was a bad idea to provide the translator with a Word file to translate a web site. I highlighted that translating the site page names helps your pages to score better, but also highlighted that the anchor text for the links -if selected incorrectly- might actually promote the page for keywords of no importance whatsoever, thus hurting your search engine position. None of these can be achieved by sending a Word file for translation.
Translated “Alt” attributes contribute to your ranking juice!
But there are other issues that cannot be solved when sending a Word file. For example, it always helps if the names of the figures correspond to one of the targeted keywords, but what people often forget is that the search engines also consider the "alt" attributes of the figures. The "alt" attribute contains the text that should be shown in the event that the figure cannot be displayed. An example image tag might be:
<img src="images/160.jpg" alt="Rolex watches" width="100" height="150" border="0" align="bottom"/>
Note that the "Rolex watches" alt text, which will be displayed if the 160.jpg image is not available, is meaningless if you translate an English site into another language. A proper translation into Spanish should look like this:
<img src="images/160.jpg" alt="relojes Rolex" width="100" height="150" border="0" align="bottom"/>
Translate also the image file names!
If you really would like to squeeze every ounce of SEO out of this page, the name of the figure should also be translated. Now, I know that "160.jpg" does not have a translation per se, but a really good website translator would rename the image file and translate the image link as follows:
<img src="images/RelojRolex.jpg" alt="relojes Rolex" width="100" height="150" border="0" align="bottom"/>
Take note that the image name now says "Rolex watch" in Spanish – I have used the singular instead of the plural, so as to reinforce the basic keyword even more without actually repeating it. I have also spelled both words with initial capitals. This would be incorrect in Spanish, which frowns on the use of upper case except for proper names. But the image name itself is not visible to the user, and it helps the search engines to distinguish that there are two words.
Both the “alt” attributes and the image names are considered by the search engines to discern what a page is about, so it makes no sense to use meaningless names or leave them in the original language. A potential Spanish-language customer will search for “relojes Rolex”, not for “Rolex watches”!
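To make the mechanics concrete, here is a minimal sketch of how this kind of rewrite could be automated. The class name and regular expressions are only illustrative, it assumes straight quotes and a simple, well-formed tag, and in practice a proper HTML parser (or a careful manual edit of the page source) would be the safer route:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: rewrites the src file name and the alt text of one <img> tag,
// assuming straight quotes and a well-formed attribute layout.
public class ImageTagLocalizer {

    private static final Pattern SRC = Pattern.compile("src=\"images/[^\"]+\"");
    private static final Pattern ALT = Pattern.compile("alt=\"[^\"]*\"");

    static String localize(String imgTag, String translatedFileName, String translatedAltText) {
        // Replace the file name first, then the alt text; quoteReplacement guards against
        // special characters ($, \) in the substituted strings.
        String withNewSrc = SRC.matcher(imgTag)
                .replaceFirst(Matcher.quoteReplacement("src=\"images/" + translatedFileName + "\""));
        return ALT.matcher(withNewSrc)
                .replaceFirst(Matcher.quoteReplacement("alt=\"" + translatedAltText + "\""));
    }

    public static void main(String[] args) {
        String original = "<img src=\"images/160.jpg\" alt=\"Rolex watches\" width=\"100\" "
                + "height=\"150\" border=\"0\" align=\"bottom\"/>";
        // Prints the tag with a keyword-bearing Spanish file name and translated alt text.
        System.out.println(localize(original, "RelojRolex.jpg", "relojes Rolex"));
    }
}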
Of course, none of this will be possible if the translator receives simply the text in a Word file – and the webmaster who commits this mistake will not be able to really squeeze every ounce of SEO out of his translated page unless he is aware of these small details… and in most cases he is not.
| 0.9018
|
FineWeb
|
GranuFlo and NaturaLyte, substances used during dialysis, contain ingredients that can affect the chemistry of a human body. An imbalance of electrolytes (essential body chemicals) or metabolic abnormalities can lead to serious health complications, including potentially deadly heart problems.
Excess base (or alkaline) levels, low potassium levels and high sodium levels in the body have all been associated with GranuFlo and NaturaLyte.
Each of these conditions can result in serious health consequences, including:
- Arrhythmias (irregular heart rate) and other related heart or circulatory problems
- Muscle damage
Granuflo and NaturaLyte Linked to Excess Alkaline Levels in the Blood (Alkalosis)
Alkalosis, sometimes referred to as alkalemia, results when the body fluids contain excess base, or alkali. The condition is the opposite of acidosis, or excess acid. Active ingredients in GranuFlo and NaturaLyte are metabolized by the liver into a base (bicarbonate), leading to increased alkalinity in the blood.
Acids and Bases in Blood
The blood is made up of acids and bases. The amount of acids and bases in the blood are measured on a pH scale. An appropriate pH balance needs to be maintained in order to avoid significant health complications.
A pH measurement of 7 is considered neutral. A lower pH measurement indicates higher amounts of acid present in the blood. Typically, a normal blood pH measurement should be between 7.35 and 7.45, according to the American Association for Clinical Chemistry. A measurement above 7.45 is usually a sign of alkalosis.
Types and Causes of Alkalosis
The kidneys and lungs help to maintain the regulation of pH in the body. A decrease in carbon dioxide or an increase in bicarbonate levels makes the body too alkaline.
The five types of alkalosis include:
- Respiratory alkalosis – caused by low carbon dioxide levels in the blood.
- Metabolic alkalosis – caused by too much bicarbonate (base) in the blood or the loss of too much acid.
- Hypochloremic alkalosis – caused by an extreme lack or loss of chloride, such as from prolonged vomiting or sweating.
- Hypokalemic alkalosis – caused by the kidneys’ response to an extreme lack or loss of potassium.
- Compensated alkalosis – when the body returns the acid-base balance (pH) to normal in cases of alkalosis, but bicarbonate and carbon dioxide levels remain abnormal.
Patients given GranuFlo or NaturaLyte who are affected by alkalosis are typically suffering from metabolic alkalosis. When the condition is accompanied by a drop in potassium levels, patients are likely to suffer from hypokalemic alkalosis as well.
Symptoms of Alkalosis
Symptoms of alkalosis can vary depending on its cause, severity and accompanying health complications. Sometimes alkalosis can lead to low levels of calcium, which can result in subsequent headache, lethargy (a lack of energy and enthusiasm) and neuromuscular excitability, sometimes accompanied by delirium, tetany (intermittent muscular spasms) and seizures. Alkalosis along with low potassium can cause weakness.
It is also possible to experience chest pain and arrhythmias, or irregular heartbeats, as a symptom of alkalosis.
Early symptoms associated with alkalosis might include:
- Nausea and/or vomiting
- Prolonged muscle spasms
- Muscle twitching
- Hand tremors
As the condition worsens and becomes more serious, symptoms might include:
- Dizziness and/or lightheadedness
- Difficulty breathing
- Stupor (a state of near-unconsciousness or insensibility)
- Shock (a life-threatening medical condition)
Treatment of Alkalosis
Treatment of alkalosis depends on the underlying cause of the condition. A patient’s outlook is better the sooner a diagnosis is made and treatment is received. People with healthy kidneys and lungs do not usually have serious alkalosis. However, alkalosis due to existing kidney problems is usually unpreventable.
Medicines or supplements may be given to correct any chemical loss, such as the loss of chloride or potassium. Fluids containing electrolytes may also be needed to correct electrolyte imbalance. Severe loss of electrolytes will require hospitalization for care. If oxygen levels are low, a patient may receive oxygen.
Alkalosis that is left untreated or treated improperly can lead to coma, electrolyte imbalance (such as low potassium levels), and arrhythmias (the heart beating too fast, too slow or irregularly).
Alkalosis requires emergency medical attention when a patient experiences any of the following signs or symptoms:
- Confusion or inability to concentrate
- Inability to catch one’s breath or severe breathing difficulties
- Loss of consciousness
- Symptoms of alkalosis that rapidly worsen
Granuflo and NaturaLyte & Low Potassium (Hypokalemia)
Potassium is a mineral the body needs to work properly. It is also an electrolyte, meaning it conducts electrical impulses throughout the body. Other electrolytes in the body include sodium, chloride, calcium and magnesium.
Potassium is important to proper heart functioning, such as helping to keep the heartbeat regular. It also plays a role in nerve function and both skeletal and smooth muscle contractions, making it an essential component to digestion and muscular operation. Potassium helps move nutrients into cells and waste products out of cells. Finally, it assists in maintaining a normal blood pressure by off-setting some of sodium’s more harmful effects.
Potassium in Food
While potassium is an essential substance to our bodies, it is not naturally produced by the body. Therefore, it is important to eat the right foods in order to take in the recommended daily intake of potassium, which is at least 4,700 milligrams in adults, according to guidelines established by the Institute of Medicine of the National Academies of Science.
Sources of potassium-rich foods in many people’s diets include:
The kidneys, when healthy, help to keep the right amount of potassium in the body. But for people with chronic kidney disease, the kidneys may not remove the extra potassium from the blood through the urine. Some substances, such as GranuFlo and NaturaLyte, can also affect potassium levels in patients’ blood.
Symptoms of Hypokalemia
Potassium is needed for proper cell functioning. While a small dip in potassium level may not cause any symptoms, a more significant drop can lead to serious health complications. A very low potassium level can even cause the heart to stop.
Mild symptoms associated with low potassium might include:
- Feeling of skipped heart beats or palpitations
- Muscle damage
- Muscle weakness or spasms
- Tingling or numbness
More severe symptoms include abnormal heart rhythms, especially in people with heart disease. These irregular heartbeats can cause patients to feel lightheaded or faint.
The University of Maryland Medical Center reported that low potassium can also affect:
- Bone health
- High blood pressure
- Heart disease
Treatment of Hypokalemia
A blood test can be done to check a patient’s potassium level. If hypokalemia is mild, potassium supplements can be prescribed as the first line of treatment. This method of treatment may be all that is needed along with eating foods rich in potassium.
If the condition is more severe, potassium may need to be administered through a vein (IV). For patients taking diuretics, doctors may prescribe extra potassium to be taken daily. A certain type of diuretic called a potassium-sparing diuretic can also be prescribed to help keep potassium in the body.
In severe cases, life-threatening paralysis may develop, especially when there is too much thyroid hormone present in the blood. This condition is called thyrotoxic periodic paralysis.
A severe drop in potassium can also lead to serious and sometimes deadly heart rhythm problems (arrhythmias).
GranuFlo and NaturaLyte’s Other Serious Side Effects – High Sodium (Hypernatremia)
Increased concentrations of sodium in the blood, called hypernatremia, have also been linked to the use of GranuFlo and NaturaLyte. Hypernatremia is due to a decrease in total body water (TBW) rather than an increase in sodium intake or other electrolyte content.
Hyperosmolality, an increase in the osmolality of the body fluids (referring to the body’s electrolyte-water balance), can result from water loss. This condition can cause brain cell shrinkage, leading to subsequent brain injury.
Other serious complications might include:
- Cerebral edema (excess accumulation of fluid in the brain)
- Neuromuscular excitability
- Circulatory problems, such as tachycardia (faster than normal heart rate at rest) or hypotension (low blood pressure resulting in lack of blood to organs and body tissues)
- Overactive or overresponsive reflexes (hyperreflexia)
Please seek the advice of a medical professional before making health care decisions.
Kristin Compton is a medical writer with a background in legal studies. She has experience working in law firms as a paralegal and legal writer. She also has worked in journalism and marketing. She’s published numerous articles in a northwest Florida-based newspaper and lifestyle/entertainment magazine, as well as worked as a ghost writer on blog posts published online by a Central Florida law firm in the health law niche. As a patient herself, and an advocate, Kristin is passionate about “being a voice” for others.
| 0.9678
|
FineWeb
|
The risk of liver cancer is increased in patients with cirrhosis and cured hepatitis C virus (HCV) infection. These patients are screened regularly with the aim of detecting any cancers that do develop as early as possible and improving patient outcomes. However, the risk of cancer varies substantially between these individuals. If an individual patient’s risk could be better predicted, clinicians may be able to target their surveillance resources to those who might benefit the most.
Several liver cancer risk prediction models have been developed already but their accuracy has not yet been fully tested. Dr Hamish Innes (Glasgow Caledonian University and a member of the DeLIVER early cancer detection consortium led by University of Oxford’s Professor Ellie Barnes) and colleagues assessed the relative performance of six risk prediction models in two external “validation” cohorts.
Using data from the Scotland HCV clinical database and the STOP-HCV study, the team found that the “aMAP” model performed best in terms of calibration and discriminating between patients who go on to develop liver cancer versus those who do not. They observed that discrimination of the models varied by cohort, age and HCV genotype (for genetic risk scores), and that some risk models underpredicted liver cancer risk. These results highlight the importance of this validation step and further research before models can be used to support clinical decision-making.
| 0.7524
|
FineWeb
|
Al Vernacchio is an English and sexuality education teacher at Friends’ Central School in Pennsylvania and author of For Goodness Sex: Changing the Way We Talk to Teens About Sexuality, Values, and Health. A veteran teacher with 17 years of experience, Vernacchio first realized he wanted to learn more about sexuality and teach sexuality education during his first teaching job.
“The human sexuality curriculum was taught at the end of the ninth grade religion class, which was a class I taught,” Vernacchio explains. “Once I started teaching it, I realized I knew a lot about the spiritual side but not so much about the sexual side of things. I wanted to learn more and help people grow in sexually healthy ways.”
Vernacchio went on to get a master’s degree in human sexuality education and, in his words, has “been teaching sexuality education ever since, whether in my sexuality classes or my English classes.”
It’s just this sort of expertise in cross-topic teaching that made us think Vernacchio was the perfect person to interview for our second installment of our series Inter(sex)tions, which explores how sexuality education intersects with core topics taught in schools.
Answer: As a teacher of both English and a course on sexuality at the high school level, how and in what ways do these two subjects overlap?
Vernacchio: Almost any text taught in a high school English classroom can be used to teach a lesson on healthy sexuality. It’s just a matter of whether the teacher is willing to “go there” when teaching the text and whether the school is open to the teacher doing that. Literature is all about the human experience, and at the core of that human experience is our sexuality. We are sexual beings every minute of every day, from birth to death. Everything we do and every interaction we have is influenced by our bodies, our gender identity and expression, and our sexual and romantic attractions. The study of literature becomes so much richer when we understand the characters as fully human, and that means fully sexual.
I talk about sexuality all the time in my English class, because it’s on every page of every text I teach. It’s hard to teach The Catcher in the Rye without recognizing that Holden Caulfield, the protagonist, is a confused, horny, 16-year-old virgin who has a lot of questions about sex and dating and life, and those questions have an impact on his interactions with every other character. The novel also gives students a glimpse into the world of 1950s America and how sexism and homophobia were present there just as they are today.
One of my favorite experiences of talking about sexuality in the context of literature comes in the 11th-grade American Literature class. We read that old chestnut, The Scarlet Letter, followed immediately by Tony Kushner’s Angels in America: Millennium Approaches. Both texts are about characters scorned by society because of sexual issues. Hester Prynne’s scarlet letter (“A” for adulteress) is mirrored by the Kaposi sarcoma lesions that mark the characters living with AIDS in Angels in America. Both texts talk about the conflict between the American Dream of living one’s life openly and honestly and the prejudice and discrimination that comes from a society that demands conformity and punishes those who stray beyond the boundaries of what’s deemed “acceptable.” Both ask what the price of freedom is and both ultimately give the message that being true to oneself is what is most important.
Answer: When teaching English, what texts do you find foster the most conversations about sexuality or topics related to sex ed?
Vernacchio: There are certainly texts that foster conversations about sexuality more easily than others because their subject matter is directly related to sexuality in some way (think Romeo and Juliet or Their Eyes Were Watching God). But I think what’s much more important is the attitude of the teacher and the community created in the classroom. Is it one that is safe for discussing “real” issues? Are the students encouraged to look at the way gender and sexual orientation may impact what’s happening in a novel or a story? For instance, when reading The Adventures of Huckleberry Finn, of course it’s essential to talk about race and the dehumanizing effects of slavery in the United States. But Huck and Jim are also both men (well, a man and a boy), and that also impacts how they relate to one another. It’s also interesting to notice the place of women in that novel; they are oppressed by their gender in similar ways that slaves are oppressed by their race. Twain didn’t set out to write a novel about the place of women in nineteenth-century America, and I hope no teacher would avoid the racial issues in the novel to talk about that instead, but talking about the intersection of race and gender in the novel can enrich the experience and give students a new way of looking at issues of freedom, fairness and oppression.
Answer: For health teachers who are looking to more deeply engage students using content from other classes, what advice do you have?
Vernacchio: Health teachers have the ability to be extraordinarily creative in their classrooms. Teaching from novels and real-world experiences is so much more effective than using an out-of-date health textbook. There are amazing young adult novels, poems and essays that cover topics like navigating puberty, coming out, surviving sexual assault, being transgender, etc. These are easy reads and can open up discussions among students in powerful ways. Beyond fiction, teachers can use things like advertising to teach about gender role inequity. I’ve sent my students out to look at the display of Valentine’s Day cards in a store and count how many cards can be used by people in same-gender relationships. Television commercials can be a great focusing tool for a class and cover every sexuality-related issue imaginable. YouTubers like Laci Green are another valuable resource. Websites like Sexetc.org, Scarleteen and Go Ask Alice allow students to explore topics of their choosing and guarantee them accurate, thorough and up-to-date information.
Answer: What tips do you have for teachers fielding questions about sexuality when sexuality isn’t their area of expertise?
Vernacchio: Whether you know the answer or not isn’t the issue. It’s the way you answer the question or respond to the statement that’s important. If a teacher seems nervous, shocked or disgusted, that’s going to send a powerful message to the student. When we normalize students’ natural curiosity about sexuality, we do them a great service. It would also be great to have resources available in every classroom that answered basic questions about sexuality—pamphlets, books, posters. One thing every great teacher knows is that where to find an answer is just as, if not more, important than knowing the answer. Most of all, though, teachers who model authenticity and show their humanity to their students are teaching a terrific lesson about healthy sexuality.
| 0.8229
|
FineWeb
|
Rationale
This rationale complements and extends the rationale for the Technologies learning area.
In an increasingly technological and complex world, it is important to develop knowledge and confidence to critically analyse and creatively respond to design challenges.
Aims
In addition to the overarching aims for the Australian Curriculum: Technologies, Design and Technologies more specifically aims to develop the knowledge, understanding and skills to ensure that, individually and collaboratively, students:
develop confidence as critical users of technologies and designers and producers of designed solutions
Structure
The Australian Curriculum: Design and Technologies (F–10) comprises two related strands:
Design and Technologies knowledge and understanding – the use, development and impact of technologies and design ideas across a range of technologies contexts
PDF documents
Resources and support materials for the Australian Curriculum: Technologies are available as PDF documents.
Design and Technologies: Sequence of content
Technologies: Sequence of achievement
| 0.9967
|
FineWeb
|
Acquired Dyslexia and Dysgraphia Across Scripts
Allographic Agraphia for Single letters
The case is reported of a patient (PS) who, following acute encephalitis with residual occipito-temporal damage, showed a selective deficit in writing cursive letters in isolation, but no difficulty writing cursive-case words and non-words. Notably, he was able to recognize the same allographs he could not write and to produce both single letters and words in print. In addition to this selective single-letter writing difficulty, the patient demonstrated an inability to correctly perform a series of imagery tasks for cursive letters. PS’s performance may indicate that single letter production requires explicit imagery. Explicit imagery may not be required, instead, when letters have to be produced in the context of a word: letter production in this case may rely on implicit retrieval of well-learned scripts in a procedural way.
| 0.9974
|
FineWeb
|
Earning This Amount of Money Will Make You Happiest, Study Says
Recently, the adage “Money can’t buy happiness” was given a leg to stand on by a study suggesting a raise won’t have a real impact on your state of mind. But a different study claims that in regard to your income, there is totally a financial sweet spot for optimal satisfaction.
The expansive study, published in the journal, Nature Human Behaviour, used a Gallup World Poll to evaluate the income and happiness of 1.7 million people around the world. The authors of the research found monetary averages associated with satisfaction: For daily emotional well-being, people were generally best off earning $60,000 to $75,000 a year, but for long-term satisfaction, the mark was $95,000.
Those numbers are the worldwide average, however; the averages vary from country to country, and in North America (and most “wealthy” countries) they are higher. For daily emotional well-being, the sweet spot here is $65,000 to $95,000, and for long-term satisfaction that number is $105,000. The area with the lowest income marker for long-term satisfaction is Latin America, at $35,000, and Australia and New Zealand report the highest, at $125,000.
But happiness does peak at a certain point, according to the researchers. If people achieved more than the optimal income for long-term satisfaction, they were likely to see decreased happiness, partially due to a phenomenon Money described as the “hedonic treadmill.” This describes when people very quickly adjust to increases in income.
Still, the study has its shortcomings: Gauging and measuring happiness is a subjective practice that often relies on self-reporting. Additionally, the study examined individual income instead of household income, which might have skewed conclusions about how much money someone needs in order to be happy. Not to mention that the concept of happiness itself is contentious, and many nations don’t place as much weight on it as Americans do.
So, while you should definitely hustle at your day job and also find a side gig that’s meaningful to you, remember to make time for yourself and don’t make money your purpose. Instead, leave work early every now and then to go have fun with your #squad.
| 0.9155
|
FineWeb
|
evalresp can be used to calculate either the complex spectral response or the frequency-amplitude-phase response for a specified station or set of stations, channel or channels, and network, for a specified date, time and frequency, using the SEED ASCII response (RESP) files produced by the program rdseed as input.
NetDC is a data request system that allows a user to request seismological information from multiple data centers through a single email form. Information is delivered to the user through email and FTP in a uniform digital or text format.
| 0.8339
|
FineWeb
|
Black History Month is one of Canada’s most important months of the year. It is a way of remembering and honouring the legacy of important African Americans. It’s a month where we take a moment in our busy lives to remember the great pioneers that fought hard for equal rights and opportunities for their community. Martin Luther King Jr., Malcolm X, and Harriet Tubman are some of the most widely celebrated names. But let me introduce you to an African American inventor you may not have heard of. Without her, you would be using a fire-pit to stay warm!
This inventor was born in the small town of Morristown, New Jersey in 1895. This genius modernized the furnaces we use every day to keep us warm during the harsh winters. This inventor is none other than Alice H. Parker! Parker attended classes at Howard University Academy in Washington, D.C. The academy was a high school connected to Howard University, and in 1910, Parker earned a certificate with honours from the Academy, which was a feat for a woman and an African American at the time.
Alice H. Parker was an African American inventor famous for her patented system of central heating using natural gas. Her design allowed cool air to be drawn into the furnace, then conveyed through a heat exchanger that delivered warm air through the ducts to individual rooms in a house. The concept of central heating was around before Parker, but hers was unique as she chose natural gas over coal or wood as its fuel, which increased efficiency. Her idea was also unique as it contained individually controlled air ducts to transfer heat to different parts of the building—a new idea that hadn’t been introduced before! This was convenient as well because it meant people didn’t have to go outside to buy or chop wood for fuel. She got the inspiration for her design because she felt the fireplace was not effective enough in warming her home through the cold New Jersey winters.
Parker filed for a patent in 1919 and received a patent number on December 23, 1919. Parker filing a patent was a huge milestone because she was an African American woman in the early 1900s and her filing for a patent preceded both the Civil Rights Movement and the Women’s Liberation movement. At this time, African American women had limited opportunities and Parker receiving a patent for her invention during that time was a very outstanding achievement. This crossed barriers for women and African Americans and encouraged them to overcome difficulties that were placed against them! She was an outstanding woman that not only went against the odds but was also rewarded for it.
Unfortunately, Parker’s idea was never implemented due to safety concerns over the regulation of heat flow, but it was a steppingstone and a precursor to many modern heating systems. Her design, with modifications to address those safety concerns, shaped the furnaces we see today, including features such as thermostats, zone heating, and forced-air furnaces. Parker is the reason why we can sit comfortably at home in our harsh Canadian winters that can reach -30 degrees Celsius and still be warm.
Since it’s February, Black History Month, and winter, I think today of all days (and every other day too!) is a good time to reflect on how Parker contributed to our comfort and appreciate the hard work she put in for us.
| 0.7722
|
FineWeb
|
November 24 1967 horoscope and zodiac sign meanings.
Here you can find a lot of entertaining birthday meanings for someone born under the November 24 1967 horoscope. This report consists of some insights into Sagittarius attributes and Chinese zodiac traits, as well as an analysis of a few personal descriptors and predictions about life in general, health and love.
Horoscope and zodiac sign meanings
The zodiac sign connected with this birthday has several eloquent implications we should be starting with:
- The connected zodiac sign with 11/24/1967 is Sagittarius. It sits between November 22 - December 21.
- Archer is the symbol for Sagittarius.
- In numerology the life path number for everybody born on 24 Nov 1967 is 4.
- This sign has a positive polarity and its observable characteristics are relaxed and good-humored, while it is classified as a masculine sign.
- The element linked to this sign is the Fire. Three characteristics of a native born under this element are:
- offering own talents to the world
- remaining focused on own mission
- being aware of spiritual law
- The linked modality to this astrological sign is Mutable. Three characteristics of natives born under this modality are:
- likes almost every change
- very flexible
- deals with unknown situations very well
- Natives born under Sagittarius are most compatible with:
- Someone born under Sagittarius horoscope is least compatible with:
Birthday characteristics interpretation
Within this section there is a list of 15 personality-related characteristics, evaluated in a subjective manner, that best explains the profile of a person born on November 24, 1967, plus a lucky features chart that aims to interpret the horoscope influence.
Horoscope personality descriptors chart
Horoscope lucky features chart
November 24 1967 health astrology
Pelvic inflammatory disease (PID) with a bacterial cause.
Jaundice which is a signal of liver disease that causes a yellowish pigmentation of the skin and conjunctival membranes.
Narcissistic personality disorder which is the disorder in which someone is obsessed with their own image.
Cellulite (buttocks) which represents adipose deposits in this area, also known as orange peel syndrome.
November 24 1967 zodiac animal and other Chinese connotations
The Chinese zodiac is another way to interpret the influences of the date of birth upon a person's personality and evolution. Within this analysis we will try to understand its relevance.
Zodiac animal details
- People born on November 24 1967 are considered to be ruled by the 羊 Goat zodiac animal.
- The Yin Fire is the related element for the Goat symbol.
- The lucky numbers related to this zodiac animal are 3, 4 and 9, while 6, 7 and 8 are considered unfortunate numbers.
- Purple, red and green are the lucky colors for this Chinese sign, while coffee, golden are considered avoidable colors.
Chinese zodiac general characteristics
- Among the specificities that define this zodiac animal we can include:
- dependable person
- intelligent person
- shy person
- likes clear paths rather than unknown paths
- This zodiac animal shows some trends in terms of love behavior which we detail in here:
- difficult to conquer but very open afterwards
- In terms of the qualities and characteristics that relate to the social and interpersonal skills of this zodiac animal we can affirm the following:
- prefers quiet friendships
- difficult to approach
- proves to be uninspired when talking
- often perceived as charming and innocent
- Under the influence of this zodiac, some career related aspects which may be laid down are:
- follows the procedures 100%
- is very rarely initiating something new
- is capable when necessary
- is not interested in management positions
Chinese zodiac compatibilities
- A relationship between the Goat and any of the following signs can be one under good auspices:
- The Goat and any of the following signs can develop a normal love relationship:
- Chances of a strong relationship between the Goat and any of these signs are insignificant:
Chinese zodiac career
Taking into account the features of this zodiac, it would be advisable to seek careers such as:
- hair stylist
- interior designer
Chinese zodiac health
A few things related to health should be in this symbol's attention:
- dealing with stress and tension is important
- should pay attention in keeping a proper schedule for sleeping
- should try to spend more time among nature
- should pay attention in keeping a proper meal time schedule
Famous people born with the same zodiac animal
Few famous people born under the Goat years are:
- Orville Wright
- Muhammad Ali
- Julia Roberts
This date's ephemeris
The ephemeris for this birthday are:
Sidereal time: 04:09:04 UTC
Sun in Sagittarius at 01° 00'. Moon was in Leo at 18° 29'. Mercury in Scorpio at 12° 56'. Venus was in Libra at 15° 03'. Mars in Capricorn at 23° 58'.
Jupiter was in Virgo at 04° 35'. Saturn in Aries at 05° 52'. Uranus was in Virgo at 28° 26'. Neptune in Scorpio at 24° 21'. Pluto was in Virgo at 22° 34'.
Other astrology & horoscope facts
November 24 1967 was a Friday.
It is considered that 6 is the soul number for November 24 1967.
The celestial longitude interval linked to Sagittarius is 240° to 270°.
| 0.9228
|
FineWeb
|
The communist philosophy originated in Europe during the mid-19th century. It is both a political and economic system. Communism is somewhat based on the socialist philosophy, but is different in many ways. Some people refer to communism as socialism, but this is very misleading. Karl Marx and Friedrich Engels published a pamphlet titled "Manifesto of the Communist Party" in 1848. Marx and Engels named their system "communist," rather than "socialist," to distinguish it from utopian socialism.
Very Persuasive Communists Demonstrating in Paris, France.
National Archives and Records Administration.
Still Picture Branch; College Park, Maryland.
The manifesto defines communism as the abolition of private property. Near the end of the Communist Manifesto, they called for the forcible overthrow of all existing social institutions. The manifesto did not define most other aspects of the movement, however, so many details were left open to change. This created the possibility for major fundamental changes when Lenin, sometimes known as Nikolai Lenin, came to power in the revolution of 1917. Many changes may have resulted from the numerous problems encountered while establishing a new government in Russia. Lenin had many problems setting up the new government and establishing a working economic system.
He nearly destroyed all that remained of the Russian economy. An emergency transitional system was set up to avert a complete economic collapse, and then Lenin died in 1924. It then fell upon Lenin's successor, Joseph Stalin, to devise a working economic system. Stalin never changed the economic structure and the transitional system stayed. In a way, the Communists never created an economic system and kept the transitional system until the economic catastrophes of the 1980s and 1990s. It is clear that economics are far more powerful than even the most powerful totalitarian governments.
Lenin, the mastermind of the revolution, did not believe that ordinary workers could know what is best for them and he believed that full-time managers should control them. Lenin's views created a more severe government than some of the followers of Marxist philosophies had envisioned, and the party split into two factions. One of the factions, Lenin's Bolsheviks, believed in more government control of the country and the other faction, the Mensheviks, were more moderate. After Lenin's death, Joseph Stalin strengthened the communist party in Russia and promoted communism in other nations. Eventually, the communist system became one of totalitarian control. Russia was the only communist country until after World War II.
The Communist Manifesto stated the belief that the basis of communism was historical materialism. Marx and Engels believed that economic forces determine the course of history. They thought that they could explain all history as a struggle between the ruling classes and oppressed groups. Marx thought that capitalism must inevitably give way to socialism. This would come about through a struggle between the bourgeoisie, who owned the factories and machines and the proletariat, the class of modern wage earners.
| 0.5658
|
FineWeb
|
Welcome to GlycoScientific
We are developing tools to enable research in glycobiology, an emerging field of research focused on the identification, characterization, and quantitation of sugars, saccharides, and/or glycans. These molecules are essential components of all living things and play important roles in various biological events such as cell-cell recognition, signal transduction, tumorigenesis, and epigenetics. Alterations to these glycans are medically important since they are often associated with diseases such as cancer, diabetes, heart disease, Alzheimer’s, and autoimmune disorders.
Epigenetics and O-GlcNAc proteins
There is a growing body of evidence that directly links O-GlcNAc to the Histone Code, which makes a compelling case that O-GlcNAcylation plays a pivotal role in epigenetics. Despite its biological importance, the analysis of O-GlcNAc-modified proteins remains highly challenging. Unlike phosphorylation, for which a wide range of antibodies are available, studies of O-GlcNAc modification are hampered by a lack of effective tools for its detection, quantification and site localization. According to the National Academy of Sciences, the lack of new analytical tools to study O-GlcNAc is the greatest impediment to understanding the roles of O-GlcNAcylation in cellular physiology and disease and to advancing the field.
High-affinity, site-specific O-GlcNAc antibodies have, to date, eluded production using the traditional approaches to generating antibodies (Abs). This difficulty may result from O-GlcNAc-modified epitopes being self-antigens, and thus tolerated by the immune system, combined with the relatively weak carbohydrate-protein interaction that complicates antibody maturation.
GlycoScientific has developed a novel immunization strategy to overcome these issues.
At GlycoScientific we develop site-specific antibodies that can be used as tools in the elucidation of the role that O-GlcNAc plays in epigenetics and in chronic human disease including diabetes, cardiovascular disease, neurodegenerative disorders, and cancer.
GlycoScientific has a patented process to produce glycoproteins whose glycans are isotopically labeled (iGlycan). These are ideal for the qualitative and quantitative analysis of glycoprotein glycans, and can be applied to individual glycans or to complex mixtures, such as those analysed in the field of glycomics.
Advantages of iGlycan:
- No alteration to mass spectrometry (MS) based workflows.
- Enhanced accuracy and precision.
- Comparable results from researchers in geographically different places.
- Absolute quantitation of glycans attached to glycoprotein.
We also produce a human monoclonal IgG1 labeled with 15N4- and 13C6-labeled arginine and 15N2- and 13C6-labeled lysine, developed for use as an internal standard for the quantitation of monoclonal antibodies and Fc-fusion proteins.
Visit the Need for Better Glycan Identification Standards page for an overview of the challenges facing the industry.
| 0.8826
|
FineWeb
|
It is important to write a thank you note to a professor properly so that your gratitude is shown respectfully. A thank you note to your professor is appropriate when he has helped you in some way, such as writing a letter of recommendation for you. You can prepare a thank you letter that demonstrates your appreciation for his contribution to your educational career in less than 10 minutes.
Begin your thank you note with "Dear Professor," followed by his name. This will be the greeting part of the note, and should look something like this:
"Dear Professor Johnson,"
State your reason for writing the note: to give thanks for his assistance with a particular request. This request could have been a letter of recommendation that the professor wrote for you, or possibly going over your research with you. It should look something like this:
"I can't thank you enough for taking a moment to write a letter of recommendation for my graduate school application. I am certain that your recommendation will be a deciding factor toward my acceptance."
Start a new paragraph and describe your admiration for another contribution from the professor during your educational career. Write a specific example of when the professor's personal teaching style helped you grasp a concept, for example. Give reference to a particular lecture that stood out in your mind as giving you an understanding of the topic he was discussing, if you choose. This second indication of gratitude makes the first one appear more sincere.
Close out the letter. Thank the professor again to end the body of the letter, then skip down two lines and write "Sincerely." Skip down two more lines and write your full name, and the year you are expected to graduate. Write your major and the current semester season and year on the next line. It should look something like this:
"Once again, I really would like to express my thanks for your time in assisting me as you have."
Michael Jones, Class of 2011
Information Technology 202, Spring 2011
| 0.9918
|
FineWeb
|
Today I migrated to EclipseLink 2.4.0-RC2 and all my queries stopped working.
After investigating the issue, I found out that some of the conditions in my JPQL queries are not generating the correct SQL.
Here is a simplified example.
I have the following JPQL conditions:
(NOT( 1 = :free) OR v.price = 0)
NOT(1 = :blockUnrated) OR
Using EclipseLink 2.3.1 I get the following SQL conditions:
(Not ((1 = ?)) Or (T0.Price = ?))
With EclipseLink 2.4.0 the SQL is wrong:
NOT (((1 = ?) OR (t0.PRICE = ?)))
NOT (((1 = 0) OR (....)))
As you can see, in 2.4.0 the NOT is applied to the whole inner statement (the OR) instead of just its first operand. Since NOT is a unary operator, it should bind more tightly than anything else.
I am not sure if I'm wrong or this is a real bug. But in any case there's a big difference in behavior between 2.3.1 and 2.4.0.
It would be great if someone could test this and see whether it is a real issue.
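For anyone who wants to reproduce this, here is a minimal sketch of the kind of query involved. The `Video` entity, its fields and the completed second condition are invented for illustration; only the JPQL structure with the explicit parentheses matters, so you can compare the SQL generated by 2.3.1 and 2.4.0.

```java
import javax.persistence.EntityManager;
import javax.persistence.Query;
import java.util.List;

public class NotPrecedenceRepro {
    // Runs the same kind of condition as above; compare the generated SQL on 2.3.1 vs 2.4.0.
    @SuppressWarnings("unchecked")
    public static List<Object> findVisible(EntityManager em, int free, int blockUnrated) {
        Query q = em.createQuery(
            "SELECT v FROM Video v "
          + "WHERE ((NOT (1 = :free)) OR v.price = 0) "
          + "AND ((NOT (1 = :blockUnrated)) OR v.rating IS NOT NULL)");
        q.setParameter("free", free);
        q.setParameter("blockUnrated", blockUnrated);
        return q.getResultList();
    }
}
```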
| 0.7792
|
FineWeb
|
[This Post has been authored by our former blogger Varsha Jhavar. Varsha is a lawyer based in Delhi and is a graduate of Hidayatullah National Law University, Raipur. Her previous posts on the blog can be viewed here, here, here, here.]
After making an argument for the need of regulating AI from an IP perspective in Part I, Part II of the post focuses on the different aspects which can be regulated to develop a responsible and ethical AI.
Licensing of training datasets
The licensing of datasets – for the concerned rights under Sec. 14 of the Copyright Act, 1957 (the Act), along with attribution – seems like a possible solution that would address the concerns raised in the above cases. The problem is: how do you license all the copyrighted images/information available on the web?
Some have argued in favour of fair use, at least in the US context. It has been contended that use of databases should generally be allowed for training, whether the contents of such database are copyrighted or not. This has been supported by reasons – broad access to training sets will make AI better, fairer and safer; use of data by AI is transformative; as training sets are likely to contain millions of works with different owners it is not possible to find and negotiate licenses for these photos, texts, videos, code etc. for use in training; and broader access to quality data can help with issues of bias. Essentially, they have argued that copyrighted works should be allowed to be copied for non-expressive uses, such as AI learning to recognise stop signs, self-driving cars or learning how words are sequenced in conversations, but argued that the question of fair use should become tougher when learning is being done to copy expressions. The fact that we have in the past not dealt with issues of such complexity, i.e. the licensing of large volumes of data from different people who have not organised themselves as an organisation or society, does not mean that copyright infringement should go unchecked.
Copying of data without permission for non-expressive uses, such as teaching self-driving cars what cycles look like from images, could be found not to amount to infringement. It could be argued that such use is protected under Section 52(1)(b) of the Act (i.e. transient storage of work in the process of transmission to the public) or as fair use. However, not taking licenses for other kinds of uses, such as the training of ChatGPT, Stable Diffusion, GitHub Copilot, etc. by developers, could be considered commercial exploitation and probably might not qualify as fair dealing. In the past, the Calcutta HC and Delhi HC have found websites/platforms engaged in streaming copyrighted songs for money not to be covered under Section 52(1)(a)(i) of the Act (i.e. private or personal use). Would it qualify as fair use? As commercial subscriptions are offered by platforms like ChatGPT and Midjourney, the first factor – the character and purpose of use – is likely to be found against AI developers. However, a transformative use argument could be taken by AI developers. The second factor – the nature of the copyrighted work (i.e. whether factual or creative) – is also likely to go against them. The third factor – the amount and substantiality of the portion used in relation to the copyrighted work as a whole – is a fact-specific enquiry and can go either way depending on the circumstances. AI might fall afoul of the fourth factor as well – the effect of the use upon the potential market for the copyrighted work – as the output might act as a market substitute, and non-compensation of copyright owners for training might destroy their licensing markets.
Practically, when it comes to licensing data for the training of generative AI like ChatGPT, the situation differs from music, which is mostly organised (there are copyright societies/music labels that will license large repertoires under a single agreement without requiring platforms to deal with thousands of individuals); no such equivalent exists for information on the web. It might not be so difficult to secure licenses for images from stock photography websites such as Getty Images, which has in the past licensed content from its platform to an AI art generator. Instead of taking multiple licenses, maybe there could be a one-stop window from which you can license the database for any specific territory. There could be separate databases for images, videos, information/data, etc., and AI developers could license the relevant database as per their requirements. For example, an AI developer for a platform like Midjourney could take a license for the images database. This would encourage the development of AI in a manner that is respectful of copyright law and of the human creators whose work is being used as part of training datasets. It would also be fair to those who are taking licenses for training AI. For coding, there is The Stack, a 6.4 TB dataset of source code with permissive licenses, i.e. licenses which have the least restrictions in terms of copying, modification and redistribution, and developers can also request the removal of their code from the database. Even for information/data on the web, a database could be created containing only data covered by permissive licenses, excluding works covered by licenses like CC BY-ND, and additionally providing an option for creators to opt out of inclusion of their work in the database. The database could be licensed to all on reasonable terms, irrespective of whether the licensee is a competitor. Licensing data/copyrighted works and providing compensation for the same protects the copyright owners’ licensing markets and the human creators’ incentive to create new works. There also needs to be transparency about the manner in which a training dataset has been obtained/licensed by the AI developer.
What about the moral rights of the individuals whose works form a part of the training database and are used by AI in a different context? For example, what would happen if AI took a religious painting and used it in a context or manner not envisioned or intended by the author? Attribution, or the right to paternity, is important in order to recognise the contributions of the human creators. It is also important to prevent false attribution to humans when a work has been created by AI, so that humans won’t attempt to pass off the AI’s work as their own. It might even be possible for AI developers to ensure the right to paternity is respected, but the right of integrity is the one that would be difficult for AI developers to guarantee, because unlike human beings, AI can’t judge or determine whether use in a certain manner or context could be prejudicial to the reputation of the concerned author. The determination by courts in respect of the right to integrity will depend on the facts of the case and on the interpretation of “other act in relation to the said work”. However, it is important that the author of a work be recognised, not only in order to fulfil the objective of the grant of copyright protection, i.e. encouraging the creation of more works, but also to ensure that the works included in a training database are appropriately licensed (as not all open-source licenses have the same terms and conditions, i.e. sometimes they can be incompatible with each other).
Liability for copyright infringement
The Human Artistry Campaign, which has received support from groups representing artists, performers, writers, etc., has adopted certain principles. One of them is that the use of copyrighted works, voices and likenesses of performers should be subject to licensing and be in compliance with the concerned laws. Additionally, it is also stated that “[c]reating special shortcuts or legal loopholes for AI would harm creative livelihoods, damage creators’ brands, and limit incentives to create and invest in new works”. Although no loopholes/shortcuts have been specifically mentioned, the above statement refers to allowing a fair use exemption (or the introduction of similar exceptions under the law) for the utilisation of copyrighted works for AI training purposes. This should be considered when it comes to framing laws in respect of AI, as creators/artists are some of the people most affected by it. Against this background, it is comforting to note that Firefly, Adobe’s AI art generator, has been trained on licensed works and can produce output which is safe for commercial use, and Adobe has assured compensation for the creators of such works. Shutterstock has also launched its AI image generation platform and has promised that it will pay artists for their contributions.
In the Getty Images suit, relief was also sought in respect of trademark infringement, claiming that the images generated by Stable Diffusion contained a modified version of Getty’s watermark (displayed on all images on its website), thus implying an association with Getty. To address this issue, AI developers could be required not to use trademarks in a way that would cause any likelihood of confusion or association. The above, therefore, are a few aspects that should be considered when regulating AI. Interestingly, the Ministry of Electronics and IT, in a written reply in the Lok Sabha, has stated that “the government is not considering bringing a law or regulating the growth of artificial intelligence in the country”. Innovation is vital, but it should proceed in a way that is beneficial to humanity.
| 0.5367
|
FineWeb
|
The new performance, which brings together the playwright and director Ricardo Neves-Neves and the pianist and composer Filipe Raposo, looks at the history of Olivenza, an Alentejo portion of Portuguese territory occupied in 1801 by Spain with the support of France, and whose Spanish sovereignty is still not recognized by Portugal. Drawing on the old Iberian monarchic culture of a peninsula ruled among cousins, A Reconquista de Olivenza is a fanciful exercise on Power and Politics, the War and its participants, as well as the complex web of customs, laws and beliefs that help create our identity, among other zarzuelas.
Even after the signing of the Treaty of Vienna in 1817, which required the restoration of Olivenza, Spain continues to postpone the return of that portion of the territory, thus leaving the Alentejo undecided as to its real dimension. If Portugal claims Olivença sobbing, Spain cries over its right to Gibraltar, the Rock that is in fact the last colony in Europe, where England, Portugal’s historical ally, is sovereign. And beyond Gibraltar there is Ceuta, the former Portuguese city and present Spanish city, albeit bordered by the territory of Morocco.
(Video: Tiago Inuit, Diogo Borges)
| 0.6023
|
FineWeb
|
Penn Engineers Calculate Interplay Between Cancer Cells and Environment
Interactions between an animal cell and its immediate environment, a fibrous network called the extracellular matrix, play a critical role in cell function, including growth and migration. But less understood is the mechanical force that governs those interactions.
University of Pennsylvania engineers have joined with colleagues at Cornell University to form a multidisciplinary team investigating this interplay. Using a method for measuring the force a breast-cancer cell exerts on its fibrous surroundings, they have quantified how this phenomenon aligns and stiffens those fibers. This stiffening produces mechanical feedback to the cells themselves, which is relevant to how they migrate and metastasize.
Understanding those forces has implications in many disciplines, including immunology and cancer biology, and could help scientists better design biomaterial scaffolds for tissue engineering.
Vivek Shenoy, professor in the Department of Materials Science and Engineering in Penn’s School of Engineering and Applied Science and co-director of Penn’s Center for Engineering Mechanobiology, has previously modeled this stiffening feedback.
Looking to match simulated results with physical experiments, Shenoy joined a research team led by Mingming Wu, associate professor in the Department of Biological and Environmental Engineering at Cornell, and her graduate student Matthew Hall, now a postdoctoral researcher at the University of Michigan.
Wu and Hall’s colleagues used 3-D traction-force microscopy to measure the displacement of fluorescent marker beads distributed in a collagen matrix that also contained migrating breast-cancer cells. Shenoy’s team could then calculate the force exerted by the cells using the displacement of those beads.
“Nobody has looked how cellular forces quantitatively alter the matrix and how that feeds back into the cell force,” Shenoy said. “Using our model, we could quantitatively derive what forces the cells were exerting.”
The group published the findings in the Proceedings of the National Academy of Sciences.
Wu said the group’s work centered on a basic question: How much force do cells exert on their extracellular matrix when they migrate?
“The matrix is like a rope, and, in order for the cell to move, it has to exert force on this rope,” she said. “The question arose from cancer metastasis because, if the cells don’t move around, it’s a benign tumor and generally not life-threatening.”
It’s when the cancerous cell migrates that serious problems can arise. That migration occurs through “cross-talk” between the cell and the matrix, the group found. As the cell pulls on the matrix, the fibrous matrix stiffens; in turn, the stiffening of the matrix causes the cell to pull harder, which stiffens the matrix even more.
This increased stiffening also increases cell-force transmission distance, which can potentially promote metastasis of cancer cells.
“We’ve shown that the cells are able to align the fibers in their vicinity by exerting force,” Hall said. “We’ve also shown that, when the matrix is more fibrous, less like a continuous material and more like a mesh of fibers, they’re able to align the fibers through the production of force. And, once the fiber is aligned and taut, it’s easier for cells to pull on them and migrate.”
The combination of computer modeling and physical experiments helped to resolve confounding results in previous attempts to quantify cancerous cells’ rate of migration.
“This was a totally novel approach,” Shenoy said, “since it’s the first to account for the fact that collagen is fibrous. Without that model, you can’t explain the experimental data.”
This research was supported by grants from the National Institutes of Health, National Cancer Institute and National Science Foundation and made use of the Cornell NanoScale Science and Technology Facility, Cornell Biotechnology Resource Center Imaging Facility, Cornell Center for Materials Research and Cornell Nanobiotechnology Center.
Shenoy lab members Farid Alisafaei and Ehsan Ban also contributed to the study. Recently established through a National Science Foundation grant, Penn’s Center for Engineering Mechanobiology also supported this study.
| 0.7101
|
FineWeb
|
We spend nearly one-third of our lives asleep. Getting a sufficient amount of high-quality sleep is critically important for our overall health and well-being, and yet it's estimated that one out of every three adults doesn't get enough sleep. From technology distractions and work demands, to parenting and managing relationships, there are myriad challenges to getting a good night's rest.
The fact remains that getting high-quantity and high-quality sleep decreases the likelihood of chronic disease and premature death, improves accuracy and performance, and positively affects your mood and, thus, your relationships. So how do we build this into our already busy lives?
Henry Kellem’s Science of Sleep webinar will teach you to:
- Understand why sleep is so critical for each and every one of us
- Understand what sleep cycles are and the stages of sleep we go through nightly
- Understand what happens during sleep and why both quantity and quality are important factors for each night of rest.
- Identify the impact of chronic sleep deprivation on health outcomes and everyday performance.
- Describe the effects of caffeine, alcohol, technology, and environment on sleep.
- Learn how to optimize your body to fall asleep faster and what to do if you wake up in the middle of the night
- Implement morning and evening rituals for improved performance and sleep.
| 0.9998
|
FineWeb
|
Background: We determined the incidence of hip fracture and subsequent mortality in Korea using nationwide data from the National Health Insurance Service from 2008 to 2012.
Methods: This study was performed on the patient population aged 50 years or older who underwent surgical procedures because of hip fracture (ICD-10: S720, S721). All patients were followed using a patient identification code to identify deaths.
Results: The crude incidence of hip fracture increased from 221.4/100,000 in 2008 to 299.4/100,000 in 2012 in women, and from 104.4/100,000 to 131.2/100,000 in men. Crude mortality within 12 months after hip fracture declined over the same period (16.7% in 2008 and 14.9% in 2012). The standardized mortality ratio at 1 year after hip fracture decreased from 3.2 in 2008 to 2.8 in 2012, and the decline in mortality was more pronounced in women than in men (-4.1% in men and -14.3% in women).
Conclusions: The increasing incidence of and high mortality after hip fracture remain serious public health problems, and public health programmes should be more active and systematic to decrease hip fractures in the future.
Disclosure: The authors declared no competing interests.
| 0.7754
|
FineWeb
|
The lush jungle surrounding Bayon Temple camouflaged its location in relation to other structures at Angkor, so the true compound it belongs to was long misunderstood. For a long time, therefore, it was not universally acknowledged that Bayon Temple stands in the very centre of the ancient capital of Angkor Thom. Dating from the late 12th century, Bayon Temple is distinguished by its roughly 200 massive stone faces looking in all directions.
Built by King Jayavarman VII as part of a massive expansion of his capital Angkor Thom, Bayon Temple was initially designed with three levels. The first and second levels feature square outer and inner galleries, while a circular central sanctuary marked by a 43-metre-high tower dominates the third level. The arrangement of the temple is thus far more complex than it seems, with a maze of galleries, passages and stairways connected in ways that make it hard to distinguish each level and that create dim light, narrow walkways and low ceilings. Visit Bayon Temple with travel to Cambodia
Known for representing the intersection of heaven and earth, Bayon Temple remains one of the most enigmatic temples of the Angkor complex. The temple is oriented towards the east, with roads leading directly to it from the gates at each of Angkor Thom’s cardinal points. Bayon Temple itself is surrounded by two walls which serve as large outdoor galleries exhibiting an extraordinary collection of bas-relief scenes of historical and mythological events, as well as images from the daily life activities of the Angkorian Khmer over almost one thousand years.
In total, there are approximately 11,000 sculptural figures covering 1.2 kilometres of walls. The details were vividly and intricately carved into the stone walls without any sort of epigraphic text. A peculiarity of Bayon Temple is the absence of an enclosing wall.
What truly renders Bayon Temple majestic, though, are the 200 massive stone faces carved into its 54 towers, each of which features two, three or (most commonly) four gigantic faces. The curious smiling image, considered a self-portrait of King Jayavarman VII, depicts slightly curving lips and eyes placed in shadow. Visit Bayon Temple with Indochina travel Cambodia
A number of theories explaining the mystery behind those smiles – as with Leonardo da Vinci’s Mona Lisa – have been formulated, and the most generally accepted is that King Jayavarman VII perhaps imagined himself as a god-king ruling in the name of Buddha. The characteristics of this face – a broad forehead, downcast eyes, wide nostrils and thick lips – superbly combine to reflect the famous “Smile of Angkor”.
The Indochina Voyages team.
| 0.509
|
FineWeb
|
By default, asp.net validators are positioned right next to the control they validate. You can move them in the markup, but wherever you put them, they occupy an area equal to the area required to display the Text property (or if the Text is not present, then the ErrorMessage property). We may not want that. We may want them to only occupy space when they’re displayed, or not display them at all (and showing the error message in a validation summary only). The validators have a property called display, which can be set to one of three values: Static, Dynamic and None. Setting it to Static will mean the validator will occupy space even when there’s no error. Dynamic means that it won’t occupy space when there isn’t an error, but will show up when there is. None means that the validator’s Text (or ErrorMessage) isn’t displayed at all and doesn’t occupy any space. In this case, you’ll need to use a ValidationSummary control to be able to display the ErrorMessage.
Let’s look at a bit of markup that has three required field validators having different settings for Display. To the right of each validator, I’ve added the text “dummy” to signify where the validator’s display area ends. Markup:
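Something along these lines matches the behaviour described below; the control IDs and Text values here are only illustrative:

```aspx
<asp:TextBox ID="TextBox1" runat="server" />
<asp:RequiredFieldValidator ID="Validator1" runat="server"
    ControlToValidate="TextBox1" Display="Static"
    Text="Required!" ErrorMessage="TextBox1 is required." />dummy
<br />
<asp:TextBox ID="TextBox2" runat="server" />
<asp:RequiredFieldValidator ID="Validator2" runat="server"
    ControlToValidate="TextBox2" Display="Dynamic"
    Text="Required!" ErrorMessage="TextBox2 is required." />dummy
<br />
<asp:TextBox ID="TextBox3" runat="server" />
<asp:RequiredFieldValidator ID="Validator3" runat="server"
    ControlToValidate="TextBox3" Display="None"
    Text="Required!" ErrorMessage="TextBox3 is required." />dummy
<br />
<asp:ValidationSummary ID="ValidationSummary1" runat="server" />
<asp:Button ID="SubmitButton" runat="server" Text="Submit" />
```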
Now let’s fire up the page and see what it looks like in the browser:
Notice that since the first validator had a Display of Static, there’s a space equal to the space needed to display the Text of the first validator next to the first text box. The second and third validators had Display set to “Dynamic” and “None” respectively. Hence, there’s no space between the second and third textboxes and their respective “dummy” texts. Now, let’s hit submit:
Notice that the Text of the first two validators is displayed next to the text boxes, but the third one’s Text is not shown at all. This is because the third validator’s Display was set to None. Notice that the error messages of all three validators are displayed in the validation summary.
Hope that helps.
| 0.9793
|
FineWeb
|
Create an epic in Agile Development 2.0

Epics organize the work needed to complete parts of a theme into smaller, more manageable pieces. An epic usually groups related user stories together.

Before you begin
Role required: scrum_story_creator, scrum_admin

About this task
To organize epics, you can create a hierarchy of parent and child epics. You can associate an epic to a product, theme, or a configuration item (an item or service being affected). You can also define child epics.

Procedure
1. Create an epic using one of these methods:
- From Agile Board: Navigate to Agile Development > Agile Board. Click the Backlog Planning or Sprint Planning tab. Click Create Epic.
- From the epics list: Navigate to Agile Development > Epics. Click New in the record list.
- From a theme record: Select the Epics related list and click New.
- From a product record: Select the Epics related list and click New.
2. Fill in the fields, as appropriate.

Table 1. Epic form fields
| Field | Description |
| --- | --- |
| Number | A system generated number for the epic. |
| Product | Product with which this epic is associated. An epic can only be associated with one theme and a product at a time. |
| Priority | Priority for the epic. |
| State | Current state of the epic. The default is Draft. |
| Short Description | A brief description of the epic. |
| Description | A detailed description of the epic. |
3. Click Submit.

What to do next
Add child epics or stories using the following related lists.

Table 2. Epic form related lists
| Related list | Description |
| --- | --- |
| Child Epics | Lists the child epics associated with the epic. Click New to create a child epic. |
| Stories | Lists the stories associated with the epic. Click New to create a story. An epic can have one or more stories, but a story can belong to only one epic at a time. |
| 0.6873
|
FineWeb
|
US 5978703 A
An electrical method and apparatus for stimulating cardiac cells causing contraction to force hemodynamic output during fibrillation, hemodynamically compromising tachycardia, or asystole. High level electrical fields are applied to the heart to give cardiac output on an emergency basis until the arrhythmia ceases or other intervention takes place. The device is used as a stand alone external or internal device, or as a backup to an ICD, atrial defibrillator, or an anti-tachycardia pacemaker. The method and apparatus maintain some cardiac output and not necessarily defibrillation.
1. A method for electrically forcing cardiac output during tachyarrhythmia in a patient, comprising the steps of:
(a) providing a plurality of electrodes in communicative contact with the patient's heart;
(b) detecting the presence of tachyarrhythmia in the patient via said electrodes; and
(c) delivering electrical current pulses to the patient's heart, via said electrodes after detecting tachyarrhythmia, at a rate between 60 and 200 pulses per minute, said electrical current pulses having a strength between 25 and 200 volts, and greater than 250 mA and directly forcing contraction in the patient's heart, whereby a minimum level of cardiac output sufficient to maintain life is provided by said electrical current pulses.
2. The method of claim 1, further comprising the steps of reassessing the presence of an arrhythmia at predetermined intervals and terminating said delivery of electrical forcing pulses if the arrhythmia is no longer present.
3. The method of claim 1, in which each electrical current pulse has an energy of less than 360 joules.
4. The method of claim 1, further comprising the steps of monitoring cardiac output and adjusting said electrical current pulse with respect to amplitude to maintain a predetermined level of cardiac output, thereby conserving electrical energy.
5. The method of claim 1, wherein each electrical current pulse has rounded edges thereby minimizing patient discomfort and chest twitching.
6. The method of claim 1 further comprising the step of forming each electrical current pulse of a train of at least 10 narrow pulses thereby minimizing patient discomfort and chest twitching.
7. The method of claim 1, wherein said step of delivering electrical current pulses is repeated for at least one hour to maintain cardiac output.
8. The device of claim 7 in which additional heart treatment devices are used in combination therewith including means to perform conventional anti-tachycardia pacing or means to perform tachycardia cardioversion or means to perform atrial defibrillation.
9. A device, for implantation in the human body, for maintaining cardiac output of a patient's heart during tachyarrhythmia using electrical forcing fields, comprising:
(a) battery power supply means;
(b) arrhythmia detection means connected to said battery power supply means;
(c) means to communicatively connect said battery power supply means and said arrhythmia detection means to the patient's heart; and
(d) output control means connected to said arrhythmia detection means and to said battery power supply means, and to said means to communicatively connect for delivering multiple electrical current pulses to the human heart after the detection of tachyarrhythmia, said electrical current pulses having a voltage between 25 and 200 volts, and greater than 250 mA, whereby contraction in the patient's heart is directly forced and a minimum level of cardiac output sufficient to maintain life is provided by said electrical current pulses.
10. The device of claim 9, in which said electrical current pulses are delivered at a rate between 60 and 200 beats per minute.
11. The device of claim 9, further comprising blood pressure monitoring means connected to said arrhythmia detection means.
12. The device of claim 11, in which said blood pressure monitoring means monitors cardiac output and further comprises the step of adjusting said electrical current pulse amplitude by said output control means to maintain a predetermined level of cardiac output thereby conserving electrical energy.
13. The device of claim 9, wherein each electrical current pulse has rounded edges, thereby minimizing patient discomfort and chest twitching.
14. The device of claim 9, further comprising the step of forming each electrical current pulse of a train of at least 10 narrow pulses, thereby minimizing patient discomfort and chest twitching.
15. The device of claim 9, in which said arrhythmia detection means reassesses the presence of arrhythmia at predetermined intervals and said electrical current pulses are stopped by said output control means if the arrhythmia is no longer present.
16. The device of claim 9, wherein said output control means delivers said electrical current pulses for at least one hour to maintain cardiac output.
17. A device, having a portion designed for implantation in the human body, for maintaining cardiac output of a patient's heart during tachyarrhythmia using electrical forcing fields, comprising:
(a) battery power supply means;
(b) arrhythmia detection means connected to said battery power supply means;
(c) means to communicatively connect said battery power supply means and said arrhythmia detection means to the patient's heart; and
(d) output control means connected to said arrhythmia detection means and to said battery power supply means, and to said means to communicatively connect for delivering multiple electrical current pulses to the human heart after the detection of tachyarrhythmia, said electrical current pulses having a voltage between about 25 and about 200 volts, and greater than about 250 mA, whereby contraction in the patient's heart is directly forced for at least about 30 minutes and a minimum level of cardiac output sufficient to maintain life is provided by said electrical current pulses.
18. The device of claim 9 or 17 in which the size of the device is less than the size of an implantable cardioverter device.
This is a Continuation of application Ser. No. 08/543,001 filed Oct. 13, 1995, now abandoned, which in turn is a continuation of application Ser. No. 08/251,349, filed May 31, 1994, now abandoned.
1. Field of the Invention
The invention relates to the field of therapies for cardiac arrhythmias, and more particularly, to a method and an apparatus for forcing cardiac output by delivering a pulsatile electrical field to the heart during fibrillation or a hemodynamically compromising tachycardia.
2. Background Information
Approximately 400,000 Americans succumb to ventricular fibrillation each year. It is known that ventricular fibrillation, a usually fatal heart arrhythmia, can only be terminated by the application of an electrical shock delivered to the heart. This is through electrodes applied to the chest connected to an external defibrillator or electrodes implanted within the body connected to an implantable cardioverter defibrillator (ICD). Paramedics cannot usually respond rapidly enough with their external defibrillators to restore life. New methods of dealing with this problem include less expensive external defibrillators (and thus more readily available) and smaller implantable defibrillators. Since the first use on humans of a completely implantable cardiac defibrillator in 1980, research has focused on making them continually smaller and more efficient by reducing the defibrillation threshold energy level. The goal has been to reduce the size of the implantable device so that it could be implanted prophylactically, i.e., in high risk patients before an episode of ventricular fibrillation.
An ICD includes an electrical pulse generator and an arrhythmia detection circuit coupled to the heart by a series of two or more electrodes implanted in the body. A battery power supply, and one or more charge storage capacitors are used for delivering defibrillation shocks in the form of electrical current pulses to the heart. These devices try to restore normal rhythm from the fibrillation. While it works well at restoring normal function, the ICD is large in size and not practical for a truly prophylactic device. A small device capable of maintaining minimal cardiac output, in high risk patients, prior to admission into an emergency room is needed.
In addition, external defibrillators are limited in their performance. The typical paramedic defibrillation may be delayed by 10 minutes. At this time defibrillation may be irrelevant since the rhythm is often advanced to asystole. In asystole, there is little or no electrical activity and certainly no cardiac pumping.
There is a need for a new method and apparatus for dealing with ventricular fibrillation. The defibrillation approach does not work satisfactorily. External devices are too slow in arrival and implantable defibrillators are excessively large (and expensive) for prophylactic use.
The invention provides an electrical method of stimulating cardiac cells causing contraction to force hemodynamic output during fibrillation, hemodynamically compromising tachycardia, or asystole. Forcing fields are applied to the heart to give cardiac output on an emergency basis until the arrhythmia ceases or other intervention takes place. The device is usable as a stand alone external or internal device or as a backup to an ICD, atrial defibrillator, or an anti-tachycardia pacemaker.
The goal of the invention is maintaining some cardiac output and not necessarily defibrillation. The method is referred to as Electrical Cardiac Output Forcing and the apparatus is the Electrical Cardiac Output Forcer (ECOF).
In the implantable embodiment, a forcing field is generated by applying approximately 50 volts to the heart at a rate of approximately 100-180 beats per minute. These fields are applied after detection of an arrhythmia and maintained for up to several hours. This will generate a cardiac output which is a fraction of the normal maximum capacity. The heart has a 4 or 5 times reserve capacity so a fraction of normal pumping activity will maintain life and consciousness.
The implantable embodiment is implanted in high risk patients who have never had fibrillation. If they do fibrillate, the ECOF device forces a cardiac output for a period of up to several hours, thus giving the patient enough time to get to a hospital. That patient would then be a candidate for an implantable cardioverter defibrillator (ICD). The ECOF differs from the ICD in that it is primarily intended for a single usage in forcing cardiac output over a period of hours, while the ICD is designed to furnish hundreds of defibrillation shocks over a period of years.
Insofar as is known, no prior attempts have been made at forcing pulses during any type of fibrillation. Some workers in the field have experimented for research purposes with local pacing during fibrillation. For example, Kirchhof did local pacing during atrial fibrillation in dog hearts (Circulation 1993; 88: 736-749). He used 0.5 mm diameter electrodes and pacing stimuli. As expected, small areas around the heart were captured but no pumping action was expected or detected. Similar results have been obtained in the ventricle by KenKnight (Journal of the American College of Cardiology 1994; 283A).
Various researchers have tried multiple pulse defibrillation without success in reducing the energy thresholds, for example, Schuder (Cardiovascular Research; 1970, 4, 497-501), Kugelberg (Medical & Biological Engineering; 1968, 6, 167-169 and Acta Chirurgica Scandinavia; 1967, 372), Resnekov (Cardiovascular Research; 1968, 2, 261-264), and Geddes (Journal of Applied Physiology; 1973, 34, 8-11).
More recently, Sweeney (U.S. Pat. No. 4,996,984) has experimented with multiple (primarily dual) shocks of timing calculated from the fibrillation rate. None of these approaches has been able to significantly reduce voltages from conventional defibrillation shocks. Importantly, none of these approaches anticipated the idea that the individual pulses might force cardiac output or could sustain life indefinitely.
Some have considered the use of smaller pulses, before the shock, to reduce the energy required for a defibrillation shock (Kroll, European Application No. 540266), but never anticipated eliminating the defibrillation shock itself or anticipated that the pulses themselves could maintain cardiac output. Some have suggested using higher voltage pulses to terminate ventricular tachycardias, but no suggestion was made of an application with fibrillation or of obtaining cardiac output (Kroll WO 93/19809) and Duffin (WO 93/06886).
The benefits of this invention will become clear from the following description by reference to the drawings.
FIG. 1 is a block diagram illustrating a system constructed in accordance with the principles of the present invention.
FIG. 2a shows the connection of an implantable embodiment of the device to the heart in an epicardial patch configuration.
FIG. 2b shows the connection of an implantable embodiment of the device to the heart using an endocardial lead system and the device housing as an electrode.
FIG. 3 shows the connection of an external embodiment of the invention.
FIG. 4 is a diagram showing a representative pulsatile electrical signal.
FIG. 5 is a flowchart illustrating one embodiment of the method of the invention.
FIG. 6 is a diagram showing the expected effect of a 50 V pulse on the heart during diastole.
FIG. 7 is a diagram showing the expected effect of a 50 V pulse on the heart during systole.
FIG. 8 is a diagram showing the expected effect of a 50 V pulse on the heart during fibrillation.
FIGS. 9a and 9b show various waveforms useful for the electrical cardiac output forcing method and apparatus.
FIG. 10 shows the device used as a backup to an atrial defibrillator.
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, applicants provide these embodiments so that this disclosure will be thorough and complete, and will convey the scope of the invention to those skilled in the art.
FIG. 1 is a block diagram illustrating a system 10 constructed in accordance with the principles of the present invention. The device circuitry is connected to the heart 40 via a series of leads; output lead 32, pressure sense lead 34, and ECG sense lead 36. The electronic circuit includes a conventional ECG amplifier 30 for amplifying cardiac signals. The amplified cardiac signals are analyzed by a conventional arrhythmia detector 20 which determines if an arrhythmia is present. The arrhythmia detector 20 may be one of several types well known to those skilled in the art and is preferably able to distinguish between different types of arrhythmias. For example; fibrillation, tachycardia or asystole. The circuit also contains an optional pressure sensing section 28 which amplifies and conditions a signal from an optional pressure sensor from within the heart or artery. The output of the pressure sense circuit 28 is fed to a cardiac output detection circuit 18 which analyzes the data and determines an estimate of the cardiac output. Data from the arrhythmia detector circuit 20 and the cardiac output detection circuit 18 is fed to the microprocessor 16. The microprocessor 16 determines if Electrical Cardiac Output Forcing (ECOF) is appropriate. If forcing is indicated, the microprocessor 16 prompts the output control 22 to charge a capacitor within the output circuit 26 via the capacitor charger 24. The output control 22 directs the output circuitry 26 to deliver the pulses to the heart 40 via the output leads 32. The microprocessor 16 may communicate with external sources via a telemetry circuit 14 within the device 10. The power for the device 10 is supplied by an internal battery 12.
FIG. 2a is a diagram showing the connection of an implantable embodiment of the device 130 to the heart 40 in an epicardial patch configuration. In this thoracotomy configuration, current passes through an output lead pair 32 to electrode patches 42 which direct the current through the heart 40. There is an optional pressure sense lead 34 which passes the signal from an optional pressure transducer 46 which lies in the heart 40. The ECG is monitored by sense electrodes 44 and passed to the device 130 by a lead 36. The area of the electrodes 42 is at least 0.5 cm². The size of the electrode is greater than that of a pacing lead and no more than that of a defibrillation electrode or between approximately 0.5 cm² and 20 cm² each.
FIG. 2b shows a non-thoracotomy system embodiment of the invention. In this system, the current passes from a coil electrode 52 in the heart 40 to the housing of the device 140. An endocardial lead 50 combines the ECG sensing lead and the pulse output lead. The ECG is monitored by sense electrodes 44 in the heart 40 and passes through the endocardial lead 50. There is an optional pressure transducer 46 in the heart 40 which passes a signal to the device 140 via optional lead 34.
FIG. 3 shows an external embodiment of the invention. External patch electrodes 54 are placed on the chest to deliver current to the heart 40 through output lead 32. The ECG is monitored by surface electrodes 56 and passed to the device 150 by a lead 36. Alternately, the ECG could be monitored by the external patch electrodes 54. An optional pressure sensor 46 passes a pressure signal via an optional pressure sense lead 34. This embodiment could be used as a substitute (due to its small size) for an external defibrillator and keep a patient alive until arrival at a hospital. Also, the system could precede the external defibrillator by generating output in patients in asystole until blood flow and rhythm are restored.
A series of forcing pulses 60 are shown in FIG. 4. The pulses are approximately 50 V in amplitude with a spacing of approximately 500 ms. The 50 V and the 500 ms pulse spacing are chosen as illustrative for an implantable embodiment. The forcing pulse interval is chosen to maximize cardiac output within the limits of device circuitry and the response of the heart muscle. An interval of 500 ms corresponds to a heart rate of 120 beats per minute. This will produce a greater output than a typical resting rate of 60 beats per minute. However, a rate of 240 beats per minute would produce a lower output due to mechanical limitations of the heart. Thus a practical range of 60 to 200 beats per minute is appropriate. The pulses could also be timed to coincide with the natural pumping of the atria, thus improving overall cardiac output.
The higher the voltage, the higher the forcing fields, and therefore the greater the number of heart cells contracting, producing greater cardiac output. However, higher voltage produces greater patient discomfort and extraneous muscle twitching.
Implantable batteries are also limited to a certain power output and energy storage. If an output pulse is 50 V and the electrode impedance is 50 Ω, the power during the pulse is P = V²/R = (50 V × 50 V)/50 Ω = 50 W. If the pulse has a duration of 2 ms, then the energy per pulse is 0.1 J. If two pulses are delivered every second, the charger must be capable of delivering 0.2 J per second, which is 200 mW. This is well within the limits of an implantable battery. An implantable battery can typically deliver 5 W of power. However, 200 V pulses at 3 per second would require 4.8 W, which is near the limit of the battery and charging circuitry. A typical implantable battery energy capacity is 10,000 J. Delivering forcing pulses at a power of 4.8 W would deplete the battery in only about 35 minutes (10,000 J / 4.8 W ≈ 2083 seconds). Thirty-five minutes may not be enough time to transport the patient to a hospital. Therefore 200 V represents the highest practical voltage for continuous operation in an implantable embodiment, although voltages of up to 350 V could be used for short periods and adjusted down when hemodynamic output is verified. A practical lower limit is about 10 V. During normal sinus rhythm, 10 V delivered through the patches would pace. However, during fibrillation the 10 V could not pace and only cells very near the electrodes would be captured. This would be insufficient for forcing cardiac output.
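To make the budget arithmetic above easy to re-check, here is a small, purely illustrative calculation; the class and variable names are made up, and the figures are simply the ones used in the text.

```java
// Illustrative restatement of the battery-budget arithmetic described above.
public class EcofEnergyBudget {
    static void report(String label, double volts, double ohms,
                       double pulseSeconds, double pulsesPerSecond, double batteryJoules) {
        double pulsePowerW = volts * volts / ohms;            // P = V^2 / R during the pulse
        double energyPerPulseJ = pulsePowerW * pulseSeconds;  // E = P * t
        double averagePowerW = energyPerPulseJ * pulsesPerSecond;
        double runtimeMinutes = batteryJoules / averagePowerW / 60.0;
        System.out.printf("%s: %.0f W per pulse, %.1f J per pulse, %.1f W average, %.0f min runtime%n",
                label, pulsePowerW, energyPerPulseJ, averagePowerW, runtimeMinutes);
    }

    public static void main(String[] args) {
        report("50 V, 2 pulses/s", 50.0, 50.0, 0.002, 2.0, 10_000.0);   // ~0.2 W average, many hours
        report("200 V, 3 pulses/s", 200.0, 50.0, 0.002, 3.0, 10_000.0); // ~4.8 W average, ~35 minutes
    }
}
```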
These calculations also suggest other differences between an implantable ECOF and an ICD. With a battery storing 10,000 J and an ECOF pulse having 0.1 J, this ECOF would be capable of delivering 100,000 pulses. An ICD can only deliver 200-400 shocks of about 30 J. The ECOF is also very different from an implantable pacemaker which typically delivers 150,000,000 pacing pulses (5 years at 60 BPM) each of about 0.00005 J.
For an external ECOF the calculations are similar, but scaled up. The typical ECOF pulse would have a voltage of 100 V with a range of 25-500 V. With electrode impedances of 50 Ω the power during the pulse is P = V²/R = (100 V × 100 V)/50 Ω = 200 W with a range of 12.5-5,000 W. If the pulse has a duration of 2-5 ms, then the energy per pulse is 0.02-25 J. This is much less than the American Heart Association recommended output of 360 J for an external defibrillator.
This is also different from an external transthoracic pacemaker. These devices are rated by current and typically have an output range of 30-140 mA. Most patients are paced by pulses of 40-70 mA of current. An example of a modern external transthoracic pacemaker is given by Freeman in application WO 93/01861. Assuming an electrical impedance of 50 Ω and the ECOF voltage range of 25-500 V, the ECOF current range would be 500 mA to 10 A. Since electrode impedance increases with lower voltage, the 25 V ECOF pulse would probably see an impedance of 100 Ω, thereby giving a lower current of 250 mA.
FIG. 5 is a flowchart illustrating the method of the invention, which is provided for purposes of illustration only. One skilled in the art will recognize from the discussion that alternative embodiments may be employed without departing from the principles of the invention. The flow diagram shown in FIG. 5 represents a method of automatically treating a heart which is in fibrillation, tachycardia, or asystole and thereby pumping inefficiently or not at all. Electrodes are attached 69 and diagnoses the presence of an arrhythmia 70. A series of cardiac output forcing electric pulses 72 is automatically delivered. It should be understood that the therapy 72 may be delivered for any output compromising cardiac arrhythmia. After delivery of 10 forcing pulses (at a rate of 60-200 BPM) in the first block 72, the status of the heart is determined 74. If an arrhythmia is still present and there exists low pressure within the heart, more forcing pulses are delivered 78. If the heart is pumping at a safe level, the therapy ceases and exits 76. Note that this means that the ECOF successfully defibrillated the patient's heart even though this is not a primary goal of the system. This could be tested in patients who were scheduled to receive an ICD, in a hospital setting. Those patients who are defibrillated by ECOF pulse therapy could then receive the ECOF instead of the larger ICD. After the therapy 78 has been delivered, the pressure and ECG is again monitored 74. If the therapy 78 is successful, it ceases and exits 76. If the therapy 78 is unsuccessful in producing a safe level of pumping efficiency, the method proceeds to a continuous cardiac assist mode 80. The therapy may only be stopped by an external command, for example, a telemetry signal or a magnet which is applied to the chest activating a magnetic reed switch 82 which terminates the therapy and exits 76. To minimize patient discomfort and maximize battery life, the forcing voltage could be adjusted down when sufficient pressure signals or adequate flow measured by other means were detected, for example, the pressure sense transducer could be replaced by an oxygen detector or a doppler flow measuring device. The pulse rate could also be adjusted to maximize output.
FIG. 6 is a diagram showing the effect of a 50 V forcing pulse on the heart 40 during electrical diastole (cells at rest). The current is passed through the heart 40 by the electrodes 42. Approximately 60% of cardiac cells 90 would be captured by a 50 V pulse if the cells were in diastole. The captured cells 90 mostly lie in the direct path between the electrodes 42 and near the electrodes 42 where the field strengths are highest. Of course, over a time period of about 100 ms these directly captured cells then propagate an activation wavefront to stimulate the rest of the heart. This so-called far-field pacing is irrelevant here, as the hearts of interest are in fibrillation and not in diastole.
FIG. 7 is a diagram showing the effect of a 50 V forcing pulse on the heart during electrical systole (cells already stimulated). The current is passed through the heart 40 by the electrodes 42. Approximately 20% of cardiac cells 100 would be captured by a 50 V pulse if the cells were in systole. The captured cells 100 are nearest each electrode 42 where the field strengths are highest. Capture in systolic cells means that their activation potential is extended. This capture requires significantly higher fields (10 V/cm) than those required for diastolic cell capture (1 V/cm).
FIG. 8 is a diagram showing the effect of a 50 V forcing pulse on the heart during fibrillation. During fibrillation there are always cells in systole and diastole simultaneously. But, the vast majority are in systole. This diagram assumes 50% of the cells are in diastole which applies only after several capturing pulses. The current is passed through the heart 40 by the electrodes 42. 100% of the cells 110 nearest the electrodes 42 would be captured due to the high field strength. As shown in FIG. 7, even systolic cells are captured by high field strengths. 50% of the cells 112 in the direct path between the electrodes 42 would be captured if it is assumed that 50% of all cells are in diastole. If roughly 60% of cardiac cells are captured by a 50 V pulse when the cells are in diastole, and 20% are captured when in systole, and if 50% are in systole and 50% in diastole, 40% would be captured during fibrillation. This calculation is shown in the following table. The last two columns give the mechanical action resulting and the contribution to forcing a cardiac output. Considering the cardiac cells that are originally in diastole, (rows A & B) in the table below, the A row represents the diastolic cells that are not captured by the forcing pulse. If 50% of the heart's cells are in diastole and 40% of those are not captured that is 20% of the total cells. These cells will, however, shortly contract on their own (from previous wavefronts or new ones) providing a positive gain in mechanical action and therefore cardiac output. The B row corresponds to the diastolic cells that are captured. If 60% of the diastolic cells (50% of total) contract due to the forcing field this is 30% of the total heart cells. These cells provide the biggest gain in mechanical action and cardiac output. Next considering the activity of the systolic cells (rows C & D), if 50% of the heart's cells are in systole and 80% of those are not captured (row C), that is 40% of the heart's cells. These cells soon relax and negate a portion of the cardiac output. The systolic cells that are captured (row D) are 10% of the heart's cells (20% of 50%). These cells will hold their contraction and be neutral to cardiac output. The net result is a gain in contraction which forces cardiac output.
| Original status of the cells | Status of the cardiac cells | Percentage of the original status | Percentage of the total cells | Mechanical action | Forcing cardiac output effect |
| --- | --- | --- | --- | --- | --- |
| (A) 50% Diastolic | Diastolic, non-captured | 40% of 50% | 20% | will start to contract on own | positive (+) |
| (B) | Diastolic, captured | 60% of 50% | 30% | contract | positive (++) |
| (C) 50% Systolic | Systolic, non-captured | 80% of 50% | 40% | will start to relax on own | negative (-) |
| (D) | Systolic, captured | 20% of 50% | 10% | hold | neutral (0) |
| Total | | 100% | 100% | more contraction | positive (++) |
The net result over a 200 ms mechanical response is given in the next table. The major contribution is in row (B) from the captured diastolic cells contracting.
| Row | Status of the cardiac cells | Change in output | Description of activity |
| --- | --- | --- | --- |
| A | Diastolic non-captured | +5% | Positive. Some cells will begin to contract on their own. |
| B | Diastolic captured | +30% | Positive. Cells contract due to forcing field. |
| C | Systolic non-captured | -5% | Negative. Some cells will begin to relax on their own. |
| D | Systolic captured | 0% | Neutral. Cells hold contraction due to forcing field. |
| Net gain | | +30% | A net gain in cardiac output due to forcing fields. |
The 30% net pumping action should be sufficient to maintain survival and consciousness, because the heart has a 4-5 times reserve capacity.
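As a cross-check of the arithmetic in the two tables above, the short Python sketch below recomputes the captured fraction and the net change in output from the stated assumptions (60% capture of diastolic cells, 20% capture of systolic cells, a 50/50 split, and the per-row output contributions of the second table). It is an illustration only, not part of the patent's disclosure.
```python
# Recompute the capture bookkeeping from the two tables above (values from the text).
diastolic_frac, systolic_frac = 0.50, 0.50        # assumed 50/50 split during fibrillation
capture_diastolic, capture_systolic = 0.60, 0.20  # stated capture rates at 50 V

rows = {
    "A (diastolic, non-captured)": diastolic_frac * (1 - capture_diastolic),  # 0.20
    "B (diastolic, captured)":     diastolic_frac * capture_diastolic,        # 0.30
    "C (systolic, non-captured)":  systolic_frac * (1 - capture_systolic),    # 0.40
    "D (systolic, captured)":      systolic_frac * capture_systolic,          # 0.10
}
for name, frac in rows.items():
    print(f"{name}: {frac:.0%} of all cells")

captured = rows["B (diastolic, captured)"] + rows["D (systolic, captured)"]
print(f"captured overall: {captured:.0%}")        # 40%, as stated in the text

# Net mechanical contribution over ~200 ms, using the per-row values of the second table.
output_change = {"A": +0.05, "B": +0.30, "C": -0.05, "D": 0.00}
print(f"net gain in output: {sum(output_change.values()):.0%}")  # +30%
```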
FIG. 9 depicts examples of waveforms designed to minimize the twitching of the chest muscles which can be very uncomfortable to the patient. In FIG. 9a is seen a low harmonic pulse waveform 120 which has a very gradual "foot" 122 and a gradual peak 124. Such a pulse has less high frequency energy components and thus is less likely to stimulate the skeletal muscle.
FIG. 9b shows a technique of going to the opposite extreme. Here, each compound forcing pulse 126 is actually composed of 50 very short spikes 128 each of which is 20 μs in width with a 20 μs spacing. The heart will tend to average out these thin pulses and "see" a 2 ms wide forcing pulse. The skeletal muscle, however, is not efficiently stimulated by these extremely narrow pulses. The skeletal muscle will not average out this signal either. This approach could help minimize skeletal muscle twitching and discomfort.
An alternative system would be to charge the capacitor to 300 V for the first pulse to capture many cells therefore putting those cells into diastole after a delay of 100-200 ms. At this point the voltage could be lowered to 100 V and delivered every 100 ms. A 3 watt DC-DC converter with a 67% efficiency could provide 100 ms interval forcing pulses assuming a 50 Ω resistance and 1 ms pulse (0.2 J). This rate is too fast for forcing cardiac output due to mechanical limitations, but is very effective for electrical capture. After sufficient capture, the rate of forcing pulses could be slowed down to 100-170 beats per minute for optimum cardiac output.
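The power budget stated above is easy to verify. The sketch below (an illustration, not part of the original text) assumes the quoted 50 Ω load, 1 ms pulse width, and 100 V amplitude, giving 0.2 J per pulse and an average of 2 W at one pulse every 100 ms, which a 3 W converter at 67% efficiency can just supply.
```python
# Check the forcing-pulse energy budget described above (assumed values from the text).
V = 100.0          # pulse amplitude, volts
R = 50.0           # assumed tissue/electrode resistance, ohms
t_pulse = 1e-3     # pulse width, seconds
interval = 0.1     # one forcing pulse every 100 ms

energy_per_pulse = (V ** 2 / R) * t_pulse          # joules per pulse
avg_power_needed = energy_per_pulse / interval     # watts

converter_power = 3.0                              # DC-DC converter rating, watts
efficiency = 0.67
avg_power_available = converter_power * efficiency

print(f"energy per pulse: {energy_per_pulse:.2f} J")     # 0.20 J
print(f"power needed:     {avg_power_needed:.2f} W")     # 2.00 W
print(f"power available:  {avg_power_available:.2f} W")  # about 2.0 W
```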
The Electrical Cardiac Output Forcing device (ECOF) could also be used to help patients with atrial fibrillation. As an alternative embodiment to the ventricular placement of FIG. 2b, the electrode coil 52 and sensing electrodes 44 could be placed in the atrium. The device could then function to force atrial output. Even though atrial fibrillation is not instantly fatal like ventricular fibrillation is, clots can build up in the atria which can eventually lead to strokes. Cardiac output forcing of the atria on a daily basis may limit this problem. It is also possible that after a number of forcing pulses the atria would return to a normal rhythm. There is, however, no urgency, as there is in the case of ventricular fibrillation.
A second use of this invention for atrial defibrillation is shown in FIG. 10. As before in FIG. 2b, the ECOF 160 is shown connected to the heart 40 via endocardial lead 50. Again forcing coil electrode 52 and sensing electrodes 44 are in the right ventricle. In addition a large atrial coil electrode 130 and atrial sensing electrodes 132 are in the right atrium. These would be used for conventional atrial defibrillation. One of the big concerns with atrial defibrillation is that in a few cases, an atrial defibrillation shock causes ventricular fibrillation. If this happens, the patient dies within minutes. With the ECOF approach, for the left ventricle, one could maintain output in the patient for several hours and thus have enough time for transport to a hospital or external defibrillation. Thus the ECOF approach in the ventricle could provide a safety backup to atrial defibrillation.
Many cardiac patients have no known risk of ventricular fibrillation, but suffer regularly from ventricular tachycardia. Accordingly, these people can be treated with anti-tachycardia pacing (ATP). Unfortunately, ATP will occasionally cause a ventricular fibrillation. Then a large defibrillation shock must be applied. Thus it is not considered safe to implant a pure ATP device, and these patients instead receive a full size ICD. The ECOF approach also serves as a safety backup and thus allows the implantation of true ATP devices. The system is depicted in FIG. 2b, although the pressure sensor 46 would typically not be needed.
Low energy cardioverters can also be used to treat ventricular tachycardias. These devices are also not considered safe as stand-alone devices because they may not terminate the rhythm or may cause fibrillation. The ECOF method could also be used as a safety backup, thus allowing the implantation of cardioverters without defibrillation capabilities. Such a system is shown in FIG. 2b.
It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. For example, while most of the discussion is in the context of an implantable device, the concepts of the invention are also applicable to external delivery systems. It is intended that the following claims define the scope of the invention and that structures and methods within the scope of these claims and their equivalents be covered thereby.
| 0.8203
|
FineWeb
|
Lapilli formed by a Strombolian eruption are associated with the formation of a large lava flow of natrocarbonatite on or about 21–22 July, 2000 at Oldoinyo Lengai volcano, Tanzania. Fresh lapilli consist of vesicular natrocarbonatite similar to that occurring in rapidly quenched lavas. The lapilli were altered at low temperature (<50°C) by degassing to aggregates of sodian sylvite, potassian halite, trona, thermonatrite and a novel F-bearing sodium phosphate-carbonate. The latter is considered to be a new mineral as it has a composition (Na5–4.5PO4(CO3,F,Cl) that is not similar to that of nahpoite (Na2HPO4), dorfmanite [Na2(PO3OH).2H2O] or natrophosphate [Na7(PO4)2F.19H2O]. However, in common with these minerals, it is ephemeral and undergoes rapid decomposition under normal atmospheric conditions. The sodium phosphate-carbonate and associated halide-sodium carbonate assemblages are considered to be a part of a previously unrecognized hyperagpaitic assemblage forming as sublimates at Oldoinyo Lengai.
| 0.7596
|
FineWeb
|
Turkey Vulture (Cathartes aura)
The naked red heads of the adult turkey vultures resemble those of turkeys, hence the name. Their genus name, Cathartes, means "cleanser" because that's what they do: clean up carcasses from woods and roads.
Description: The Turkey Vulture is a large soaring bird that feeds on carrion (the dead and rotting body of an animal). It's recognized by the featherless red head, white bill, large brown-black body and yellow feet. They can be easily spotted along roadways displaying their long wingspan and smooth soaring flight. While soaring it holds the wings slightly up in a V-shape. The wingspan extends to 170-178 cm (67-70 in). Weight: 2000 g (70.6 ounces) Size: 64-81 cm (25-32 in)
Voice: Usually silent. Makes a hiss at carcasses, roosts, and the nest.
Range/ Habitat: The Turkey Vulture breeds from southern Canada throughout the United States and southward through southern South America and the Caribbean.
This species is common in the lower forest zones and the steppe zones. It nests in small caves or ledges on high cliffs in these regions.
Diet: Turkey Vultures eat a wide variety of carrion, from small mammals to dead cows. Also some insects, other invertebrates, and some fruit are consumed.
Nesting: There is no nest structure. The female Turkey Vulture lays 1 to 3 eggs directly on the ground in caves, crevices, mammal burrows, hollow logs, under fallen trees, or in abandoned buildings. The eggs are creamy-white with dark blotches around the large end.
Behavior: Turkey vultures are almost exclusively scavengers (Cathartes means "purifier"), so this species rarely kills small animals. They form communal roosts which facilitate group foraging and social interactions. The roosts range in size from a few birds to several thousand.
Turkey Vultures are often seen standing in a spread-winged stance - called the "horaltic pose." The horaltic pose is believed to serve multiple functions: drying the wings, warming the body, and baking off bacteria.
Groups of vultures spiraling upward to gain altitude are called "kettles". As vultures catch thermal updrafts they take on the appearance of water boiling in a pot – hence the name kettle.
Why do turkey vultures defecate on their feet?
During the hot weather, turkey vultures will defecate on their feet to cool them off. Since a vulture's digestive juices kill bacteria--which is why vultures don't get sick from eating rotten meat--defecating on their legs might even work as an antiseptic wash.
Why do turkey vultures vomit?
Their method of self-defense is to vomit their food, which they can send sailing 10 feet. If a turkey vulture is disturbed or harassed, it will throw up on the animal who is bothering it. Even the vulture babies will vomit on other animals. Though these behaviors might distress people, they serve turkey vultures well. Vulture vomit is an effective predator repellent.
| 0.8384
|
FineWeb
|
Global climate change is a natural phenomenon; it is well known that the earth's average surface temperature has been increasing since the end of the Little Ice Age. In recent years, however, the best-known anthropogenic cause of global warming has been greenhouse gas (GHG) emissions. Warming may induce sudden shifts in regional weather patterns such as the monsoons or El Niño. Such changes would have severe consequences for water availability, flooding, and livelihoods in tropical regions, and South Asian countries are particularly at risk. The impacts result not only from gradual changes in temperature and sea level rise but also, in particular, from increased climate variability and extremes, including more intense floods, droughts, and cyclones. These changes have affected the economic performance of South Asian countries and the lives of millions of the poor. They also put at risk infrastructure, agriculture, human health, water resources, and the environment. South Asian nations have already started to face the effects of climate change.
All the nations of the sub-region are threatened by effects of climate change. A major concern in South Asia is the lack of knowledge and awareness on climate change as well as the lack of necessary resources to assess the possible impacts. There is a need for research on localized climate changes and its impacts. The focus is on promoting understanding of Climate Change, adaptation and mitigation, energy efficiency and natural resource conservation.
What we do
SAVAE is engaged in a variety of international activities to promote activities that reduce greenhouse gas emissions. SAVAE establishes partnerships, provides leadership, and shares technical expertise to support these activities.
| 0.9915
|
FineWeb
|
Researcher Academy provides free access to countless e-learning resources designed to support researchers on every step of their research journey. Browse our extensive module catalogue to uncover a world of knowledge, and earn certificates and rewards as you progress.
Going through peer review
When you’ve already invested so much time in your manuscript, it’s not always easy to hear that a reviewer thinks it needs more work. In these modules, we provide some useful advice on how to deal with those reviewer comments and keep your submission moving smoothly through the publishing process. You will learn the initial steps you should take, and the correct tone and language to use in your response letter. We also help you see your manuscript through the eyes of an editor and reviewer, so you can spot any shortfalls or mistakes before you submit.
What you will learn
- Practical advice on how to respond to reviewers
- An explanation of what reviewers are looking for
- Tips on looking at your submission with a critical eye
| 0.7301
|
FineWeb
|
Born three decades before William Shakespeare, George Gascoigne (1535-77) was one of the earliest poets to write in modern English. He also wrote the first essay on poetic meter; what could be called the first novel (A Discourse of the Adventures of Master FJ); and the first stage comedy, The Supposes, translated from the Italian of Ludovico Ariosto.
Gascoigne's sense of humor, with its metrical deadpan and droll, peculiar sense of self, still works. He sometimes uses his name in the title of his poems, as in the longish self-defense "Gascoigne's Woodmanship." In that poem, the poet explains his inadequacies as a hunter, addressing his patron in a way that relates his failures with the bow to his failures to bribe the right people at court. In a brilliant piece of rhetoric, he makes his own clumsiness a matter of ethical superiority while keeping it comical.
I first learned about Gascoigne and his best-known poem, "The Lullaby of a Lover," from my crusty, passionate, and dictatorial teacher Yvor Winters. With a chuckle, Winters first read the poem aloud, handed me a copy, and then explained that Sir Arthur Quiller-Couch, editor of The Oxford Book of English Verse, quietly omitted the fifth, next-to-last stanza of Gascoigne's poem. Did I see why? I passed Winters' test by answering that Gascoigne, in that stanza, was pretty clearly singing the lullaby to his penis—his "loving boy." With a somewhat malicious smile, Winters told me that he had asked the same question of his colleague, an eminent professor specializing in 16th-century poetry. The professor, in Winters' words, "had no idea."
I admire "The Lullaby of a Lover" for the way it sounds: The hypnotic rhythm and refrain vary enough to beguile without monotony. Part of that engaging music rises from Gascoigne's gift for personal comedy: Like Oliver Hardy dancing, he knows how to be funny and graceful, hyperbolic and earnest, laughable and grave, all at once.
"The Lullaby of a Lover"
.....Sing lullaby, as women do,
Wherewith they bring their babes to rest,
And lullaby can I sing too,
As womanly as can the best.
With lullaby they still the child,
And if I be not much beguiled,
Full many wanton babes have I,
Which must be stilled with lullaby.
.....First lullaby my youthful years,
It is now time to go to bed,
For crooked age and hoary hairs
Have won the haven [within] my head:
With Lullaby then youth be still,
With Lullaby content thy will,
Since courage quails, and comes behind,
Go sleep, and so beguile thy mind.
.....Next Lullaby my gazing eyes,
Which wonted were to glance apace.
For every Glass may now suffice,
To show the furrows in my face:
With Lullaby then wink awhile,
With Lullaby your looks beguile:
Let no fair face, nor beauty bright,
Entice you eft with vain delight.
.....And Lullaby my wanton will,
Let reasons rule, now reign thy thought,
Since all too late I find by skill,
How dear I have thy fancies bought:
With Lullaby now take thine ease,
With Lullaby thy doubts appease:
For trust to this, if thou be still,
.....My body shall obey thy will.
Eke Lullaby my loving boy,
My little Robin take thy rest,
Since age is cold, and nothing coy,
Keep close thy coin, for so is best:
With Lullaby be thou content,
With Lullaby thy lusts relent,
Let others pay which hath mo pence,
Thou art too poor for such expense.
Thus Lullaby my youth, mine eyes,
My will, my ware, and all that was,
I can no more delays devise,
But welcome pain, let pleasure pass:
With Lullaby now take your leave,
With Lullaby your dreams deceive,
And when you rise with waking eye,
Remember then this Lullaby.
| 0.9663
|
FineWeb
|
Titanium dioxide has recently been classified by the International Agency for Research on Cancer (IARC) as an IARC Group 2B carcinogen, "possibly carcinogenic to humans". Titanium dioxide accounts for 70% of the total production volume of pigments worldwide. It is widely used to provide whiteness and opacity to products such as paints, plastics, papers, inks, foods, and toothpastes. It is also used in cosmetic and skin care products, and it is present in almost every sunblock, where it helps protect the skin from ultraviolet light.
With such widespread use of titanium dioxide, it is important to understand that the IARC conclusions are based on very specific evidence. This evidence showed that high concentrations of pigment-grade (powdered) and ultrafine titanium dioxide dust caused respiratory tract cancer in rats exposed by inhalation and intratracheal instillation*. The series of biological events or steps that produce the rat lung cancers (e.g. particle deposition, impaired lung clearance, cell injury, fibrosis, mutations and ultimately cancer) have also been seen in people working in dusty environments. Therefore, the observations of cancer in animals were considered, by IARC, as relevant to people doing jobs with exposures to titanium dioxide dust. For example, titanium dioxide production workers may be exposed to high dust concentrations during packing, milling, site cleaning and maintenance, if there are insufficient dust control measures in place. However, it should be noted that the human studies conducted so far do not suggest an association between occupational exposure to titanium dioxide and an increased risk for cancer.
The Workplace Hazardous Materials Information System (WHMIS) is Canada's hazard communication standard. The WHMIS Controlled Products Regulations require that chemicals listed in Group 1 or Group 2 in the IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans be classified under WHMIS Class D2A (carcinogenic). The classification decision on titanium dioxide has been published on the IARC website and in a summary article published in The Lancet.
Representatives from Health Canada (National Office of WHMIS) recently consulted with the Quebec CSST and CCOHS (the two main agencies providing WHMIS classifications to the public) regarding the implications of the IARC decision to the WHMIS classification of titanium dioxide. It was agreed that titanium dioxide does now meet the criteria for WHMIS D2A (carcinogen) based on the information released by IARC to date, and that it is not necessary to wait for release of the full monograph.
Manufacturers and suppliers of titanium dioxide are advised to review and update their material safety data sheets and product labels based on this new information as soon as possible. Employers should review their occupational hygiene programs to ensure that exposure to titanium dioxide dust is eliminated or reduced to the minimum possible. Workers should be educated concerning this potential newly recognized risk to their health and trained in proper work procedures.
* Intratracheal administration is an exposure procedure that introduces the material directly into the lungs via the trachea, bypassing protective mechanisms in the respiratory system.
International Agency for Research on Cancer (IARC): Titanium dioxide (IARC Group 2B), Summary of data reported, Feb. 2006
Baan, R., et al. Carcinogenicity of carbon black, titanium dioxide, and talc. The Lancet Oncology. Vol. 7 (Apr. 2006). P. 295-296
Learn more about CHEMINFO (produced by CCOHS' occupational health and safety specialists). This resource provides comprehensive, practical occupational health and safety information on more than 1,300 important workplace chemicals.
| 0.7103
|
FineWeb
|
Stateline Shipping and Transport Company
Rachel Sundusky is the manager of the South-Atlantic office of the Stateline Shipping and Transport Company. She is in the process of negotiating a new shipping contract with Polychem, a company that manufactures chemicals for industrial use. Polychem wants Stateline to pick up and transport waste products from its six plants to three waste disposal sites. Rachel is very concerned about this proposed arrangement. The chemical wastes that will be hauled can be hazardous to humans and the environment if they leak. In addition, a number of towns and communities in the region where the plants are located prohibit hazardous materials from being shipped through their municipal limits. Thus, not only will the shipments have to be handled carefully and transported at reduced speeds, they will also have to traverse circuitous routes in many cases. Rachel has estimated the cost of shipping a barrel of waste from each of the six plants to each of the three waste disposal sites as shown in the following table:
Waste Disposal Site
The plants generate the following amounts of waste products each week:
Waste per Week (bbl)
The three waste disposal sites at Whitewater, Los Canos, and Duras can accommodate a maximum of 65, 80, and 105 barrels per week respectively. In addition to shipping directly from each of the six plants to one of the three waste disposal sites, Rachel is also considering using each of the plants and waste disposal sites as intermediate shipping points. Trucks would be able to drop a load at a plant or disposal site to be picked up and carried on to the final destination by another truck, and vice versa. Stateline would not incur any handling costs because Polychem has agreed to take care of all local handling of the waste materials at the plants and the waste disposal sites. In other words, the only cost Stateline incurs is the actual transportation cost. So Rachel wants to be able to consider the possibility that it may be cheaper to drop and pick up loads at intermediate points rather than ship them directly. Rachel estimates the shipping costs per barrel between each of the six plants to be as follows:
The estimated shipping cost per barrel between each of the three waste disposal sites is as follows:
Waste Disposal Site
Rachel wants to determine the shipping routes that will minimize Stateline’s total cost in order to develop a contract proposal to submit to Polychem for waste disposal. She particularly wants to know if it would be cheaper to ship directly from the plants to the waste sites or if she should drop and pick up some loads at the various plants and waste sites. Develop a model to assist Rachel and solve the model to determine the optimal routes.
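Before the full write-up, a small, hedged sketch of how such a model could be set up in Python with the PuLP library may help. Every plant name, supply figure, and per-barrel cost below is a hypothetical placeholder (the case's actual cost and supply tables are not reproduced here); only the three disposal-site capacities come from the case text.
```python
# Hedged sketch of the direct-shipping formulation for the Stateline case, using PuLP
# (pip install pulp). Plant names, supplies, and costs are HYPOTHETICAL placeholders;
# substitute the case's actual tables before relying on the result.
import pulp

plants = ["P1", "P2", "P3"]                       # hypothetical subset of the six plants
sites = ["Whitewater", "Los Canos", "Duras"]
supply = {"P1": 45, "P2": 60, "P3": 50}           # hypothetical barrels per week
capacity = {"Whitewater": 65, "Los Canos": 80, "Duras": 105}  # from the case text
cost = {                                          # hypothetical $/barrel, plant -> site
    ("P1", "Whitewater"): 12, ("P1", "Los Canos"): 15, ("P1", "Duras"): 17,
    ("P2", "Whitewater"): 14, ("P2", "Los Canos"): 9,  ("P2", "Duras"): 10,
    ("P3", "Whitewater"): 13, ("P3", "Los Canos"): 20, ("P3", "Duras"): 11,
}

model = pulp.LpProblem("stateline_direct", pulp.LpMinimize)
x = pulp.LpVariable.dicts("bbl", cost.keys(), lowBound=0)   # barrels shipped on each route

model += pulp.lpSum(cost[r] * x[r] for r in cost)           # minimize total shipping cost
for p in plants:                                            # each plant ships all its waste
    model += pulp.lpSum(x[(p, s)] for s in sites) == supply[p]
for s in sites:                                             # disposal-site capacity limits
    model += pulp.lpSum(x[(p, s)] for p in plants) <= capacity[s]

model.solve(pulp.PULP_CBC_CMD(msg=False))
for r, var in x.items():
    if var.value() and var.value() > 0:
        print(r, var.value())
print("total cost:", pulp.value(model.objective))
```
To evaluate Rachel's drop-and-pick-up option, each plant and disposal site would additionally be treated as a possible intermediate node with a flow-balance constraint (barrels flowing in equal barrels flowing out plus any local supply or disposal), using the plant-to-plant and site-to-site cost tables from the case.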
Read the "Stateline Shipping and Transport Company" Case Problem on pages 273-274 of the text. Analyze this case, as follows:
There are two deliverables for this Case Problem, the Excel spreadsheets and an accompanying written description/explanation. Please submit both of them electronically via the drop box.
| 0.8322
|
FineWeb
|
Heat sinks are devices that transfer heat away from a particular device. An air-source heat pump is one such example: while it may not transfer heat from its enclosures directly, it can effectively lower the temperature of some components of the enclosure by transferring heat to air-circulating parts or even evaporator coils. A heat sink is also a passive heat exchanger that directly transfers the heat produced by a mechanical or electronic device into a fluid medium, usually either air or a liquid coolant, where it is dissipated away from the device and into the environment.
The most common material used in heat sinks is aluminum, which is extremely effective in both cooling and heating systems and has excellent thermal conductivity. But since aluminum is somewhat heavy, it tends to get extremely hot when it comes in contact with any liquid, so liquid cooling agents were put to use in its place, like polyethylene glycol or distilled water. Today, aluminum alloys are often used for cooling as well. One important note on aluminum is that very little metal gets incorporated in the alloy, which makes aluminum a poor choice for high-performance cooling systems because all the metal gets incorporated into the structure. So instead of aluminum, you may want to opt for stainless steel heat sinks, which have excellent thermal conductivity and excellent resistance to corrosion.
One of the great things about heat sinks is that they save a lot of money. They don’t just keep your system cool – they also keep your utility bills down, because they help with reducing convection in your house. Convection is where heated air rises while colder air sinks to the floor, which is often a problem in houses with basements that aren’t vented properly. The problem with the floor-flow problem is that it causes heat leaks, so that the hotter air rises and the cooler air sinks, causing your utility bill to go up. Heat sinks prevent the rise of hot air from ceiling to floor, which means that your residence will stay much cooler, even when it’s a bit warmer outside.
| 0.7448
|
FineWeb
|
In the region of Tabgha, on the northern shore of the Sea of Galilee, lies the Church of the Multiplication of Loaves and Fishes, the traditional site of the food multiplication story found in all four gospels (Matthew 14:13-21, Mark 6:31-44, Luke 9:10-17, John 6:5-15). It is also where Jesus appeared to his disciples after his resurrection (John 21:1-17).
The church is most famous for a mosaic of loaves and fishes from the original mid-third century church. The church was expanded in the fifth century, but soon after destroyed by the Persians when they invaded in 614. The Byzantine structures and mosaics were excavated in the 1930s by a German team. In 1982, the current reconstruction was added. The original mosaics depict water birds and plants, ecology of the marshy swamps typical of the area historically.
The name Tabgha is a variation on its ancient Greek name, Heptapegon, meaning “seven springs.” Six of these springs have been identified in modern times, including one known as “Job’s Spring.”
Fourth-century pilgrim Egeria’s account of visiting Tabgha:
“In the same place (not far from Capernaum) facing the Sea of Galilee is a well watered land in which lush grasses grow, with numerous trees and palms. Nearby are seven springs which provide abundant water. In this fruitful garden Jesus fed five thousand people with five loaves of bread and two fish.”
| 0.6139
|
FineWeb
|
Poverty and Food Security
Assignment 2: Poverty and Food Security
The members of the United Nations appreciated the content you provided on population growth in your last assignment. Now they are asking you to expand your whitepaper to include global food security as it relates to population growth and poverty.
To accomplish this task:
· Read the Case Study.
· Provide a written paper based on the Assignment Questions/Instructions.
We can view global food security as the effort to build food systems that can feed everyone, everywhere, and every day by improving food quality and promoting nutritional agriculture. That said, there are certain practices that can advance this project:
1. Identifying the underlying causes of hunger and malnutrition
2. Investing in country-specific recovery plans
3. Strengthening strategic coordination with institutions like the UN and the World Bank
4. Developed countries making sustained financial commitments to the success of the project
We must bear in mind that more than three billion people, nearly one-half of the global population, subsist on as little as $2.50 a day and that nearly 1.5 billion are living in extreme poverty on less than $1.25 a day. According to the World Health Organization, the United Nations, and other relief agencies, about 20,000 people (mostly children) starve to death in the world every day, for a total of about seven million people a year.
In addition, about 750 million (twice the population of the United States) do not have access to clean drinking water, meaning that some one million people die every year from diarrhea caused by water-borne diseases.
The population of Earth is expected to grow from 7 billion in 2010 to 8 billion in 2025, 9 billion in 2040, and 11 billion by the end of the 21st century. If the demand for food is predicted to grow by 50% by 2030 and 70% by 2050, the real problem is not necessarily growing that much food. Rather, it is making that amount available to people.
Moreover, foodborne illnesses are prevalent, with nearly 600 million reported cases of foodborne diseases each year. These affect mainly children, but also negatively impact the livelihood of farmers, vendors, trade associations and, ultimately, the Gross Domestic Product (national income) of a country. These issues can impose tremendous human, economic, social, and fiscal costs on countries. Addressing them allows governments to devote more resources to making desperately needed improvements in infrastructure that raise the quality of life for everyone.
It is not enough to have adequate supplies of food available. Policies that focus exclusively on food production can exacerbate the problem, particularly if, to satisfy the need for quantity, the quality of the food is left wanting.
Reasons for Food Insecurity
Certainly, poverty and the systemic internal conditions creating it inside a country are the unmistakable driving factors behind keeping adequate food resources from reaching people. It is only one factor of several, however. Others include the following:
Inadequate Food Distribution: The reality is that there is more than enough food in the world to feed its people. The primary cause of famines is not poor weather conditions as much as it is getting the needed amount of food to the people who need it most. Quite often causes result from political instability and poor infrastructure, often involving a country’s port facilities, transportation availability and quality of road networks. Paradoxically, although the population is going to increase in the coming decades, the amount of food potentially available will increase along with it. This is due mostly to advances in bio agricultural engineering and increased seed immunity to molds.
Writing in the late 18th century, Thomas Malthus warned that global population would exceed the capacity of Earth to grow food, in that while population would grow exponentially, food production would grow only arithmetically. Although this theory has been proven invalid, the unfortunate result of its propagation has been for some governments to rationalize political choices that avoid helping the poverty-ridden and starving.
Political-Agricultural Practices: The widespread use of microbiological, chemical, and other forms of pesticides in food continues to be a serious issue throughout the global food chain. Widespread use of fertilizers also causes illness in millions of people every year, not only from the food itself, but from run-off into streams and rivers, contaminating entire water supplies. The human, social, and economic costs of such practices impede improvements being made not only in the raising of crops, but in their distribution. Added to this, the rising demand in developed countries for biofuels, currently refined mostly from corn and soy beans, reduces the amount of arable land devoted to producing food.
The failure of many farmers in the developing world to rotate their crops harms the replenishing of nutrients necessary to continue growing crops. In addition, the repeated use of agricultural land without allowing it to lie fallow in order to replenish needed soil nutrients thereby increasing fertility and maximizing crop yield results in reduced agricultural output and insufficient crop yields.
Economic Issues: The fact is, government policies that focus on growing cash crops, for example, are designed solely to export them to earn foreign exchange. This may be fine for the government in its efforts to earn money, but the result is that farmers end up growing for foreign markets and not domestic ones. This leads to shortages of necessary staples. Consequently, the poorest of the population are frozen out of the local markets because they cannot afford the food that remains to be sold.
Civil Strife: Civil war can interrupt the flow of food from gathering depots, such as ports, to distribution centers where it can be handed out to people. During the 1990s, Somalia was particularly hard hit by their civil war, as clans fought for control of the main port at Mogadishu. This affected the flow of food to the rest of the population. In this case, as with many civil wars, whoever controls the supply of food controls the country. In failed and failing states like Zimbabwe, Democratic Republic of Congo, Haiti, South Sudan, Yemen, and Libya, food very often is another weapon used by one segment of the population against another.
For a good overview of food security in general, see Peter Timmer, Food Security and Scarcity: Why Ending Hunger Is So Hard, Foreign Affairs, May/June 2015, Reviewed by Richard N. Cooper.
World Population Prospects, United Nations Population Division, 2017.
Will Martin, Food Security and Poverty: A Precarious Balance, The World Bank, (Blog, Let’s Talk Development), November 5, 2010.
The purpose of this paper is to explore this topic. The issue is not the lack of food in the world, but the access to food. Simply put, food is not getting to where it needs to be in time. In developing or under-developed countries, the food shortage is due to governmental control over food. These governments maintain their control and preference for certain groups by limiting access of nutritious food to certain other groups. The result is the weaponizing of food.
The goal is to write a minimum of four pages assessing the impact, citing at least five credible sources in your research. Refer specifically to the role these issues have had in the under-developed or developing country of your choice. Remember to pick one (1) under-developed or developing country to use for your paper.
This assessment must include:
- A cover page with your name, title of the course, date, and the name of your instructor. (first page)
- A one-half page introduction (1/2 page)
- A middle section (or body) that is numbered and divided into three one-page sections. Each of these one-page sections should answer each of the following questions (at least 3 pages):
o Section 1: What is food insecurity and what role does population growth play in it?
o Section 2: What factors specifically interrupt the flow of food from the source to the people in the developing country you selected?
o Section 3: What forms of technology can be used to reduce hunger and improve food security? Explain how these technical solutions can do that.
· A one-half page conclusion. (1/2 page)
· Cite at least five credible sources excluding Wikipedia, dictionaries, and encyclopedias for your assessment. (last page)
A brief list of resources for this assignment is shown at the end of the Course Guide found in Blackboard at the Course Info tab.
This course assignment requires use of the new Strayer Writing Standards (SWS). The format is different than many other Strayer University courses. Please take a moment to review the SWS documentation (see the left-panel of your Blackboard page) for details. (Note: You will be prompted to enter your Blackboard login credentials to view these standards.)
| 0.8688
|
FineWeb
|
The purpose of coating carbide inserts is to enhance their performance, durability, and cutting capabilities. Coatings provide several benefits that contribute to the overall effectiveness of carbide inserts. Here are the main objectives of coating carbide inserts:
Improved Wear Resistance:
Coatings increase the hardness of the carbide inserts, making them more resistant to wear. This allows the inserts to withstand the high forces and temperatures generated during cutting operations, resulting in longer tool life and reduced tool replacement frequency.
Reduced Friction:
Coatings reduce the friction between the carbide inserts and the workpiece, resulting in smoother cutting action. This minimizes heat generation and prevents the build-up of chips on the cutting edge, leading to improved chip evacuation and better surface finish.
Enhanced Heat Resistance:
Coatings provide thermal stability to the carbide inserts, allowing them to withstand high temperatures without losing their hardness or deforming. This enables higher cutting speeds and feeds, increasing productivity and efficiency.
Improved Cutting Speed and Feed Rates:
Coated carbide inserts can withstand higher cutting speeds and feed rates due to their increased hardness and reduced friction. This results in faster machining times and higher productivity.
Extended Tool Life:
By improving wear resistance, reducing friction, and enhancing heat resistance, coatings significantly extend the tool life of carbide inserts. This reduces the frequency of tool changes, increases production uptime, and lowers tooling costs.
Enhanced Performance in Specific Applications:
Different coatings are designed to excel in specific machining applications or materials. For example, some coatings are optimized for high-speed machining, while others are suitable for cutting high-hardness or high-temperature alloys. Coatings can be tailored to specific requirements to achieve optimal performance.
Overall, the purpose of coating carbide inserts is to improve their performance, increase their durability, and optimize their cutting capabilities. Coatings enhance wear resistance, reduce friction, improve heat resistance, extend tool life, and enable better performance in various machining applications.
Contact person: Steve Lee
E-mail: [email protected]
Address: Floor 4,Building NO.15,Zhichuang Plaza,NO.1299,Liyu Road,Tianyuan District,Zhuzhou City, Hunan, P.R. CHINA
| 0.9984
|
FineWeb
|
Not just any general linear group $\mathrm{GL}(V)$ for any vector space $V$, but the particular groups $\mathrm{GL}_n(\mathbb{F})$. I can’t put LaTeX, or even HTML subscripts in post titles, so this will have to do.
The general linear group $\mathrm{GL}_n(\mathbb{F})$ is the automorphism group of the vector space $\mathbb{F}^n$ of $n$-tuples of elements of $\mathbb{F}$. That is, it’s the group of all invertible linear transformations sending this vector space to itself. The vector space $\mathbb{F}^n$ comes equipped with a basis $\{e_i\}$, where $e_i$ has a $1$ in the $i$th place, and $0$ elsewhere. And so we can write any such transformation as an $n\times n$ matrix.
Let’s look at the matrix of some invertible transformation $T$:
$$\begin{pmatrix}t_{11}&t_{12}&\cdots&t_{1n}\\t_{21}&t_{22}&\cdots&t_{2n}\\\vdots&\vdots&\ddots&\vdots\\t_{n1}&t_{n2}&\cdots&t_{nn}\end{pmatrix}$$
How does it act on a basis element? Well, let’s consider its action on $e_1$:
$$T(e_1)=t_{11}e_1+t_{21}e_2+\cdots+t_{n1}e_n$$
It just reads off the first column of the matrix of $T$. Similarly, $T(e_j)$ will read off the $j$th column of the matrix of $T$. This works for any linear endomorphism of $\mathbb{F}^n$: its columns are the images of the standard basis vectors. But as we said last time, an invertible transformation must send a basis to another basis. So the columns of the matrix of $T$ must form a basis for $\mathbb{F}^n$.
Checking that they’re a basis turns out to be made a little easier by the special case we’re in. The vector space $\mathbb{F}^n$ has dimension $n$, and we’ve got $n$ column vectors to consider. If all $n$ are linearly independent, then the column rank of the matrix is $n$. Then the dimension of the image of $T$ is $n$, and thus $T$ is surjective.
On the other hand, any vector $T(v)$ in the image of $T$ is a linear combination of the columns of the matrix of $T$ (use the components of $v$ as coefficients). If these columns are linearly independent, then the only combination adding up to the zero vector has all coefficients equal to $0$. And so $T(v)=0$ implies $v=0$, and $T$ is injective.
Thus we only need to check that the columns of the matrix of $T$ are linearly independent to know that $T$ is invertible.
Conversely, say we’re given a list of $n$ linearly independent vectors $f_1,\dots,f_n$ in $\mathbb{F}^n$. They must be a basis, since any linearly independent set can be completed to a basis, and a basis of $\mathbb{F}^n$ must have exactly $n$ elements, which we already have. Then we can use the $f_j$ as the columns of a matrix. The corresponding transformation $T$ has $T(e_j)=f_j$, and extends from there by linearity. It sends a basis to a basis, and so must be invertible.
The upshot is that we can consider this group as a group of $n\times n$ matrices. They are exactly the ones so that the set of $n$ columns is linearly independent.
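A small numerical illustration (a sketch added here, not part of the original post): with NumPy one can check invertibility exactly as described, by testing whether the column rank equals $n$.
```python
# Sketch: invertibility of a square matrix <=> its n columns are linearly independent.
import numpy as np

def is_invertible(A: np.ndarray) -> bool:
    """A square matrix is invertible iff its column rank equals its size."""
    n, m = A.shape
    assert n == m, "only square matrices represent endomorphisms of F^n"
    return np.linalg.matrix_rank(A) == n

A = np.array([[1.0, 2.0], [3.0, 4.0]])  # columns are independent, so invertible
B = np.array([[1.0, 2.0], [2.0, 4.0]])  # second column is twice the first, so singular

print(is_invertible(A))  # True
print(is_invertible(B))  # False
# The columns of A are the images of the standard basis vectors e_1, e_2 under
# the corresponding linear map, matching the discussion above.
```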
| 0.9401
|
FineWeb
|
Richard R Johnson
Introduction to the discipline of history for new or prospective majors. Emphasizes the basic skills of reading, analysis, and communication (both verbal and written) that are central to the historian's craft. Each seminar discusses a different subject or problem.
Title of Course: Comparative Empires in Early Modern North America
What students can expect to learn from the course. HST 388 is designed as an intensive introduction to the study of history for students who have recently declared their intent to be history majors. Students will receive a training in several of the basic techniques of historical analysis. Among these will be the discovery, assessment, and use of source materials; an understanding of how historians work by means of reading and discussing a selection of assigned readings capped by the writing of a comparative book analysis; and the opportunity for students to plan, research, and write their own historical essays, all in close collaboration with the teacher and other students in a seminar-style class format. The skills of research and of oral and written analysis that are fostered by the class are expected to contribute significantly to the students' subsequent success as history majors. Each 388 centers on a different historical problem: this one takes as its subject Comparative Empires in Early Modern North America, ranging from the Aztec and Iroquois confederacies encountered by arriving Europeans to the Spanish, French, Dutch, English and eventually American empires created by 1800. It places a special emphasis on learning the value of a comparative approach to historical study.
Student learning goals
See description of the course's coverage and its various learning goals outlined above
General method of instruction
Twice-weekly seminar-style meetings: discussion of assigned readings; occasional student presentations
RECOMMENDED PREPARATION : Recommended preparation for success in the course. Some knowledge of early American history (as through having taken courses in early modern Indian, Hispanic, Canadian, or British American history) would be useful but is not a prerequisite. More important is a desire to learn the skills of being an historian, and a readiness to attend every class and commit to at least twelve hours of study a week outside class in preparation for the class and its assignments.
Class assignments and grading
The class will consist of two seminar-style 90-minute meetings a week: it will involve discussion of the assigned readings and student presentations. Assignments will consist of a commitment to regular attendance in class, digesting substantial weekly readings in primary and secondary sources, and the preparation of three mid-sized (4-5 page) papers plus several short (1-page) papers and oral reports. This is a W-course, with a consequent emphasis upon writing assignments. Class readings include primary and secondary accounts of: Aztec and Iroquois history; French and Spanish missionary writings; Dutch and English cartography; the conduct of European and Indian warfare (as assessing Ian Steele's Warpaths and Jill Lepore's The Name of War); the settlement of Virginia and French Canada; and the processes of state-formation and imperial policy making.
25% for each of the two longer papers, 25% for short papers and reports, remainder for in-class performance. A student's work will be judged according to the strength, clarity, and concision of its arguments, its capacity to employ and analyze the appropriate course materials, and the relevance of its response to its chosen topic.
| 0.845
|
FineWeb
|
SD-5000 Vacuum Inert Loop Spray Dryer, suitable for heat sensitive organic solvent
Suitable for heat sensitive water-based and organic solvent to be dry in low temperature, included inert loop system, solvent recovery system, cooling system and vacuum system.
The SD-5000 vacuum low-temperature spray dryer is an enhanced version of the SD-500 desktop spray dryer. The SD-500 is popular with researchers because of its small size and convenient operation. On the basis of the SD-500, we have developed a vacuum low-temperature spray dryer and an organic solvent spray dryer. After purchasing the SD-500, users only need to add the vacuum components or the nitrogen circulation system to form a vacuum low-temperature spray dryer or an organic solvent spray dryer. The connection is very convenient, which saves experimental funds and laboratory space and lets users purchase on demand.
The problem of rapidly drying heat-sensitive materials has long troubled many researchers. Ordinary vacuum drying and spray drying can seriously damage the biological activity or structure of the material, while freeze-drying takes a long time, has low energy efficiency, and leaves the dried material agglomerated so that it needs to be crushed a second time. With the SD-5000 laboratory vacuum low-temperature spray dryer, ordinary spray drying requires only the main body of the spray dryer, with the vacuum components left off. When a material must be dried at a low temperature below 60°C, the vacuum components can be switched on so that the material dries rapidly at an inlet air temperature of 55°C, avoiding damage to the material's activity or structure during drying. This provides an extremely convenient and safe drying method for heat-sensitive materials such as enzyme preparations, biological products, extracts of natural products of traditional Chinese medicine with high sugar content, heat-resistant polymer materials, materials that vaporize when heated, and so on.
- SD-5000 laboratory vacuum low temperature spray dryer is divided into a desktop spray dryer main unit and vacuum components. The modular design facilitates the switching between high temperature spray drying and vacuum low temperature spray drying;
- The spray head is a concentric spray head, which ensures there is no eccentricity during spraying so that the spray is not directed at the bottle wall. After installation the spray head can move up and down, making it easy to adjust the spray position and improve the spray drying effect;
- SD-5000 laboratory vacuum low temperature spray dryer uses a color touch screen operation, real-time regulation of PID constant temperature control, so that the temperature control in the full temperature zone is accurate, the heating temperature control accuracy is ±1℃, and the feed rate of the peristaltic pump can be adjusted at any time.
- The entire experimental process of spray drying is completed in a vacuum environment, which greatly reduces the material drying temperature and solves the problem of spray drying of heat-sensitive materials;
- The atomization structure of the two-fluid spray is made of high-quality stainless steel, with a compact design that needs no auxiliary equipment, is easy to use, and stays as good as new.
- Instant spray drying is completed under low-temperature conditions (as low as 50°C), and the moisture content after drying is less than 1%. Under such drying conditions, substances that are easily oxidized, volatile, or heat sensitive can maintain their chemical structure and biological activity well;
Specification highlights for the SD-5000 Vacuum Inert Loop Spray Dryer: two-fluid concentric spray head in SUS 316L stainless steel that stays centered during atomization and can be moved up and down; nozzle diameter 0.7 mm standard (0.5/0.75/1.0/1.5/2.0 mm available); drying chamber in SUS 304 stainless steel (SUS 304 is also listed for two further components); 7-inch LCD display with USB interface. The full specification table also lists the rated drying capacity, minimum inlet air temperature, particle size after drying, maximum feed volume, minimum material handling capacity, inlet temperature range, and heating temperature accuracy.
| 0.7706
|
FineWeb
|
Tsunade is a fictional character from Naruto. She is the Fifth Hokage of Konohagakure, taking over after Hiruzen Sarutobi died. Although Tsunade is over fifty years old, she maintains a young appearance through a constant Transformation Technique. She has fair skin and brown eyes. Tsunade's long, blonde hair is tied into two ponytails using bands in the same colour as the seal on her forehead. She often wears a grass-green robe with the kanji for gamble written in black on the back, inside a red circle. Underneath she wears a gray, kimono-style blouse with no sleeves, held closed by a broad, dark bluish-grey obi that matches her trousers. Her blouse is closed quite low, revealing her sizeable cleavage. This character is popular among cosplayers. It can be portrayed by wearing cosplay Tsunade costume and cosplay Tsunade wig. This character is always present in a Naruto cosplay.
| 0.5052
|
FineWeb
|
Man has a soul and physical body, each of which is subject to its own pleasures and diseases. What harms the body is sickness, and that which gives it pleasure lies in its well-being, health and whatever is in harmony with its nature. The science that deals with the health and the maladies of the body is the science of medicine.
The diseases of the soul constitute evil habits and submission to lusts that degrade man doom to the level of beasts. The pleasures of the 'soul are moral and ethical virtues which elevate man and move him closer to perfection and wisdom bringing him close to God. The study that deals with such matters is the science of ethics ('ilm al-akhlaq).
Before we commence a discussion of the main topics of our subject, we must prove that the soul of man is incorporeal, possesses an existence independent of the body, and is immaterial. In order to prove this, a number of arguments have been set forth amongst which we can mention the following:
1. One of the characteristics of bodies is that whenever new forms and shapes are imposed upon them, they renounce and abandon their previous forms or shapes. In the human soul, however, new forms, whether of the sensible or of the intellectual nature, enter continuously without wiping out the previously existing forms. In fact, the more impressions and intellectual forms enter the mind, the stronger does the soul become.
2. When three elements of colour, smell, and taste, appear in an object, it is transformed. The human soul however, perceives all of these conditions without being materially affected by them.
3. The pleasures that man experiences from intellectual cognition can belong only to the soul, since man's body plays no role in it.
4. Abstract forms and concepts which are perceived by the mind, are undoubtedly non-material and indivisible. Accordingly, their vehicle, which is the soul, must also be indivisible, and therefore immaterial.
5. The physical faculties of man receive their input through the senses, while the human soul perceives certain things without the help of the senses. Among the things that the human soul comprehends without relying on the senses, are the law of contradiction, the idea that the whole is always greater than one of its parts, and other such universal principles.
The negation of the errors made by the senses on the part of the soul, such as optical illusions, is done with the aid of these abstract concepts, even though the raw material required for making corrections is provided by the senses.
Now that the independent existence of the soul has been proved, let us see what are the things responsible for its well-being and delight, and what are the things that make it sick and unhappy. The health and perfection of the soul lies in its grasp of the real nature of things, and this understanding can liberate it from the narrow prison of lust and greed and all other fetters which inhibit its evolution and edification towards that ultimate stage of human perfection which lies in man's nearness to God. This is the goal of `speculative wisdom' (al-hikmat al-nadariyyah).
At the same time, the human soul must purge itself of any evil habits and traits it may have, and replace them with ethical and virtuous modes of thought and conduct. This is the goal of `practical wisdom' (al-hikmat al-`amaliyyah). Speculative and practical wisdom are related like matter and form; they cannot exist without each other.
As a matter of principle, the term "philosophy" refers to `speculative wisdom' and "ethics" refers to `.practical wisdom'. A man who has mastered both speculative wisdom and practical wisdom is a microcosmic mirror of the larger universe: the macrocosm.
| 0.858
|
FineWeb
|
We can use Python to slice a dictionary to get the key and value pairs we want. Dictionary comprehension is a way to slice a dictionary. A dictionary is a collection of key and value pairs.
- How do you slice a dictionary in a list Python?
- What can be sliced in Python?
- Can you modify a dictionary in Python?
- Is Slicing allowed in dictionary?
- What is the difference between indexing and slicing?
- Which data type is not supported in Python?
- Can you slice a string in Python?
- Can dictionaries be modified?
- Is dictionary mutable in Python?
- Are dictionaries mutable?
- Can we convert dictionary to list in Python?
- Are nested dictionaries bad?
- What does Unhashable type slice mean?
- What does KEYS () do in Python?
How do you slice a dictionary in a list Python?
A common way to "slice a dictionary in a list" is to trim each list value in the dictionary down to its first K elements. A dictionary comprehension over a sample dictionary (often called test_dict in tutorials) does this, as shown below.
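A minimal sketch of that pattern (the dictionary contents and the names test_dict and K are illustrative):
```python
# Sketch: slice every list value in a dictionary down to its first K elements.
test_dict = {"gfg": [1, 2, 3, 4], "is": [5, 6, 7], "best": [8, 9, 10, 11, 12]}
K = 2

sliced = {key: value[:K] for key, value in test_dict.items()}
print(sliced)  # {'gfg': [1, 2], 'is': [5, 6], 'best': [8, 9]}
```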
What can be sliced in Python?
Slicing works on sequence types such as lists, strings, tuples, ranges, and byte sequences. Slices can also be used to modify or remove items from mutable sequences such as lists, and third-party objects (for example NumPy arrays) can define their own slicing behavior.
Can you modify a dictionary in Python?
Yes. Modifying a dictionary entry is similar to modifying a list element: you give the name of the dictionary and the key in square brackets, then assign the new value.
Is Slicing allowed in dictionary?
Dictionaries do not support slice notation directly, but a subset of key-value pairs can be obtained by "slicing" a dictionary with a list of required keys, pulling just those values out of the dictionary. In this article, we are going to learn how to slice a dictionary using Python.
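A short sketch of two common approaches (the dictionary contents are illustrative):
```python
# Sketch: two ways to take a "slice" of a dictionary.
from itertools import islice

d = {"a": 1, "b": 2, "c": 3, "d": 4}

# 1) Keep only a chosen subset of keys.
wanted = ["a", "c"]
subset = {k: d[k] for k in wanted if k in d}
print(subset)  # {'a': 1, 'c': 3}

# 2) Keep the first N items in insertion order (dicts preserve order in Python 3.7+).
first_two = dict(islice(d.items(), 2))
print(first_two)  # {'a': 1, 'b': 2}
```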
What is the difference between indexing and slicing?
Indexing retrieves a single element from a sequence by its position, while slicing retrieves a subset (a contiguous range) of elements.
Which data type is not supported in Python?
Python's built-in data types include numbers (int, float, complex) and strings (str). There is no separate character type: a single character is simply a string of length one.
Can you slice a string in Python?
Yes. A string can be sliced to obtain a substring. Slicing creates a new string from the source string; the original string is not changed, because strings are immutable.
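A quick illustrative sketch:
```python
# Sketch: slicing a string produces a new substring and leaves the original intact.
s = "dictionary"
print(s[0:4])   # 'dict'
print(s[-4:])   # 'nary'
print(s[::2])   # 'dcinr' (every second character)
print(s)        # 'dictionary' (unchanged)
```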
Can dictionaries be modified?
Yes. A dictionary is a mutable object, so it can be changed inside functions as well, but it has to be defined first.
Is dictionary mutable in Python?
Yes, the dictionary data structure is mutable, like lists and sets. (Tuples, by contrast, are immutable.)
Are dictionaries mutable?
It is possible to add, remove, and change entries at any time. It is not possible to have two entries with the same key because entries are accessed by their key.
Can we convert dictionary to list in Python?
Yes. The dictionary class provides three view methods: items() returns key-value pairs, keys() returns keys only, and values() returns values only. The built-in list() function can be used to convert any of these view objects into a list.
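A small sketch of those conversions (the dictionary contents are illustrative):
```python
# Sketch: converting a dictionary's views to lists with the built-in list().
d = {"a": 1, "b": 2, "c": 3}

print(list(d.keys()))    # ['a', 'b', 'c']
print(list(d.values()))  # [1, 2, 3]
print(list(d.items()))   # [('a', 1), ('b', 2), ('c', 3)]
```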
Are nested dictionaries bad?
Nested dicts are not inherently wrong; a dictionary value can be any object, including another dictionary. A lot of the time, though, the problems people solve with nested dicts can be solved more simply with a single dict that uses tuples as keys.
What does Unhashable type slice mean?
The "TypeError: unhashable type: 'slice'" error is raised when you try to access items from a dictionary using slice notation. To solve the error, refer to the specific keys you want to access from the dictionary (or build a new dictionary from those keys) instead of slicing.
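A brief sketch of the error and one way around it (the dictionary contents are illustrative):
```python
# Sketch: why slicing a dict raises TypeError, and one workaround.
d = {0: "a", 1: "b", 2: "c", 3: "d"}

try:
    d[0:2]                      # slice notation is not a hashable key
except TypeError as exc:
    print(exc)                  # unhashable type: 'slice'

# Refer to the keys you actually want instead:
print({k: d[k] for k in (0, 1)})  # {0: 'a', 1: 'b'}
```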
What does KEYS () do in Python?
The keys() method retrieves all of the keys from the dictionary. Keys must be unique and of a hashable (immutable) type such as a string, number, or tuple; in a dictionary literal, each key is separated from its value by a colon.
| 0.8931
|
FineWeb
|
Lattice Algorithms for Compression Color Space Estimation in JPEG Images
de Queiroz, Ricardo
Baraniuk, Richard G.
DCT coefficients; compression
JPEG (Joint Photographic Experts Group) is an international standard for compressing and storing digital color images. Given a color image that was previously JPEG-compressed in some hidden color space, we aim to estimate this unknown compression color space from the image. This knowledge is potentially useful for color image enhancement and JPEG re-compression. JPEG operates on the discrete cosine transform (DCT) coefficients of each color plane independently during compression. Consequently, the DCT coefficients of the color image conform to a lattice structure. We exploit this special geometry using the lattice reduction algorithm from number theory and cryptography to estimate the compression color space. Simulations verify that the proposed algorithm yields accurate compression color space estimates.
| 0.9987
|
FineWeb
|
The May 1, 1955 date was especially significant in Austrian history because it was the tenth anniversary of the reinstatement of the 1929 Austrian Constitution, the one that had been discarded by Chancellor Dollfuss on May 1, 1934, following the civil war that destroyed the Social Democratic Party and created a one-party Austro-fascist state.
Also, May 1, 1955 had to be a time of optimism for Vienna and Austria because it was just two weeks before the signing of the Austrian State Treaty whereby the U.S., Great Britain, France, and the Soviet Union returned sovereignty to the Austrian state and ended their occupation of the country.
Most of the pictures of the parade were taken at around 10:15 in the morning at or near Schottentor (see the clock in the picture below). The first shows a float that is a globe with the words "World Holiday of Labor, 1 May." The float is built on a platform of two bicycles, and you can see the legs of the two people inside the globe moving it forward. At the rear right of the float is a man holding a flag, but I cannot tell what it represents. To the rear left is a banner whose words are mostly blocked by the float. It appears that a pretty good crowd is observing the parade.
|Float at Vienna May Day Parade on May 1, 1955|
|Another View of the Vienna May Day Float, near Schottentor|
|Marching Band in 1955 May Day Parade in Vienna|
|Float Celebrating Ten Years of Rebuilding Austria's Second Republic, With a Protest Message|
You can see the first float, plus some large crowds, and the Burgtheater in the background.
|The 1955 May Day Parade Ending at the Vienna City Hall|
No doubt speeches followed the parade.
| 0.6376
|
FineWeb
|
Samantha Goode, a senior undergraduate psychology major working in Dr. Charles Emery's Cardiopulmonary Behavioral Medicine lab, was awarded Honorable Mention in the Three-Minute Thesis competition sponsored by the Office of Undergraduate Research and Creative Inquiry. The goal of the Three-Minute Thesis competition is for students to share their research using lay terms and only one static slide in three minutes or less.
Obesity prevalence continues to rise in the U.S., so for her senior thesis Samantha wanted to address this growing public health concern. She looked at two factors related to physical activity: exercise self-efficacy (confidence that you'll be able to perform exercise in different situations) and anxiety sensitivity (fear of the physical arousal brought on by anxiety). Exercise self-efficacy is associated with more physical activity, while anxiety sensitivity is associated with less physical activity, because the physical arousal we feel during exercise is similar to the physical arousal of anxiety. Samantha hypothesized that her findings would replicate these associations and that exercise self-efficacy would influence the relationship between anxiety sensitivity and exercise, such that the strength of this relationship would differ at different levels of exercise self-efficacy.
Contrary to expectations, exercise self-efficacy did not influence the relationship between anxiety sensitivity and exercise activity. However, as she hypothesized, the more social anxiety sensitivity an individual experienced, the less moderate-intensity exercise, and total exercise, they participated in.
Thus, it may be that physical activity among individuals with obesity is associated with perceptions of negative social judgments from others.
| 0.8766
|
FineWeb
|
When you woke up this morning, what did you do first?
Did you hop in the shower, check your email or grab a doughnut from the kitchen counter?
Did you brush your teeth before or after you toweled off?
Which route did you drive to work?
When you got home, did you put on your sneakers and go for a run, or pour yourself a drink and eat dinner in front of the TV?
In 1892, the famous psychologist William James wrote, "All our life, so far as it has definite form, is but a mass of habits," — I absolutely love that because it's absolutely true: most of the choices we make each day may feel like the products of well-considered decision making, but they're not. They're habits.
And though each habit means relatively little on its own, over time, the meals we order, whether we save or spend, how often we exercise, and the way we organize our thoughts and work routines have enormous impacts on our health, productivity, financial security and happiness. One paper published by a Duke University researcher in 2006 found that more than 40 percent of the actions people performed each day weren’t actual decisions, but habits.
Habits, by definition, are choices that we all make deliberately at some point—and then stop thinking about but continue doing, often every day. At one point, we all consciously decided how much to eat and what to focus on when we got to the office, how often to have a drink or when to go for a jog. But then we stopped making a choice, and the behavior became automatic. It’s a natural consequence of our neurology. And by understanding how it happens, you can rebuild those patterns in whichever way you choose.
This episode/article combo was inspired by a book called “The Power of Habit” by Charles Duhigg. (If you’d like the full book summary for this book, you can pick it up here) For now though, we’ll focus on a few of my favorite big ideas + actionable insights from the book…
Here’s what you’ll discover in this episode/article:
“Habits, scientists say, emerge because the brain is constantly looking for ways to save effort. Left to its own devices, the brain will try to make almost any routine into a habit, because habits allow our minds to ramp down more often. This effort-saving instinct is a huge advantage. An efficient brain also allows us to stop thinking constantly about basic behaviors, such as walking and choosing what to eat, so we can devote mental energy to inventing spears, irrigation systems, and, eventually, airplanes and video games.”
In “The Power of Habit” Duhigg describes a series of experiments run by researchers at MIT on the science of habit formation.
The researchers were running these experiments on groups of rats—dropping them into mazes and making them sniff around for a piece of chocolate placed at the end.
They wanted to monitor brain activity in the rats as they moved about the maze, so they inserted super-tiny micro-sensors in their brains. This, helped the researchers determine which parts of the brain would light up when the rats were running through the maze—which would help them understand how the brain forms habits.
So, they begin the experiment…
And at first, it seemed like the rats weren’t really doing anything interesting at all. They’d start at the beginning of the maze, sniff around, scratch the walls a bit, and randomly pause every now and then before moving through the maze again.
But then the researchers noticed something big: each time the rats moved from one end of the maze to the other; they sniffed around a little less, scratched the walls a little less, and paused a little less—thus, moving through the maze faster and faster with each run.
After running the rats through the maze several times, they learned that mental activity decreased in the rats with each successful navigation through the maze. As the route became more and more automatic, the rats were actually thinking less about how to get through the maze… No more sniffing, scratching, and pausing necessary. Now, they could speed from start to finish while hardly thinking at all.
The researchers found that this automaticity in the rats relied on a part of the brain called the basal ganglia, which took over as the rat ran faster and faster and its brain worked less and less.
The basal ganglia was central to recalling patterns and acting on them. In other words, it's responsible for storing habits even while the rest of the brain falls asleep.
And your brain works the same way.
This process is called “chunking,” and it plays a primary role in how habits form. With “chunking,” the brain converts a sequence of actions—like brushing your teeth, tying your shoes, or backing your car into the garage—into an automatic routine.
Bottom line? Habits emerge because our brains are always on the lookout for efficient ways to save effort.
So, how do we form habits, then?
“This process within our brains is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future.”
If you want to create new habits of any kind, keep the following formula in mind:
CUE + ROUTINE + REWARD = HABIT
New habits depend on this three-step loop:
1. The cue—a trigger for your brain that tells it which habit to use.
2. The routine—how a habit influences what you do, think, or feel.
3. The reward—which helps us determine how valuable the habit is, and whether it's worth remembering or not.
Now let’s talk about how you can use this habit loop to develop better habits within your own life…
“If you want to start running each morning, it’s essential that you choose a simple cue (like always lacing up your sneakers before breakfast or leaving your running clothes next to your bed) and a clear reward (such as a midday treat, a sense of accomplishment from recording your miles, or the endorphin rush you get from a jog). But countless studies have shown that a cue and a reward, on their own, aren’t enough for a new habit to last. Only when your brain starts expecting the reward—craving the endorphins or sense of accomplishment—will it become automatic to lace up your jogging shoes each morning. The cue, in addition to triggering a routine, must also trigger a craving for the reward to come.”
The key to creating habits is based on a simple formula any one of us can adopt. Let's say you want to create the habit of working out first thing in the morning: choose a simple cue (like laying your workout clothes next to your bed), follow it with the routine (the workout itself), give yourself a clear reward afterward, and then learn to crave that reward.
The simple addition of a craving could be what makes the difference between whether you get up and hit the gym, or hit snooze and bury yourself back under the sheets.
Cravings drive habits. The reason behind why habits are so powerful is because they actually create neurological cravings. And figuring out what sparks your cravings is what can make creating a habit easier for you.
Neal, David T., Wendy Wood, and Jeffrey M. Quinn. “Habits—A repeat performance.” Current Directions in Psychological Science 15.4 (2006): 198–202. ↩
| 0.9315
|
FineWeb
|
Date of Conferral
Doctor of Business Administration (D.B.A.)
Recent changes in the Centers for Medicare and Medicaid Services (CMS) reimbursement programs resulted in $1 billion in payments to hospitals based on Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores. Approximately 50% of the 3,000 hospitals currently receiving Medicare supplements may receive increases in reimbursement payments while 50% will receive decreases in payments. This case study explored how one hospital team in North Texas achieved high HCAHPS scores. The primary provider theory, Deming's model of plan-do-study-act (PDSA), and disruptive innovation theory framed the study. The data collection process included administrator interviews (n = 7), hospital document analysis (n = 13), and observations of staff conducting care (n = 8). Through method triangulation, themes emerged on the constructs required to achieve high HCAHPS scores. Themes included caregiver-patient interactions, hospital services, hospital environment, hospital technology, and hospital governance. Although this was a single case study, other healthcare leaders may explore the findings to determine how the information contained within might transfer to other healthcare organizations. Improved patient outcomes resulting from education, communication, and technology in the continuum of care might enhance the patient experience and patients' overall health and wellness.
| 0.6733
|
FineWeb
|
History of the United States Navy
The history of the United States Navy divides into two major periods: the "Old Navy", a small but respected force of sailing ships that was also notable for innovation in the use of ironclads during the American Civil War, and the "New Navy", the result of a modernization effort that began in the 1880s and made it the largest in the world by the 1920s.
The United States Navy claims 13 October 1775 as the date of its official establishment, when the Second Continental Congress passed a resolution creating the Continental Navy. With the end of the American Revolutionary War, the Continental Navy was disbanded. Threats to American merchant shipping by Barbary pirates from four North African states in the Mediterranean led to the Naval Act of 1794, which created a permanent standing U.S. Navy. The original six frigates were authorized as part of the Act. Over the next 20 years, the Navy fought the French Navy in the Quasi-War (1798–99), the Barbary states in the First and Second Barbary Wars, and the British in the War of 1812. After the War of 1812, the U.S. Navy was at peace until the Mexican–American War in 1846, and served to combat piracy in the Mediterranean and Caribbean seas, as well as fighting the slave trade. In 1845, the Naval Academy was founded. In 1861, the American Civil War began and the U.S. Navy fought the small Confederate Navy with both sailing ships and ironclad ships while forming a blockade that shut down the Confederacy's civilian shipping. After the Civil War, most of its ships were laid up in reserve, and by 1878 the Navy numbered just 6,000 men.
In 1882, the U.S. Navy consisted largely of outdated ship designs. Over the next decade, Congress approved the building of multiple modern armored cruisers and battleships, and by around the start of the 20th century the Navy had moved from twelfth place in 1870 to fifth place in terms of numbers of ships. After winning two major battles during the 1898 Spanish–American War, the Navy continued to build more ships, and by the end of World War I it had more men and women in uniform than the Royal Navy. The Washington Naval Conference recognized the Navy as equal in capital ship size to the Royal Navy, and during the 1920s and 1930s the Navy built several aircraft carriers and battleships. The Navy was drawn into World War II after the Japanese attack on Pearl Harbor on 7 December 1941, and over the next four years fought many historic battles, including the Battle of the Coral Sea, the Battle of Midway, multiple naval battles during the Guadalcanal Campaign, and the largest naval battle in history, the Battle of Leyte Gulf. Much of the Navy's activity concerned the support of landings, not only with the "island-hopping" campaign in the Pacific, but also with the European landings. When the Japanese surrendered, a large flotilla entered Tokyo Bay to witness the formal ceremony conducted on the battleship Missouri, on which officials from the Japanese government signed the Japanese Instrument of Surrender. By the end of the war, the Navy had over 1,600 warships.
After World War II ended, the U.S. Navy entered the Cold War and participated in the Korean War, the Vietnam War, the Persian Gulf War, and the Iraq War. Following the collapse of the Soviet Union, the Soviet Navy fell apart, which made the United States the world's undisputed naval superpower. Nuclear power and ballistic missile technology led to new ship propulsion and weapon systems, which were used in the Nimitz-class aircraft carriers and Ohio-class submarines. By 1978, the number of ships had dwindled to less than 400, many of which were from World War II, which prompted Ronald Reagan to institute a program for a modern, 600-ship Navy. Today, the United States is the world's undisputed naval superpower, with the ability to engage and project power in two simultaneous limited wars along separate fronts. In March 2007, the U.S. Navy reached its smallest fleet size, with 274 ships, since World War I. Former U.S. Navy admirals who head the U.S. Naval Institute have raised concerns about what they see as the ability to respond to 'aggressive moves by Iran and China.'
The Navy was rooted in the American seafaring tradition, which produced a large community of sailors, captains and shipbuilders in the colonial era. During the Revolution, several states operated their own navies. On 12 June 1775, the Rhode Island General Assembly passed a resolution creating a navy for the colony of Rhode Island. The same day, Governor Nicholas Cooke signed orders addressed to Captain Abraham Whipple, commander of the sloop Katy, and commodore of the armed vessels employed by the government.
The first formal movement for the creation of a Continental navy came from Rhode Island, because its merchants' widespread smuggling activities had been severely harassed by British frigates. On 26 August 1775, Rhode Island passed a resolution that there be a single Continental fleet funded by the Continental Congress. The resolution was introduced in the Continental Congress on 3 October 1775, but was tabled. In the meantime, George Washington had begun to acquire ships, starting with the schooner Hannah which was paid for out of Washington's own pocket. Hannah was commissioned and launched on 5 September 1775, from the port of Marblehead, Massachusetts.
The US Navy recognizes 13 October 1775 as the date of its official establishment — the date of the passage of the resolution of the Continental Congress at Philadelphia, Pennsylvania that created the Continental Navy. On this day, Congress authorized the purchase of two vessels to be armed for a cruise against British merchant ships. Congress on 13 December 1775, authorized the building of thirteen frigates within the next three months, five ships of 32 guns, five with 28 guns and three with 24 guns.
On Lake Champlain, Benedict Arnold ordered the construction of 12 Navy vessels to slow down the British fleet that was invading New York from Canada. The British fleet did destroy Arnold's fleet, but the U.S. fleet managed to slow down the British after a two-day battle, known as the Battle of Valcour Island, and managed to slow the progression of the British Army. By mid-1776, a number of ships, ranging up to and including the thirteen frigates approved by Congress, were under construction, but their effectiveness was limited; they were completely outmatched by the mighty Royal Navy, and nearly all were captured or sunk by 1781.
Privateers had some success, with 1,697 letters of marque being issued by Congress. Individual states, American agents in Europe and in the Caribbean also issued commissions; taking duplications into account more than 2,000 commissions were issued by the various authorities. Over 2,200 British ships were taken by Yankee privateers, amounting to almost $66 million, a significant sum at the time.
One particularly notable American naval hero of the Revolution was John Paul Jones, who in his famous voyage around the British Isles defeated the British ship Serapis (1779) in the Battle of Flamborough Head. Partway through the battle, with the rigging of the two ships entangled, and several guns of Jones' ship Bonhomme Richard (1765) out of action, the captain of Serapis asked Jones if he had struck his colors, to which Jones has been quoted as replying, "I have not yet begun to fight!"
France officially entered the war on 17 June 1778, and the ships of the French Navy sent to the Western Hemisphere spent most of the year in the West Indies, and only sailed near the Thirteen Colonies during the Caribbean hurricane season from July until November. The first French fleet attempted landings in New York and Rhode Island, but ultimately failed to engage British forces during 1778. In 1779, a fleet commanded by Vice Admiral Charles Henri, comte d'Estaing assisted American forces attempting to recapture Savannah, Georgia.
In 1780, a fleet with 6,000 troops commanded by Lieutenant General Jean-Baptiste, comte de Rochambeau landed at Newport, Rhode Island, and shortly afterwards the fleet was blockaded by the British. In early 1781, Washington and de Rochambeau planned an attack against the British in the Chesapeake Bay area to coordinate with the arrival of a large fleet commanded by Vice Admiral François, comte de Grasse. Successfully deceiving the British that an attack was planned in New York, Washington and de Rochambeau marched to Virginia, and de Grasse began landing forces near Yorktown, Virginia. On 5 September 1781 a major naval action was fought by de Grasse and the British at the Battle of the Virginia Capes, ending with the French fleet in control of the Chesapeake Bay. The U.S. Navy continued to interdict British supply ships until peace was finally declared in late 1783.
The Revolutionary War was ended by the Treaty of Paris in 1783, and by 1785 the Continental Navy was disbanded and the remaining ships were sold. The frigate Alliance, which had fired the last shots of the American Revolutionary War, was also the last ship in the Navy. A faction within Congress wanted to keep the ship, but the new nation did not have the funds to keep her in service. Other than a general lack of money, other factors for the disarmament of the navy were the loose confederation of the states, a change of goals from war to peace, and more domestic and fewer foreign interests.
After the American Revolutionary War the brand-new United States struggled to stay financially afloat. National income was desperately needed and most came from tariffs on imported goods. Because of rampant smuggling, the need was immediate for strong enforcement of tariff laws. On 4 August 1790 the United States Congress, urged on by Secretary of the Treasury Alexander Hamilton, created the Revenue-Marine, the forerunner for the United States Coast Guard, to enforce the tariff and all other maritime laws. Ten cutters were initially ordered. Between 1790 and 1797 when the Navy Department was created, the Revenue-Marine was the only armed maritime service for the United States.
American merchant shipping had been protected by the British Navy, and as a consequence of the Treaty of Paris and the disarmament of the Continental Navy, the United States no longer had any protection for its ships from pirates. The fledgling nation did not have the funds to pay annual tribute to the Barbary states, so their ships were vulnerable for capture after 1785. By 1789, the new Constitution of the United States authorized Congress to create a navy, but during George Washington's first term (1787–1793) little was done to rearm the navy. In 1793, the French Revolutionary Wars between Great Britain and France began, and a truce negotiated between Portugal and Algiers ended Portugal's blockade of the Strait of Gibraltar which had kept the Barbary pirates in the Mediterranean. Soon after, the pirates sailed into the Atlantic, and captured 11 American merchant ships and more than a hundred seamen.
In reaction to the seizure of the American vessels, Congress debated and approved the Naval Act of 1794, which authorized the building of six frigates, four of 44 guns and two of 36 guns. Supporters were mostly from the northern states and the coastal regions, who argued the Navy would result in savings in insurance and ransom payments, while opponents from southern states and inland regions thought a navy was not worth the expense and would drive the United States into more costly wars.
After the passage of the Naval Act of 1794, work began on the construction of the six frigates: USS United States, President, Constellation, Chesapeake, Congress, and Constitution. Constitution, launched in 1797 and the most famous of the six, was nicknamed "Old Ironsides" (like the earlier HMS Britannia) and, thanks to the efforts of Oliver Wendell Holmes, Sr., is still in existence today, anchored in Boston harbor. Soon after the bill was passed, Congress authorized $800,000 to obtain a treaty with the Algerians and ransom the captives, triggering an amendment of the Act which would halt the construction of ships if peace was declared. After considerable debate, three of the six frigates were authorized to be completed: United States, Constitution and Constellation. However, the first naval vessel to sail was USS Ganges, on 24 May 1798.
At the same time, tensions between the U.S. and France developed into the Quasi-War, which originated from the Treaty of Alliance (1778) that had brought the French into the Revolutionary War. The United States preferred to take a position of neutrality in the conflicts between France and Britain, but this put the nation at odds with both Britain and France. After the Jay Treaty was authorized with Britain in 1794, France began to side against the United States and by 1797 they had seized over 300 American vessels. The newly inaugurated President John Adams took steps to deal with the crisis, working with Congress to finish the three almost-completed frigates, approving funds to build the other three, and attempting to negotiate an agreement similar to the Jay Treaty with France. The XYZ Affair originated with a report distributed by Adams in which alleged French agents, identified by the letters X, Y, and Z, informed the delegation that a bribe must be paid before the diplomats could meet with the foreign minister; the resulting scandal increased popular support in the country for a war with France. Concerns about the War Department's ability to manage a navy led to the creation of the Department of the Navy, which was established on 30 April 1798.
The war with France was fought almost entirely at sea, mostly between privateers and merchant ships. The first victory for the United States Navy was on 7 July 1798 when USS Delaware captured the French privateer Le Croyable, and the first victory over an enemy warship was on 9 February 1799 when the frigate Constellation captured the French frigate L'Insurgente. By the end of 1800, peace with France had been declared, and in 1801, to prevent a second disarmament of the Navy, the outgoing Federalist administration rushed through Congress an act authorizing a peacetime navy for the first time, which limited the navy to six active frigates and seven in ordinary, as well as 45 officers and 150 midshipmen. The remainder of the ships in service were sold and the dismissed officers were given four months pay.
The problems with the Barbary states had never gone away, and on 10 May 1801 the Tripolitans declared war on the United States by chopping down the flag in front of the American Embassy, which began the First Barbary War. USS Philadelphia was captured by the Tripolitans, but then set on fire in an American raid led by Stephen Decatur. The Marines invaded the "shores of Tripoli" in 1805, capturing the city of Derna, the first time the U.S. flag ever flew over a foreign conquest. This act was enough to induce the Barbary rulers to sign peace treaties. Subsequently, the Navy was greatly reduced for reasons of economy, and instead of regular ships, many gunboats were built, intended for coastal use only. This policy proved completely ineffective within a decade.
President Thomas Jefferson and his Republican party opposed a strong navy, arguing that small gunboats in the major harbors were all the nation needed to defend itself. They proved useless in wartime.
The Royal Navy continued to illegally press American sailors into its service; an estimated 10,000 sailors were impressed between 1799 and 1812. In 1807, in the Chesapeake-Leopard Affair, HMS Leopard demanded that USS Chesapeake submit to an inspection, ostensibly looking for British citizens but in reality looking for any suitable sailors to press into the Royal Navy. Leopard severely damaged Chesapeake when she refused. The most violent of many such encounters, the affair further fueled tensions, and in June 1812 the U.S. declared war on Britain.
War of 1812 (1812–1815)
Much of the war was expected to be fought at sea; and within an hour of the announcement of war, the diminutive American navy set forth to do battle with an opponent outnumbering it 50-to-1. After two months, USS Constitution sank HMS Guerriere; Guerriere's crew were most dismayed to see their cannonballs bouncing off the Constitution's unusually strong live oak hull, giving her the enduring nickname of "Old Ironsides". On 29 December 1812 Constitution defeated HMS Java off the coast of Brazil and Java was burned after the Americans determined she could not be salvaged. On 25 October 1812, USS United States captured HMS Macedonian; after the battle Macedonian was captured and entered into American service. In 1813, USS Essex commenced a very fruitful raiding venture into the South Pacific, preying upon the British merchant and whaling industry. The Essex was already known for her capture of HMS Alert and a British transport the previous year, and gained further success capturing 15 British merchantmen/whalers. The British finally took action, dispatching HMS Cherub and HMS Phoebe to stop the Essex. After violating Chile's neutrality, the British captured the Essex in the Battle of Valparaíso.
The capture of the three British frigates led the British to deploy more vessels on the American seaboard to tighten the blockade. On 1 June 1813, off Boston Harbor, the frigate USS Chesapeake, commanded by Captain James Lawrence, was captured by the British frigate HMS Shannon under Captain Sir Philip Broke. Lawrence was mortally wounded and famously cried out, "Don't give up the ship!". Despite their earlier successes, by 1814 many of the Navy's best ships were blockaded in port and unable to prevent British incursions on land via the sea.
During the summer of 1814, the British fought the Chesapeake Campaign, which climaxed with amphibious assaults against Washington and Baltimore. The capital fell to the British almost without a fight, and several ships were burned at the Washington Navy Yard, including the 44-gun frigate USS Columbia. At Baltimore, the bombardment of Fort McHenry inspired Francis Scott Key to write "The Star-Spangled Banner", and the hulks blocking the channel prevented the fleet from entering the harbor; the army re-embarked on the ships, ending the battle.
The American naval victories at the Battle of Lake Champlain and the Battle of Lake Erie halted the final British offensive in the north and helped to deny the British exclusive rights to the Great Lakes in the Treaty of Ghent. Shortly before the treaty was signed, USS President was captured by a squadron of four British frigates. Three days after the treaty was signed, the Constitution captured HMS Levant and Cyane. The final naval action of the war occurred almost five months after the treaty, on 30 June 1815, when the sloop USS Peacock captured the East India Company brig Nautilus, the last enemy ship captured by the U.S. Navy until World War II.
Continental Expansion (1815–1861)
After the war, the Navy's accomplishments paid off in the form of better funding, and it embarked on the construction of many new ships. However, the expense of the larger ships was prohibitive, and many of them stayed in shipyards half-completed, in readiness for another war, until the Age of Sail had almost completely passed. The main force of the Navy continued to be large sailing frigates with a number of smaller sloops during the three decades of peace. By the 1840s, the Navy began to adopt steam power and shell guns, but they lagged behind the French and British in adopting the new technologies.
Enlisted sailors during this time included many foreign-born men, and native-born Americans were usually social outcasts who had few other employment options or they were trying to escape punishment for crimes. In 1835, almost 3,000 men sailed with merchant ships out of Boston harbor, but only 90 men were recruited by the Navy. It was unlawful for black men to serve in the Navy, but the shortage of men was so acute this law was frequently ignored.
Discipline followed the customs of the Royal Navy but punishment was much milder than typical in European navies. Sodomy was rarely prosecuted. The Army abolished flogging as a punishment in 1812, but the Navy kept it until 1850.
During the War of 1812, the Barbary states took advantage of the weakness of the United States Navy to again capture American merchant ships and sailors. After the Treaty of Ghent was signed, the United States looked at ending the piracy in the Mediterranean which had plagued American merchants for two decades. On 3 March 1815, the U.S. Congress authorized deployment of naval power against Algiers, beginning the Second Barbary War. Two powerful squadrons under the command of Commodores Stephen Decatur, Jr. and William Bainbridge, including the 74-gun ships of the line Washington, Independence, and Franklin, were dispatched to the Mediterranean. Shortly after departing Gibraltar en route to Algiers, Decatur's squadron encountered the Algerian flagship Meshuda, and, in the Action of 17 June 1815, captured it. Not long afterward, the American squadron likewise captured the Algerian brig Estedio in the Battle off Cape Palos. By June, the squadrons had reached Algiers and peace was negotiated with the Dey, including a return of captured vessels and men, a guarantee of no further tributes and a right to trade in the region.
Piracy in the Caribbean Sea was also a major problem, and between 1815 and 1822 an estimated 3,000 ships were captured by pirates. In 1819, Congress authorized President James Monroe to deal with this threat, and since many of the pirates were privateers of the newly independent states of Latin America, he decided to embark on a strategy of diplomacy backed up by the guns of the Navy. An agreement with Venezuela was reached in 1819, but ships were still regularly captured until a military campaign by the West India Squadron, under the command of David Porter, used a combination of large frigates escorting merchant ships backed by many small craft searching small coves and islands, and capturing pirate vessels. During this campaign USS Sea Gull became the first steam-powered ship to see combat action. Although isolated instances of piracy continued into the 1830s, by 1826 the frequent attacks had ended and the region was declared free for commerce.
Another international problem was the slave trade, and the African squadron was formed in 1820 to deal with this threat. Politically, the suppression of the slave trade was unpopular, and the squadron was withdrawn in 1823 ostensibly to deal with piracy in the Caribbean, and did not return to the African coast until the passage of the Webster–Ashburton treaty with Britain in 1842. After the treaty was passed, the United States used fewer ships than the treaty required, ordered the ships based far from the coast of Africa, and used ships that were too large to operate close to shore. Between 1845 and 1850, the United States Navy captured only 10 slave vessels, while the British captured 423 vessels carrying 27,000 captives.
Congress formally authorized the establishment of the United States Military Academy in 1802, but it took almost 50 years to approve a similar school for naval officers. During the long period of peace between 1815 and 1846, midshipmen had few opportunities for promotion, and their warrants were often obtained via patronage. The poor quality of officer training in the U.S. Navy became visible after the Somers Affair, an alleged mutiny aboard the training ship USS Somers in 1842, and the subsequent execution of midshipman Philip Spencer. George Bancroft, appointed Secretary of the Navy in 1845, decided to work outside of congressional approval and create a new academy for officers. He formed a council led by Commodore Perry to create a new system for training officers, and turned the old Fort Severn at Annapolis into a new institution in 1845 which would be designated as the United States Naval Academy by Congress in 1851.
Naval forces participated in the effort to forcibly move the Seminole Indians from Florida to a reservation west of the Mississippi. After a massacre of army soldiers near Tampa on 28 December 1835, marines and sailors were added to the forces which fought the Second Seminole War from 1836 until 1842. A "mosquito fleet" was formed in the Everglades out of various small craft to transport a mixture of army and navy personnel to pursue the Seminoles into the swamps. About 1,500 soldiers were killed during the conflict, some Seminoles agreed to move but a small group of Seminoles remained in control of the Everglades and the area around Lake Okeechobee.
The Navy played a role in two major operations of the Mexican–American War (1845–1848); during the Battle of Veracruz, it transported the invasion force that captured Veracruz by landing 12,000 troops and their equipment in one day, leading eventually to the capture of Mexico City, and the end of the war. Its Pacific Squadron's ships facilitated the capture of California.
In 1853, Commodore Matthew Perry led the Perry Expedition, a squadron of four ships which sailed to Japan to establish normal diplomatic relations. Perry's two technologically advanced steam-powered ships and calm, firm diplomacy convinced Japan to end three centuries of isolation and sign the Treaty of Kanagawa with the U.S. in 1854. Nominally a treaty of friendship, the agreement soon paved the way for the opening of Japan and normal trade relations with the United States and Europe.
American Civil War (1861–1865)
Between the beginning of the war and the end of 1861, 373 commissioned officers, warrant officers, and midshipmen resigned or were dismissed from the United States Navy and went on to serve the Confederacy. On 20 April 1861, the Union burned its ships that were at the Norfolk Navy Yard to prevent their capture by the Confederates, but not all of the ships were completely destroyed. The screw frigate USS Merrimack was so hastily scuttled that her hull and steam engine were basically intact, which gave the South's Stephen Mallory the idea of raising her and then armoring the upper sides with iron plate. The resulting ship was named CSS Virginia. Meanwhile, John Ericsson had similar ideas, and received funding to build USS Monitor.
Winfield Scott, the commanding general of the U.S. Army at the beginning of the war, devised the Anaconda Plan to win the war with as little bloodshed as possible. His idea was that a Union blockade of the main ports would weaken the Confederate economy; then the capture of the Mississippi River would split the South. Lincoln adopted the plan in terms of a blockade to squeeze to death the Confederate economy, but overruled Scott's warnings that his new army was not ready for an offensive operation because public opinion demanded an immediate attack.
On 8 March 1862, the Confederate Navy initiated the first combat between ironclads when the Virginia successfully attacked the blockade. The next day, the Monitor engaged the Virginia in the Battle of Hampton Roads. Their battle ended in a draw, and the Confederacy later lost the Virginia when the ship was scuttled to prevent capture. The Monitor was the prototype for the monitor warship and many more were built by the Union Navy. While the Confederacy built more ironclad ships during the war, they lacked the ability to build or purchase ships that could effectively counter the monitors.
Along with ironclad ships, the new technologies of naval mines, which were known as torpedoes after the torpedo eel, and submarine warfare were introduced during the war by the Confederacy. During the Battle of Mobile Bay, mines were used to protect the harbor and sank the Union monitor USS Tecumseh. After Tecumseh sank, Admiral David G. Farragut famously said, "Damn the torpedoes, full speed ahead!". The forerunner of the modern submarine, CSS David, attacked USS New Ironsides using a spar torpedo. The Union ship was barely damaged and the resulting geyser of water put out the fires in the submarine's boiler, rendering the submarine immobile. Another submarine, CSS H.L. Hunley, was designed to dive and surface but ultimately did not work well and sank on five occasions during trials. In action against USS Housatonic the submarine successfully sank its target but was lost by the same explosion.
The Confederate States of America operated a number of commerce raiders and blockade runners, CSS Alabama being the most famous, and British investors built small, fast blockade runners that traded arms and luxuries brought in from Bermuda, Cuba, and The Bahamas in return for high-priced cotton and tobacco. When the Union Navy seized a blockade runner, the ship and cargo were sold and the proceeds given to the Navy sailors; the captured crewmen were mostly British and they were simply released.
The blockade of the South caused the Southern economy to collapse during the war. Shortages of food and supplies were caused by the blockade, the failure of Southern railroads, the loss of control of the main rivers, and foraging by Union and Confederate armies. The standard of living fell even as large-scale printing of paper money caused inflation and distrust of the currency. By 1864 the internal food distribution had broken down, leaving cities without enough food and causing food riots across the Confederacy. The Union victory at the Second Battle of Fort Fisher in January 1865 closed the last useful Southern port, virtually ending blockade running and hastening the end of the war.
After the war, the Navy went into a period of decline. In 1864, the Navy had 51,500 men in uniform, and almost 700 ships and about 60 monitor-type coastal ironclads which made the U.S. Navy the second largest in the world after the Royal Navy. By 1880 the Navy only had 48 ships in commission, 6,000 men, and the ships and shore facilities were decrepit but Congress saw no need to spend money to improve them. The Navy was unprepared to fight a major maritime war before 1897.
In 1871, an expedition of five warships commanded by Rear Admiral John Rodgers was sent to Korea to obtain an apology for the murders of several shipwrecked American sailors and secure a treaty to protect shipwrecked foreigners in the future. After a small skirmish, Rodgers launched an amphibious assault of approximately 650 men on the forts protecting Seoul. Despite the capture of the forts, the Koreans refused to negotiate, and the expedition was forced to leave before the start of typhoon season. Nine sailors and six marines received Medals of Honor for their acts of heroism during the Korean campaign; the first for actions in a foreign conflict.
By the 1870s most of the ironclads from the Civil War were laid up in reserve, leaving the United States virtually without an ironclad fleet. When the Virginius Affair first broke out in 1873, a Spanish ironclad happened to be anchored in New York Harbor, leading to the uncomfortable realization on the part of the U.S. Navy that it had no ship capable of defeating such a vessel. The Navy hastily issued contracts for the construction of five new ironclads, and accelerated its existing repair program for several more. USS Puritan and the four Amphitrite-class monitors were subsequently built as a result of the Virginius war scare. All five vessels would later take part in the Spanish–American War of 1898.
By the time the Garfield administration assumed office in 1881, the Navy's condition had deteriorated still further. A review conducted on behalf of the new Secretary of the Navy, William H. Hunt, found that of 140 vessels on the Navy's active list, only 52 were in an operational state, of which a mere 17 were iron-hulled ships, including 14 aging Civil War era ironclads. Hunt recognized the necessity of modernizing the Navy, and set up an informal advisory board to make recommendations. Also to be expected, morale was considerably down; officers and sailors in foreign ports were all too aware that their old wooden ships would not survive long in the event of war. The limitations of the monitor type effectively prevented the United States from projecting power overseas, and until the 1890s the United States would have come off badly in a conflict with even Spain or the Latin American powers.
In 1882, on the recommendation of an advisory panel, the Navy Secretary William H. Hunt requested funds from Congress to construct modern ships. The request was rejected initially, but in 1883 Congress authorized the construction of three protected cruisers, USS Chicago, USS Boston, and USS Atlanta, and the dispatch vessel USS Dolphin, together known as the ABCD ships. In 1885, two more protected cruisers, USS Charleston and USS Newark, which was the last American cruiser to be fitted with a sail rig, were authorized. Congress also authorized the construction of the first battleships in the Navy, USS Texas and USS Maine. The ABCD ships proved to be excellent vessels, and the three cruisers were organized into the Squadron of Evolution, popularly known as the White Squadron because of the color of the hulls, which was used to train a generation of officers and men.
Alfred Thayer Mahan's book The Influence of Sea Power upon History, 1660–1783, published in 1890, was very influential in justifying the naval program to the civilian government and to the general public. With the closing of the frontier, some Americans began to look outwards, to the Caribbean, to Hawaii and the Pacific, and with the doctrine of Manifest Destiny as philosophical justification, many saw the Navy as an essential part of realizing that doctrine beyond the limits of the American continent.
In 1890, Mahan's doctrine influenced Navy Secretary Benjamin F. Tracy to propose the United States start building no less than 200 ships of all types, but Congress rejected the proposal. Instead, the Navy Act of 1890 authorized building three battleships, USS Indiana, USS Massachusetts, and USS Oregon, followed by USS Iowa. By around the start of the 20th century, two Kearsarge-class battleships and three Illinois-class battleships were completed or under construction, which brought the U.S. Navy from twelfth place in 1870 to fifth place among the world's navies.
Battle tactics, especially long-range gunnery, became a central concern.
Spanish–American War (1898)
The United States was interested in purchasing colonies from Spain, specifically Cuba, but Spain refused. Newspapers wrote stories, many of which were fabricated, about atrocities committed in Spanish colonies, which raised tensions between the two countries. A riot gave the United States an excuse to send USS Maine to Cuba, and the subsequent explosion of Maine in Havana Harbor increased popular support for war with Spain. The cause of the explosion was investigated by a board of inquiry, which in March 1898 came to the conclusion that the explosion was caused by a sea mine, and there was pressure from the public to blame Spain for sinking the ship. However, later investigations pointed to an internal explosion in one of the magazines caused by heat from a fire in the adjacent coal bunker.
Assistant Navy secretary Theodore Roosevelt quietly positioned the Navy for attack before the Spanish–American War was declared in April 1898. The Asiatic Squadron, under the command of George Dewey, immediately left Hong Kong for the Philippines, attacking and decisively defeating the Spanish fleet in the Battle of Manila Bay. A few weeks later, the North Atlantic Squadron destroyed the majority of heavy Spanish naval units in the Caribbean in the Battle of Santiago de Cuba.
The Navy's experience in this war was both encouraging, in that it had won, and cautionary, in that the enemy had one of the weakest of the world's modern fleets, and that the Manila Bay attack was extremely risky; if the American ships had been severely damaged or had run out of supplies, they were 7,000 miles from the nearest American harbor. This realization would have a profound effect on Navy strategy, and, indeed, American foreign policy, in the next several decades.
Fortunately for the New Navy, its most ardent political supporter, Theodore Roosevelt, became President in 1901. Under his administration, the Navy went from the sixth largest in the world to second only to the Royal Navy. Theodore Roosevelt's administration became involved in the politics of the Caribbean and Central America, with interventions in 1901, 1902, 1903, and 1906. At a speech in 1901, Roosevelt said, "Speak softly and carry a big stick, you will go far", which was a cornerstone of diplomacy during his presidency.
Roosevelt believed that a U.S.-controlled canal across Central America was a vital strategic interest to the U.S. Navy, because it would significantly shorten travel times for ships between the two coasts. Roosevelt was able to reverse a decision in favor of a Nicaraguan Canal and instead moved to purchase the failed French effort across the Isthmus of Panama. The isthmus was controlled by Colombia, and in early 1903, the Hay–Herrán Treaty was signed by both nations to give control of the canal to the United States. After the Colombian Senate failed to ratify the treaty, Roosevelt implied to Panamanian rebels that if they revolted, the US Navy would assist their cause for independence. Panama proceeded to proclaim its independence on 3 November 1903, and USS Nashville impeded any interference from Colombia. The victorious Panamanians allowed the United States control of the Panama Canal Zone on 23 February 1904, for US$10 million. The naval base at Guantanamo Bay, Cuba was built in 1905 to protect the canal.
The latest technological innovation of the time, submarines, were developed in the state of New Jersey by an Irish-American inventor, John Philip Holland. His submarine, USS Holland was officially commissioned into U.S. Navy service in the fall of 1900. The Russo-Japanese War of 1905 and the launching of HMS Dreadnought in the following year lent impetus to the construction program. At the end of 1907 Roosevelt had sixteen new battleships to make up his "Great White Fleet", which he sent on a cruise around the world. While nominally peaceful, and a valuable training exercise for the rapidly expanding Navy, it was also useful politically as a demonstration of United States power and capabilities; at every port, the politicians and naval officers of both potential allies and enemies were welcomed on board and given tours. The cruise had the desired effect, and American power was subsequently taken more seriously.
The voyage taught the Navy more fueling stations were needed around the world, and the strategic potential of the Panama Canal, which was completed in 1914. The Great White Fleet required almost 50 coaling ships, and during the cruise most of the fleet's coal was purchased from the British, who could deny access to fuel during a military crisis as they did with Russia during the Russo-Japanese War.
World War I (1914–1918)
When United States agents discovered that the German merchant ship Ypiranga was carrying illegal arms to Mexico, President Wilson ordered the Navy to stop the ship from docking at the port of Veracruz. On 21 April 1914, a naval brigade of marines and sailors occupied Veracruz. A total of 55 Medals of Honor were awarded for acts of heroism at Veracruz, the largest number ever granted for a single action.
Preparing for war, 1914–1917
Despite U.S. declarations of neutrality and German accountability for its unrestricted submarine warfare, in 1915 the British passenger liner Lusitania was sunk, leading to calls for war. President Wilson forced the Germans to suspend unrestricted submarine warfare, and after long debate Congress passed the Naval Act of 1916, which authorized a $500 million construction program over three years for 10 battleships, 6 battlecruisers, 10 scout cruisers, 50 destroyers and 67 submarines. The idea was a balanced fleet, but in the event destroyers were much more important, because they had to handle U-boats and convoys. By the end of the war 273 destroyers had been ordered; most were finished after World War I ended, but many served in World War II. There were few war plans beyond the defense of the main American harbors.
Navy Secretary Josephus Daniels, a pacifistic journalist, had built up the educational resources of the Navy and made its Naval War College an essential experience for would-be admirals. However, he alienated the officer corps with his moralistic reforms, (no wine in the officers' mess, no hazing at Annapolis, more chaplains and YMCAs). Ignoring the nation's strategic needs, and disdaining the advice of its experts, Daniels suspended meetings of the Joint Army and Navy Board for two years because it was giving unwelcome advice. He chopped in half the General Board's recommendations for new ships, reduced the authority of officers in the Navy yards where ships were built and repaired, and ignored the administrative chaos in his department. Bradley Fiske, one of the most innovative admirals in American naval history, in 1914 was Daniels' top aide; he recommended a reorganization that would prepare for war, but Daniels refused. Instead he replaced Fiske in 1915 and brought in for the new post of Chief of Naval Operations an unknown captain, William S. Benson. Chosen for his compliance, Benson proved a wily bureaucrat who was more interested in preparing for an eventual showdown with Britain than an immediate one with Germany.
In 1915 Daniels set up the Naval Consulting Board headed by Thomas Edison to obtain the advice and expertise of leading scientists, engineers, and industrialists. It popularized technology, naval expansion, and military preparedness, and was well covered in the media. Daniels and Benson rejected proposals to send observers to Europe, leaving the Navy in the dark about the success of the German submarine campaign. Admiral William Sims charged after the war that in April 1917, only ten percent of the Navy's warships were fully manned; the rest lacked 43% of their seamen. Only a third of the ships were fully ready. Light antisubmarine ships were few in number, as if no one had noticed the U-boat factor that had been the focus of foreign policy for two years. The Navy's only warfighting plan, the "Black Plan", assumed the Royal Navy did not exist and that German battleships were moving freely about the Atlantic and the Caribbean and threatening the Panama Canal. His most recent biographer concludes that "it is true that Daniels had not prepared the navy for the war it would have to fight."
Fighting a world war, 1917–18
America entered the war in April 1917 and the Navy's role was mostly limited to convoy escort and troop transport and the laying of a minefield across the North Sea. The United States Navy sent a battleship group to Scapa Flow to join with the British Grand Fleet, destroyers to Queenstown, Ireland and submarines to help guard convoys. Several regiments of Marines were also dispatched to France. The first victory for the Navy in the war occurred on 17 November 1917 when USS Fanning and USS Nicholson sank the German U-boat U-58. During World War I, the Navy was the first branch of the United States armed forces to allow enlistment by women in a non-nursing capacity, as Yeoman (F). The first woman to enlist in the U.S. Navy was Loretta Perfectus Walsh on 17 March 1917.
The Navy's vast wartime expansion was overseen by civilian officials, especially Assistant Secretary Franklin D. Roosevelt. In peacetime, the Navy confined all munitions that lacked civilian uses, including warships, naval guns, and shells, to Navy yards. The Navy yards expanded enormously, and subcontracted the shells and explosives to chemical companies like DuPont and Hercules. Items available on the civilian market, such as food and uniforms, were always purchased from civilian contractors. Armor plate and airplanes were purchased on the market.
Inter-war entrenchment and expansion (1918–1941)
At the end of World War I, the United States Navy had almost 500,000 officers and enlisted men and women, and in terms of personnel it was the largest navy in the world. Younger officers were enthusiastic about the potential of land-based naval aviation as well as the potential roles of aircraft carriers. Chief of Naval Operations Benson was not among them. He tried to abolish aviation in 1919 because he could not "conceive of any use the fleet will ever have for aviation." However, Roosevelt listened to the visionaries and reversed Benson's decision.
After a short period of demobilization, the major naval nations of the globe began programs to increase the size and number of their capital ships. Wilson's plan for a world-leading set of capital ships led to a Japanese counter-program, and a plan by the British to build sufficient ships to maintain a navy superior to either. American isolationist feeling and the economic concerns of the others led to the Washington Naval Conference of 1921. The outcome of the conference included the Washington Naval Treaty (also known as the Five-Power Treaty) and limitations on the use of submarines. The treaty prescribed a ratio of 5:5:3:1:1 for capital ships between the treaty nations. It recognized the U.S. Navy as being equal to the Royal Navy with 525,000 tons of capital ships and 135,000 tons of aircraft carriers, and the Japanese as the third power. Many older ships were scrapped by the five nations to meet the treaty limitations, and new construction of capital ships was limited.
One consequence was to encourage the development of light cruisers and aircraft carriers. The United States' first carrier, a converted collier renamed USS Langley, was commissioned in 1922 and was soon joined by USS Lexington and USS Saratoga, which had been designed as battlecruisers until the treaty forbade it. Organizationally, the Bureau of Aeronautics was formed in 1921; naval aviators came to be referred to as members of the United States Naval Air Corps.
Army airman Billy Mitchell challenged the Navy by trying to demonstrate that warships could be destroyed by land-based bombers. He destroyed his career in 1925 by publicly attacking senior leaders in the Army and Navy for incompetence for their "almost treasonable administration of the national defense."
The Vinson-Trammell Act of 1934 set up a regular program of ship building and modernization to bring the Navy to the maximum size allowed by treaty. The Navy's preparation was helped along by another Navy assistant secretary turned president, Franklin D. Roosevelt. The naval limitation treaties also applied to bases, but Congress only approved building seaplane bases on Wake Island, Midway Island and Dutch Harbor and rejected any additional funds for bases on Guam and the Philippines. Navy ships were designed with greater endurance and range which allowed them to operate further from bases and between refits.
The Navy had a presence in the Far East with a naval base in the US-owned Philippines and river gunboats in China on the Yangtze River. The gunboat USS Panay was bombed and machine-gunned by Japanese airplanes. Washington quickly accepted Japan's apologies and compensation.
African-Americans were enlisted during World War I, but this was halted in 1919 and they were mustered out of the Navy. Starting in the 1930s a few were recruited to serve as stewards in the officers mess. African-Americans were recruited in larger numbers only after Roosevelt insisted in 1942.
The Naval Act of 1936 authorized the first new battleship since 1921, and USS North Carolina was laid down in October 1937. The Second Vinson Act authorized a 20% increase in the size of the Navy, and in June 1940 the Two-Ocean Navy Act authorized an 11% expansion in the Navy. Chief of Naval Operations Harold Rainsford Stark asked for another 70% increase, amounting to about 200 additional ships, which was authorized by Congress in less than a month. In September 1940, the Destroyers for Bases Agreement gave Britain much-needed destroyers of WWI vintage in exchange for United States use of British bases.
In 1941, the Atlantic Fleet was reactivated. The Navy's first shot in anger came on 9 April, when the destroyer USS Niblack dropped depth charges on a U-boat detected while Niblack was rescuing survivors from a torpedoed Dutch freighter. In October, the destroyers Kearny and Reuben James were torpedoed, and Reuben James was lost.
Submarines were the "silent service," both in their operating characteristics and in the closed-mouth preferences of the submariners. Strategists had, however, been looking into this new type of warship, influenced in large part by Germany's nearly successful U-boat campaign. As early as 1912, Lieutenant Chester Nimitz had argued for long-range submarines to accompany the fleet to scout the enemy's location. The new head of the Submarine Section in 1919 was Captain Thomas Hart, who argued that submarines could win the next war: "There is no quicker or more effective method of defeating Japan than the cutting of her sea communications." However, Hart was astonished to discover how backward American submarines were compared to captured German U-boats, and how unready they were for their mission. The public supported submarines for their coastal protection mission; they would presumably intercept enemy fleets approaching San Francisco or New York. The Navy realized this was a mission that isolationists in Congress would fund, but it did not take the mission seriously. Old-line admirals said the mission of the subs ought to be to serve as the eyes of the battle fleet and as assistants in battle. That was unfeasible, since even on the surface submarines could not move faster than 20 knots, far slower than the 30-knot main warships. The young commanders were organized into a "Submarine Officers' Conference" in 1926. They argued that submarines were best suited for the commerce raiding that had been the forte of the U-boats. They therefore redesigned their new boats along German lines, and added the new requirement that they be capable of sailing alone for 7,500 miles on a 75-day mission. Unrestricted submarine warfare had led to war with Germany in 1917, and was still vigorously condemned both by public opinion and by treaties, including the London Treaty of 1930. Nevertheless, the submariners planned a role in unrestricted warfare against Japanese merchant ships, transports and oil tankers. The Navy kept its plans secret from civilians. It was an admiral, not President Roosevelt, who, within hours of the Pearl Harbor attack, ordered unrestricted warfare against any enemy ship anywhere in the Pacific.
The submariners had won over Navy strategists, but their equipment was not yet capable of handling their secret mission. The challenge of designing appropriate new boats became a high priority by 1934, and was solved in 1936 as the first new long-range, all-welded submarines were launched. Even better were the Salmon class (launched in 1937), and its successors the T-class or Tambor submarines of 1939 and the Gato class of 1940. The new models cost about $5–6 million each. At 300 feet in length and 1,500 tons, they were twice as big as the German U-boats, but still highly maneuverable. In only 35 seconds they could crash dive to 60 feet. The superb Mark 3 Torpedo Data Computer (TDC, an analog computer) took data from periscope or sonar readings on the target's bearing, range and angle on the bow, and continuously set the course and proper gyroscope angle for a salvo of torpedoes until the moment of firing. Six forward tubes and four aft were ready for the 24 Mk-14 "fish" the subs carried. Cruising on the surface at 20 knots (using four diesel engines) or maneuvering underwater at 8-10 knots (using battery-powered electric motors), they could circle around slow-moving merchant ships. New steels and welding techniques strengthened the hull, enabling the subs to dive as deep as 400 feet in order to avoid depth charges. Expecting long cruises, the 65 crewmen enjoyed good living conditions, complete with frozen steaks and air conditioning to handle the hot waters of the Pacific. The new subs could remain at sea for 75 days, and cover 10,000 miles, without resupply. The submariners thought they were ready, but they had two hidden flaws. The penny-pinching atmosphere of the 1930s produced hypercautious commanders and defective torpedoes. Both would have to be replaced in World War II.
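The fire-control problem the Mark 3 TDC solved continuously can be illustrated with a minimal sketch of the torpedo intercept triangle. This is an assumption-laden simplification, not the TDC's actual algorithm: it ignores own-ship motion, torpedo reach and turn, and the parallax and spread corrections the real instrument handled, and the example speeds are merely representative.

```python
import math

def lead_angle_deg(target_speed_kn, torpedo_speed_kn, angle_on_bow_deg):
    """Deflection ("lead") angle between the line of sight and the torpedo course.

    Simplified intercept triangle for a target holding course and speed; by the
    law of sines: sin(lead) = (target speed / torpedo speed) * sin(angle on the bow).
    """
    ratio = (target_speed_kn / torpedo_speed_kn) * math.sin(math.radians(angle_on_bow_deg))
    if abs(ratio) > 1.0:
        raise ValueError("no intercept solution for this geometry")
    return math.degrees(math.asin(ratio))

# Hypothetical setup: a 16-knot transport seen 70 degrees on the bow,
# attacked with a torpedo assumed to run at 46 knots.
print(round(lead_angle_deg(16, 46, 70), 1))  # -> about 19.1 degrees of lead
```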
World War II (1941–1945)
After the disaster at Pearl Harbor, Roosevelt turned to the most aggressive sailor available, Admiral Ernest J. King (1878-1956). Experienced in big guns, aviation and submarines, King had a broad knowledge and a total dedication to victory. He was perhaps the most dominating admiral in American naval history; he was hated but obeyed, for he made all the decisions from his command post in Washington and avoided telling anyone. The civilian Secretary of the Navy was a cipher whom King kept in the dark; that only changed when the Secretary died in 1944 and Roosevelt brought in his tough-minded aide James Forrestal. Despite the decision of the Joint Chiefs of Staff under Admiral William D. Leahy to concentrate first against Germany, King made the defeat of Japan his highest priority. For example, King insisted on fighting for Guadalcanal despite strong Army objections. His main strike force was built around carriers based at Pearl Harbor under the command of Chester Nimitz. Nimitz had one main battle fleet, with the same ships and sailors but two command systems that rotated every few months between Admiral Bull Halsey and Admiral Raymond A. Spruance. The Navy had a major advantage: it had broken the Japanese code. It deduced that Hawaii was the target in June 1942, and that Yamamoto's fleet would strike at Midway Island. King had only four carriers in operation; he sent them all to Midway, where in a miraculous few minutes they sank the Japanese carriers. This gave the Americans an advantage in firepower that grew rapidly as new American warships came on line much faster than Japan could build them. King paid special attention to submarines to use against the overextended Japanese logistics system. They were built for long-range missions in tropical waters, and set out to sink the freighters, troop transports and oil tankers that held the Japanese domains together. The Southwest Pacific theatre, based in Australia, was under the control of Army General Douglas MacArthur; King assigned him a fleet of his own without any big carriers.
On 7 December 1941, Japan's carriers launched the Attack on Pearl Harbor, sinking or disabling the entire battleship fleet. The stupendous defeat forced Admiral King to develop a new strategy based on carriers. Although the sunken battleships were raised, and many new ones were built, battleships played a secondary role in the war, limited chiefly to bombardment of islands scheduled for amphibious landings. The "Big Gun" club that had dominated the Navy since the Civil War lost its clout.
The U.S. was helpless in the next six months as the Japanese swept through the Western Pacific and into the Indian Ocean, rolling up the Philippines as well as the main British base at Singapore. After reeling from these defeats the Navy stabilized its lines in summer 1942.
At the start of the war, the United States and Japan were well matched in aircraft carriers, in terms of numbers and quality. Both sides had nine, but the Mitsubishi A6M Zero carrier fighter plane was superior in terms of range and maneuverability to its American counterpart, the F4F Wildcat. By reverse engineering a captured Zero, American engineers identified its weaknesses, such as inadequate protection for the pilot and the fuel tanks, and built the Hellcat as a superior weapon system. In late 1943 the Grumman F6F Hellcats entered combat. Powered by the same 2,000 horsepower Pratt and Whitney 18-cylinder radial engine as used by the F4U Corsair already in service with the Marine Corps and the UK's allied Fleet Air Arm, the F6Fs were faster (at 400 mph) than the Zeros, quicker to climb (at 3,000 feet per minute), more nimble at high altitudes, better at diving, had more armor, more firepower (6 machine guns fired 120 bullets per second) than the Zero's two machine guns and pair of 20 mm autocannon, carried more ammunition, and used a gunsight designed for deflection shooting at an angle. Although the Hellcat was heavier and had a shorter range than the Zero, on the whole it proved a far superior weapon. Japan's carrier and pilot losses at Midway crippled its offensive capability, but America's overwhelming offensive capability came from shipyards that increasingly outproduced Japan's, from the refineries that produced high-octane gasoline, and from the training fields that produced much better-trained pilots. In 1942 Japan commissioned 6 new carriers but lost 6; in 1943 it commissioned 3 and lost 1. The turning point came in 1944 when it added 8 and lost 13. At war's end Japan had 5 carriers tied up in port; all had been damaged, all lacked fuel, and all lacked warplanes. Meanwhile, the US launched 13 small carriers in 1942 and one large one; in 1943 it added 15 large and 50 escort carriers, and more came in 1944 and 1945. The new American carriers were much better designed, with far more antiaircraft guns, and powerful radar.
Both sides were overextended in the exhausting sea, air and land battles for Guadalcanal. The Japanese were better at night combat (because the American destroyers had trained only for attacks on battleships). However, the Japanese could not feed their soldiers, so the Americans eventually won because of superior logistics. The Navy built up its forces in 1942-43 and developed a strategy of "island-hopping": skipping over most of the heavily defended Japanese islands and instead seizing selected islands further on for forward air bases.
In the Atlantic, the Allies waged a long battle with German submarines, termed the Battle of the Atlantic. Navy aircraft flew from bases in Greenland and Iceland to hunt submarines, and hundreds of escort carriers and destroyer escorts were built specifically to protect merchant convoys. In the Pacific, in an ironic twist, the roles were reversed: in a mirror image of the Atlantic, U.S. submarines hunted Japanese merchant ships. At the end of the war the U.S. had 260 submarines in commission. It had lost 52 submarines during the war, 36 in actions in the Pacific. Submarines effectively destroyed the Japanese merchant fleet by January 1945 and choked off Japan's oil supply.
In the summer of 1943, the U.S. began the Gilbert and Marshall Islands campaign to retake those island groups. After this success, the Americans went on to the Mariana and Palau Islands in summer 1944. Following their defeat at the Battle of Saipan, the Imperial Japanese Navy's Combined Fleet, with 5 aircraft carriers, sortied to attack the Navy's Fifth Fleet during the Battle of the Philippine Sea, which was the largest aircraft carrier battle in history. The battle was so one-sided that it became known as the "Marianas turkey shoot"; the U.S. lost 130 aircraft and no ships while the Japanese lost 411 planes and 3 carriers. Following victory in the Marianas, the U.S. began the reconquest of the Philippines at Leyte in October 1944. The Japanese fleet sortied to attack the invasion fleet, resulting in the four-day Battle of Leyte Gulf, one of the largest naval battles in history. The first kamikaze missions were flown during the battle, sinking USS St. Lo and damaging several other U.S. ships; these attacks were the most effective anti-ship weapon of the war.
The Battle of Okinawa became the last major battle between U.S. and Japanese ground units. Okinawa was to become a staging area for the eventual invasion of Japan since it was just 350 miles (560 km) south of the Japanese mainland. Marines and soldiers landed unopposed on 1 April 1945, to begin an 82-day campaign which became the largest land-sea-air battle in history and was noted for the ferocity of the fighting and the high civilian casualties with over 150,000 Okinawans losing their lives. Japanese kamikaze pilots inflicted the largest loss of ships in U.S. naval history with the sinking of 36 and the damaging of another 243. Total U.S. casualties were over 12,500 dead and 38,000 wounded, while the Japanese lost over 110,000 men, making Okinawa one of the bloodiest battles in history.
The fierce fighting on Okinawa is said to have played a part in President Truman’s decision to use the atomic bomb and to forsake an invasion of Japan. When the Japanese surrendered, a flotilla of 374 ships entered Tokyo Bay to witness the ceremony conducted on the battleship USS Missouri. By the end of the war the US Navy had over 1200 warships.
Cold War (1945–1991)
The immediate postwar fate of the Navy was the scrapping and mothballing of ships on a large scale; by 1948 only 267 ships were active in the Navy.
The Navy gradually developed a reputation for having the most highly developed technology of all the U.S. services. The 1950s saw the development of nuclear power for ships, under the leadership of Admiral Hyman G. Rickover, the development of missiles and jets for Navy use and the construction of supercarriers. The USS Enterprise was the world's first nuclear-powered aircraft carrier and was followed by the Nimitz-class supercarriers. Ballistic missile submarines grew ever more deadly and quiet, culminating in the Ohio-class submarines.
Tension with the Soviet Union came to a head in the Korean War, and it became clear that the peacetime Navy would have to be much larger than ever imagined. Fleets were assigned to geographic areas around the world, and ships were sent to hot spots as a standard part of the response to the periodic crises. However, because the North Korean navy was not large, the Korean War featured few naval battles; the combatant navies served mostly as naval artillery for their in-country armies. A large amphibious landing at Inchon succeeded in driving the North Koreans back across the 38th parallel. The Battle of Chosin Reservoir ended with the evacuation of almost 105,000 UN troops from the port of Hungnam.
The U.S. Navy's 1956 shipbuilding program was significant because it included authorization for the construction of eight submarines, the largest such order since World War II. This FY-56 program included five nuclear-powered submarines – Triton, the guided missile submarine Halibut, the lead ship for the Skipjack class, and the final two Skate-class attack submarines, Sargo and Seadragon. It also included the three diesel-electric Barbel class, the last diesel-electric submarines to be built by the U.S. Navy.
An unlikely combination of Navy ships fought in the Vietnam War; aircraft carriers offshore launched thousands of air strikes, while small gunboats of the "Brown-water navy" patrolled the rivers. Despite the naval activity, new construction was curtailed by Presidents Johnson and Nixon to save money, and many of the carriers on Yankee Station dated from World War II. By 1978 the fleet had dwindled to 217 surface ships and 119 submarines.
Meanwhile, the Soviet fleet had been growing, and outnumbered the U.S. fleet in every type except carriers, and the Navy calculated they probably would be defeated by the Soviet Navy in a major conflict. This concern led the Reagan administration to set a goal for a 600-ship Navy, and by 1988 the fleet was at 588, although it declined again in subsequent years. The Iowa-class battleships Iowa, New Jersey, Missouri, and Wisconsin were reactivated after 40 years in storage, modernized, and made showy appearances off the shores of Lebanon and elsewhere. In 1987 and 1988, the United States Navy conducted various combat operations in the Persian Gulf against Iran, most notably Operation Praying Mantis, the largest surface-air naval battle since World War II.
Post–Cold War (1991–present)
When a crisis confronts the nation, the first question often asked by policymakers is: 'What naval forces are available and how fast can they be on station?'
Following the collapse of the Soviet Union, the Soviet Navy fell apart, without sufficient personnel to man many of its ships or the money to maintain them; indeed, many of them were sold to foreign nations. This left the United States as the world's undisputed naval superpower. U.S. naval forces did undergo a decline in absolute terms; relative to the rest of the world, however, the United States dwarfs other nations' naval power, as evinced by its 11 supercarriers and their supporting battle groups. During the 1990s, United States naval strategy was based on the country's overall military strategy, which emphasized the ability to engage in two simultaneous limited wars along separate fronts.
The ships of the Navy participated in a number of conflicts after the end of the Cold War. After diplomatic efforts failed, the Navy was instrumental in the opening phases of the 1991 Gulf War with Iraq; Navy ships launched hundreds of Tomahawk II cruise missiles, and naval aircraft flew sorties from six carriers in the Persian Gulf and Red Sea. The battleships Missouri and Wisconsin fired their 16-inch guns for the first time since the Korean War at several targets in Kuwait in early February. In 1999, hundreds of Navy and Marine Corps aircraft flew thousands of sorties from bases in Italy and carriers in the Adriatic against targets in Serbia and Kosovo to try to stop the ethnic cleansing in Kosovo. After a 78-day campaign Serbia capitulated to NATO's demands.
In the wake of a tidal wave of commanding officers tossed overboard when their commands had run aground (sometimes literally), in 2012 the CNO ordered a change of course for Navy-wide commanding officer selection to try to free the Navy from its doldrums.
In March 2007, the U.S. Navy reached its smallest fleet size since World War I, with 274 ships. Since the end of the Cold War, the Navy has shifted its focus from preparations for large-scale war with the Soviet Union to special operations and strike missions in regional conflicts. The Navy participated in Operation Enduring Freedom and Operation Iraqi Freedom, and is a major participant in the ongoing War on Terror, largely in this capacity. Development continues on new ships and weapons, including the Gerald R. Ford-class aircraft carrier and the Littoral combat ship. One hundred and three U.S. Navy personnel died in the Iraq War. U.S. Navy warships launched cruise missiles against military targets in Libya during Operation Odyssey Dawn to enforce a UN resolution.
Former U.S. Navy admirals who head the U.S. Naval Institute have raised concerns about what they see as the Navy's ability to respond to 'aggressive moves by Iran and China.' As part of the pivot to the Pacific, Defense Secretary Leon E. Panetta said that the Navy would switch from a 50/50 split between the Pacific and the Atlantic to a 60/40 split favoring the Pacific, but the Chief of Naval Operations, Admiral Jonathan Greenert, and the Chairman of the Joint Chiefs of Staff, General Martin Dempsey, have said that this would not mean "a big influx of troops or ships in the Western Pacific". The pivot continues a long trend toward the Pacific: during the Cold War, with its focus on the Soviet Union, 60 percent of the American submarine fleet was stationed in the Atlantic; the fleet later shifted to an even split between the coasts, and then, in 2006, to 60 percent of submarines stationed on the Pacific side to counter China. The pivot is not entirely about numbers, as some of the most advanced platforms will now have a Pacific focus, where their capabilities are most needed. However, even a single incident can make a big dent in a fleet of modest size with global missions.
- Stewart, Joshua (16 April 2012). "SECNAV: Navy can meet mission with 300 ships". Navy Times. Retrieved 7 November 2012.
- Freedberg, Sydney J., Jr. (21 May 2012). "Navy Strains To Handle Both China And Iran At Once". Aol Defense. Retrieved 7 November 2012.
- Jonathan R. Dull, American Naval History, 1607–1865: Overcoming the Colonial Legacy (University of Nebraska Press; 2012)
- Miller 1997, p. 15
- Howarth 1999, p. 6
- Westfield, Duane. Purdin, Bill (ed.). "The Birthplace of the American Navy". Marblehead Magazine. Retrieved 26 April 2011.
- "Establishment of the Navy, 13 October 1775". Naval History & Heritage Command. US Navy. Retrieved 5 November 2009.
- Miller 1997, p. 16
- Miller 1997, p. 17
- Miller 1997, pp. 21–22
- Miller 1997, p. 19
- Howarth 1999, p. 16
- Howarth 1999, p. 39
- Sweetman 2002, p. 8
- Sweetman 2002, p. 9
- Robert W. Love, Jr., History of the U.S. Navy (1992) vol 1 pp 27–41
- "Alliance". Dictionary of American Naval Fighting Ships. Navy Department, Naval History & Heritage Command. Retrieved 23 November 2009.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Miller 1997, pp. 33–35
- Howarth 1999, pp. 65–66
- Sweetman 2002, p. 14
- "The First 10 Cutters". United States Coast Guard. Retrieved 12 April 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "U.S. Coast Guard History Program". United States Coast Guard. Retrieved 25 November 2009.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Howarth 1999, pp. 49–50
- Miller 1997, pp. 35–36
- Sweetman 2002, p. 15
- Sweetman 2002, p. 16
- "Action between U.S. Frigate Constellation and French Frigate Insurgente, 9 February 1799". Naval History & Heritage Command. US Navy. Retrieved 18 November 2009.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Miller 1997, p. 40
- Miller 1997, pp. 45–46
- Miller 1997, p. 46
- Sweetman 2002, p. 19
- Sweetman 2002, p. 22
- Miller 1997, pp. 52–53
- Miller 1997, p. 59
- David Stephen Heidler; Jeanne T. Heidler (2004). Encyclopedia of the War of 1812. Naval Institute Press. p. 218.
- Miller 1997, p. 58
- Sweetman 2002, p. 23
- Miller 1997, p. 65
- Sweetman 2002, p. 26
- Sweetman 2002, p. 30
- Miller 1997, p. 68
- Howarth 1999, p. 109
- Miller 1997, p. 72
- Miller 1997, pp. 75–77
- Sweetman 2002, pp. 34–35
- Miller 1997, p. 84
- Miller 1997, p. 94
- B. R. Burg, "Sodomy, Masturbation, and Courts-Martial in the Antebellum American Navy," Journal of the History of Sexuality, 23 (Jan. 2014), 53–78. online
- Harold Langley, Social Reform in the United States Navy, 1798–1862 (University of Illinois Press, 1967)
- Sweetman 2002, p. 35
- Sweetman 2002, p. 37
- Miller 1997, p. 87
- Sweetman 2002, p. 44
- Miller 1997, p. 103
- Sweetman 2002, p. 54
- Sweetman 2002, pp. 40–44
- Sweetman 2002, pp. 48–51
- Sweetman 2002, pp. 54–55
- Dudley, William S. (1981). "Going South: U. S. Navy Officer Resignations & Dismissals on the Eve of the Civil War". Naval History & Heritage Command. US Navy. Retrieved 6 October 2010.
- Howarth 1999, p. 182
- Howarth 1999, pp. 184–185
- Dudley, William S. "CSS Alabama: Lost and Found". Naval History & Heritage Command. US Navy. Retrieved 6 October 2010.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Howarth 1999, p. 191
- Howarth 1999, pp. 208–209
- Howarth 1999, pp. 203–205
- Howarth 1999, pp. 206–207
- Luraghi 1996, pp. 334–335
- Miller 1997, p. 114
- Naval Encyclopedia 2010, p. 462
- Miller 1997, pp. 144–147
- Wolters, Timothy S. (January 2011). "A Material Analysis of Late-Nineteenth-Century U.S. Naval Power". Technology and Culture. 52 (1). doi:10.1353/tech.2011.0023.
- Sweetman 2002, p. 84
- "Medal of Honor recipients Korean Campaign 1871". United States Army Center of Military History. Retrieved 22 July 2010.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Swann 1965, pp. 141–142
- Swann 1965, pp. 152–154
- Sondhaus 2001, pp. 126–128
- Sondhaus 2001, pp. 173–179
- Miller 1997, p. 149
- Sweetman 2002, p. 87
- Miller 1997, p. 153
- Miller 1997, p. 144
- Miller 1997, p. 155
- Katherine C. Epstein, "No One Can Afford to Say 'Damn the Torpedoes': Battle Tactics and U.S. Naval History before World War I," Journal of Military History 77 (April 2013), 491–520.
- Howarth 1999, pp. 249–250
- Howarth 1999, pp. 253–257
- Miller 1997, pp. 163–165
- Howarth 1999, p. 288
- Howarth 1999, p. 275
- Howarth 1999, p. 278
- Miller 1997, p. 169
- Miller 1997, pp. 166–168
- Miller 1997, pp. 170–171
- Anderson 2008, p. 106
- Sweetman 2002, pp. 116–117
- Howarth 1999, pp. 301–302
- Sweetman 2002, p. 121
- Miller 1997, p. 186
- Henry Woodhouse (1917). Text Book of Naval Aeronautics. Century. pp. 174–75.
- Theodore A. Thelander, "Josephus Daniels and the Publicity Campaign for Naval and Industrial Preparedness before World War I," North Carolina Historical Review (1966) 43#3 pp 316–332
- Love, History of the U.S. Navy(1992) 1:458–78
- Love, History of the U.S. Navy(1992) 1:479–81
- Michael Simpson (1991). Anglo-American naval relations, 1917–1919. Scolar Press.
- Lee A. Craig (2013). Josephus Daniels: His Life and Times. U. North Carolina Press. pp. 364–65.
- Howarth 1999, p. 309
- Sweetman 2002, p. 124
- Sweetman 2002, p. 122
- Howarth 1999, p. 324
- Jeffery S. Underwood, The wings of democracy: the influence of air power on the Roosevelt Administration, 1933-1941 (1991) p. 11
- Howarth 1999, pp. 339–342
- Howarth 1999, pp. 341–342
- Thomas Wildenberg, "Billy Mitchell Takes on the Navy." Naval History (2013) 27#5
- Howarth 1999, pp. 357–358
- Morison 2007, pp. 21–22
- Morison 2007, p. 23
- Rose 2007, p. 132
- ""The New Bases Acquired for old Destroyers"". Guarding the United States and its Outposts. United States Army Center of Military History. 1964. CMH Pub 4-2.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Samuel Eliot Morison (2001). History of United States Naval Operations in World War II: The Battle of the Atlantic, September 1939-May 1943 (reprint ed.). University of Illinois Press. p. 94.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Quoted in Talbott, Naval War College Review (1984) 37#1 p 56
- Gary E. Weir, "The Search for an American Submarine Strategy and Design: 1916-1936," Naval War College Review (1991) 44#1 pp 34-48. online
- I. J. Galantin (1997). Submarine Admiral: From Battlewagons to Ballistic Missiles. U. of Illinois Press. p. 29.
- Joel Ira Holwitt (2009). "Execute against Japan": The U.S. Decision to Conduct Unrestricted Submarine Warfare. Texas A&M U.P. p. 155.
- J. E. Talbott, "Weapons Development, War Planning and Policy: The U.S. Navy and the Submarine, 1917–1941," Naval War College Review (1984) 37#1 pp 53–71. online
- Thomas Buell, Master of Sea Power: A Biography of Fleet Admiral Ernest J. King (1980)
- Townsend Hoopes and Douglas Brinkley, Driven Patriot: The Life and Times of James Forrestal (2012)
- Thomas B. Buell, "Guadalcanal: Neither Side Would Quit," U.S. Naval Institute Proceedings (1980) 106#4 pp 60–65
- Edwin P. Hoyt, How They Won the War in the Pacific: Nimitz and His Admirals (2000) excerpt and text search
- John Wukovits, Admiral "Bull" Halsey: The Life and Wars of the Navy's Most Controversial Commander (2010)
- Thomas B. Buell, The Quiet Warrior: A Biography of Admiral Raymond A. Spruance (2009)
- John Mack, "Codebreaking in the Pacific: Cracking the Imperial Japanese Navy's Main Operational Code, JN-25," The RUSI Journal (2012) 157#5 pp 86-92 DOI:10.1080/03071847.2012.733119
- Walter R. Borneman, The Admirals: Nimitz, Halsey, Leahy, and King—The Five-Star Admirals Who Won the War at Sea (2012) excerpt and text search
- David C. Fuquea, "Task Force One: The wasted assets of the United States Pacific battleship fleet, 1942," Journal of Military History (1997) 61#4 pp 707-734
- Love, 2:1–39
- Cory Graff (2009). F6F Hellcat at War. Zenith. p. 5.
- James P. Levy, "Race for the Decisive Weapon," Naval War College Review (2005) 58#1 pp 136–150.
- Trent Hone, "'Give Them Hell!': The US Navy's Night Combat Doctrine and the Campaign for Guadalcanal," War in History (2006) 13#2 pp 171–199
- Richard B. Frank, "Crucible at Sea," Naval History (2007) 21#4 pp 28–36
- Howarth 1999, pp. 418–424
- Sweetman 2002, pp. 159–160
- Howarth 1999, p. 436
- Blair 2001, p. 819
- Sweetman 2002, pp. 173–174
- Miller 1997, pp. 239–243
- Sweetman 2002, pp. 181–182
- Sweetman 2002, p. 194
- Howarth 1999, pp. 471–472
- Howarth 1999, pp. 476
- "Women In Military Service For America Memorial". Womensmemorial.org. 27 July 1950. Retrieved 9 August 2015.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Miller 1997, pp. 255–257
- Miller 1997, pp. 245–247
- Howarth 1999, pp. 490–493
- Polmar and Moore. Cold War Submarines, pp. 353–354n43.
- Polmar and Moore. Cold War Submarines, p. 63.
- Miller 1997, pp. 261–271
- Howarth 1999, pp. 530–531
- Miller 1997, pp. 272–282
- "US Navy in Desert Storm/Desert Shield". Naval History & Heritage Command. US Navy. Retrieved 29 November 2008.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Miller 1997, pp. 294–296
- Sweetman 2002, pp. 278–282
- Sweetman 2002, pp. 302–303
- Thompson, Mark (12 June 2012). "New Standards for Navy Skippers". Time. Retrieved 7 November 2012.
- Hampson, Rick (28 December 2011). "West Point's Quiet Place Of Honor, Lost Dreams". USA Today. p. 1.
- "Joint Task Force Odyssey Dawn". USNavyEurope-Africa.
- Perlez, Jane (1 June 2012). "Panetta Outlines New Weaponry for Pacific". The New York Times. Retrieved 7 November 2012.
- Carroll, Chris (10 January 2012). "CNO: Don't expect more troops, ships in Pacific". Stars and Stripes. Retrieved 7 November 2012.
- Carroll, Chris (7 June 2012). "New Pacific focus won't include massive troop influx, Dempsey says". Stars and Stripes. Retrieved 7 November 2012.
- Mcavoy, Audrey (11 June 2012). "Navy's most advanced to the Pacific". San Francisco Chronicle. Associated Press.
- "So, A Cruiser and a Sub Meet near a Sandbar (CG 56 & SSN 765)". Defense Industry Daily. 6 November 2012. Retrieved 7 November 2012.
- Blair, Clay (2001). Silent Victory: The U.S. Submarine War Against Japan. Annapolis: Naval Institute Press. ISBN 1-55750-217-X.
- Howarth, Stephen (1999). To Shining Sea: a History of the United States Navy, 1775–1998. Norman, OK: University of Oklahoma Press. ISBN 0-8061-3026-1. OCLC 40200083.
- Langley, Harold. Social Reform in the United States Navy, 1798–1862 (University of Illinois Press, 1967)
- Love, Robert W., Jr. (1992). History of the U.S. Navy (2 vol.).
- Miller, Nathan (1997). The U.S. Navy: A History (3rd ed.). Annapolis, MD: Naval Institute Press. ISBN 1-55750-595-0. OCLC 37211290.
- Morison, Samuel Eliot (2007). The Two-Ocean War: A Short History of the United States Navy in the Second World War. Annapolis, MD: Naval Institute Press. ISBN 1-59114-524-4.
- Rose, Lisle (2007). Power at Sea: The Age of Navalism, 1890-1918. Jefferson City, MO: University of Missouri Press. ISBN 0-8262-1701-X.
- Sondhaus, Lawrence (2001). Naval Warfare 1815–1914. London: Routledge. ISBN 0-415-21478-5. OCLC 44039349.
- Swann, Leonard Alexander, Jr. (1965). John Roach, Maritime Entrepreneur: the Years as Naval Contractor 1862–1886. Annapolis, MD: Naval Institute Press. ISBN 978-0-405-13078-6. OCLC 6278183.
- Sweetman, Jack (2002). American Naval History: An Illustrated Chronology of the U.S. Navy and Marine Corps, 1775-present. Annapolis, MD: Naval Institute Press. ISBN 1-55750-867-4.
- Albertson, Mark (2008). They'll Have to Follow You!: The Triumph of the Great White Fleet. Mustang, OK: Tate Publishing. ISBN 1-60462-145-1. OCLC 244006553.
- Baer, George W. (1994). One Hundred Years of Sea Power: The U.S. Navy, 1890–1990.
- Bennett, Michael J. Union Jacks: Yankee Sailors in the Civil War (University of North Carolina Press, 2003)
- Dull, Jonathan R. American Naval History, 1607-1865: Overcoming the Colonial Legacy (University of Nebraska Press; 2012) excerpt and text search; full text online
- Hagan, Kenneth J. and Michael T. McMaster, eds. In Peace and War: Interpretations of American Naval History (2008), essays by scholars
- Isenberg, Michael T. Shield of the Republic: The United States Navy in an Era of Cold War and Violent Peace 1945-1962 (1993)
- McKee, Christopher. A Gentlemanly and Honorable Profession: The Creation of the U.S. Naval Officer Corps, 1794–1815 (Naval Institute Press, 1991)
- McPherson, James M. (2012). War on the Waters: The Union and Confederate Navies, 1861-1865. University of North Carolina Press.
- Potter, E.B. Sea Power: A Naval History (1981), battle history
- Rose, Lisle A. Power at Sea, Volume 1: The Age of Navalism, 1890-1918 (2006) excerpt and text search vol 1; Power at Sea, Volume 2: The Breaking Storm, 1919-1945 (2006) excerpt and text search vol 2; Power at Sea, Volume 3: A Violent Peace, 1946-2006 (2006) excerpt and text search vol 3
- Symonds, Craig L. Decision at Sea: Five Naval Battles that Shaped American History (2006) excerpt and text search; Lake Erie, Hampton Roads, Manila Bay. Midway, Persian Gulf
- Tucker, Spencer C., ed. (2010). The Civil War Naval Encyclopedia. 2. Santa Barbara, CA: ABC-CLIO. ISBN 1-59884-338-9.
- Turnbull, Archibald Douglas, and Clifford Lee Lord. History of United States Naval Aviation (Ayer Co Pub, 1972) to 1939
- Hackemer, Kurt H. "The US Navy, 1860–1920." in James C. Bradford A Companion to American Military History (2 vol 2009) 1: 388–98
- Holwitt, Joel I. "Review Essay: Reappraising the Interwar U.S. Navy," Journal of Military History (2012) 76#1 193–210
- McKee, Christopher. "The US Navy, 1794–1860: Men, Ships, and Governance." in James C. Bradford A Companion to American Military History (2 vol 2009) 1: 378-87.
- Winkler, David F. "The US Navy since 1920." in James C. Bradford A Companion to American Military History (2 vol 2009) 1: 399–410.
- "Naval History & Heritage Command's official website". U.S. Navy.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "A History of the Navy in 100 Objects". United States Naval Academy. U.S. Navy. 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- A Short History of the United States Navy by Admiral George R. Clark et al. (textbook written for use at the Naval Academy by its Commandant of Midshipmen; rev. ed. 1927)
- National Museum of the U.S. Navy
- Hampton Roads Naval Museum
- Great Lakes Naval Museum
- National Naval Aviation Museum
- Naval Museum of Armament and Technology
- Naval Undersea Museum
- Naval War College Museum
- Puget Sound Navy Museum
- Patuxent River Naval Air Museum
- U.S. Navy Seabee Museum
- Submarine Force Library & Museum
- U.S. Naval Academy Museum
- U.S. Navy Supply Corps Museum
| 0.5424
|
FineWeb
|
Is there a way to identify Source Items in a list that have not been mapped to an item in the target for an Upload to a module in Grid Mode?
I am aware you can manually identify the non-mapped items by scanning the Mapping tab, but this seems inefficient and error-prone (especially with large data sets). When you upload with a non-mapped item, no error is recorded, and hence the missing items are difficult to identify.
My example below uses only a small data set, but it illustrates the question.
It's not currently an issue for me, but I was just thinking about how you could confirm the completeness of uploads with large data sets. I could definitely see scenarios where one or more Source items are slightly different from the Target and it would be difficult to determine whether the full data set has been imported.
You could always create a Dummy list item in the data source which is the SUM of all the other list items in the source. You could then create a module with this item mapped through and compare it against the results from the upload module to identify variances.
Think it would be helpful for Anaplan to have a native solution for this because it doesn't matter how fantastic the calculation engine is if the source data is incorrect!
But until then, I would follow what @Jared Dolich has explained. When you need robust data integration, you would build staging modules and landing modules. The unmapped entities would remain in the staging module and validated items would land in the landing/target module. This way the user/admin can know which data items didn't make their way into Anaplan.
Re: Upload - Non Mapped Source Items Identification
Similar to what all of you have mentioned, what we did was to load the file into a flat staging module and create a reconciliation module dimensioned by time and line items only. In the staging module, we create a line item that uses FINDITEM to identify the possibly unmapped list items for each list, and create a module view with a filter to show the unmapped items only. The reconciliation module summarizes data in the staging module and the target module, such as the total value in the source file (staging module), the total value in the target module, and the total value of unmapped items for each list. When the administrator notices that there is unmapped value in the reconciliation module, further investigation is done via module views in the staging module.
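Outside of Anaplan, the same staging-and-reconciliation idea can be sketched in a few lines of Python. This is only an illustration of the pattern described above (flag source keys that fail to match the target list, then compare totals); the item names and values are hypothetical, and it is not Anaplan syntax.

```python
# Hypothetical source rows and target list, illustrating the reconciliation pattern.
source_rows = [
    {"item": "Product A", "value": 100.0},
    {"item": "Product B", "value": 250.0},
    {"item": "Prodcut C", "value": 75.0},   # deliberate typo: this row will not map
]
target_list = {"Product A", "Product B", "Product C"}

# FINDITEM-style check: which source items have no match in the target list?
unmapped = [row for row in source_rows if row["item"] not in target_list]

# Reconciliation totals: source total vs. what would actually land in the target module.
source_total = sum(row["value"] for row in source_rows)
loaded_total = sum(row["value"] for row in source_rows if row["item"] in target_list)

print("Unmapped items:", [row["item"] for row in unmapped])
print("Source total:", source_total, "| Loaded total:", loaded_total,
      "| Variance:", source_total - loaded_total)
```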
| 0.9243
|
FineWeb
|
By adding the right amount of heat, researchers have developed a method that improves the electrical capacity and recharging lifetime of sodium ion rechargeable batteries, which could be a cheaper alternative for large-scale uses such as storing energy on the electrical grid.
To connect solar and wind energy sources to the electrical grid, grid managers require batteries that can store large amounts of energy created at the source. Lithium ion rechargeable batteries, common in consumer electronics and electric vehicles, perform well but are too expensive for widespread use on the grid because many batteries will be needed, and they will likely need to be large. Sodium is the next best choice, but the sodium-sulfur batteries currently in use run at temperatures above 300 degrees Celsius, or three times the temperature of boiling water, making them less energy efficient and less safe than batteries that run at ambient temperatures.
Battery developers want the best of both worlds — to use both inexpensive sodium and use the type of electrodes found in lithium rechargeables. A team of scientists at the Department of Energy’s Pacific Northwest National Laboratory and visiting researchers from Wuhan University in Wuhan, China used nanomaterials to make electrodes that can work with sodium, they reported June 3 online in the journal Advanced Materials.
“The sodium-ion battery works at room temperature and uses sodium ions, an ingredient in cooking salt. So it will be much cheaper and safer,” said PNNL chemist Jun Liu, who co-led the study with Wuhan University chemist Yuliang Cao.
| 0.7799
|
FineWeb
|
Interactive Java Tutorials
Light Diffraction Through a Periodic Grating
A model for the diffraction of visible light through a periodic grating is an excellent tool with which to address both the theoretical and practical aspects of image formation in optical microscopy. Light passing through the grating is diffracted according to the wavelength of the incident light beam and the periodicity of the line grating. This interactive tutorial explores the mechanics of periodic diffraction gratings when used to interpret the Abbe theory of image formation in the optical microscope.
In its simplest form, a line or amplitude grating is composed of a linear array of thin opaque strips (or slits) having a periodic spacing and suspended on a solid matrix, usually an optical glass plate. The most convenient and accurate method of forming gratings of this type is through the use of metallic vacuum deposition techniques. The spacing between the centers of two adjacent slits (d) is called the grating period, and the reciprocal of d is termed the spatial frequency, which is measured in the number of slits or periods per unit length.
The tutorial initializes with a grating periodicity of 1000 nanometers (producing a spatial frequency equal to 1000 lines/millimeter) and an incident light beam of 400-nanometer wavelength impacting the grating at a 90-degree angle. Each slit in the grating diffracts light over the entire range of angles covering 180 degrees on the opposite side of the grating. The Spatial Frequency slider is utilized to change the grating periodicity and the Wavelength slider alters the wavelength of the incident light wave.
Individual light waves diffracted from successive grating slits are emitted as concentric spherical wavelets that interfere both constructively and destructively because they are all derived from the same wavefront and are therefore in phase. Wavefronts passing through the grating slits that are parallel to the incident light wave are referred to as zeroth order (undiffracted) or direct light. Diffracted higher-order wavefronts are inclined at an angle (θ) according to the equation:
sin(θ) = Mλ/d
where λ is the wavelength of the wavefront, d is the grating slit spacing and M is an integer termed the diffraction order (e.g., M = 0 for direct light, ±1 for first order diffracted light, etc.) of light waves deviated by the grating. The combination of diffraction and interference effects on the light wave passing through the periodic grating produces a diffraction spectrum, which occurs in a symmetrical pattern on both sides of the zero order direct light wave.
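As a quick numerical check of the grating equation using the tutorial's initial settings (a 1000-nanometer period and 400-nanometer illumination), the short sketch below lists the diffraction orders that can propagate and their angles; the helper function is just for illustration.

```python
import math

def diffraction_angles(wavelength_nm, period_nm):
    """Angles (in degrees) of the orders allowed by sin(theta) = M * wavelength / period."""
    angles = {}
    order = 0
    while order * wavelength_nm / period_nm <= 1.0:
        angles[order] = math.degrees(math.asin(order * wavelength_nm / period_nm))
        order += 1
    return angles

# 1000 nm period (1000 lines/mm) illuminated at 400 nm:
print(diffraction_angles(400, 1000))
# approximately {0: 0.0, 1: 23.6, 2: 53.1}; order 3 would require sin(theta) = 1.2, so it cannot propagate
```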
If the diffracted light waves produced by the periodic grating are then passed through a convergent lens, they appear as a series of bright spots on the focal plane of the lens. The intensity of these spots decreases as the diffraction order increases, and the number of higher order diffracted waves that can enter the lens is restricted by the size of the lens aperture. Those waves that enter the lens form what is termed a Fraunhofer diffraction spectrum (also called a Fourier spectrum) that can be observed at the focal plane of the lens.
The periodic diffraction grating can now be used to examine Ernst Abbe's theory of image formation in the optical microscope. When the line grating is placed on a microscope stage and illuminated with a parallel beam of light that is restricted in size by the condenser aperture diaphragm, both zero and higher order diffracted light rays enter the front lens of the objective. Direct light that passes through the grating unaltered is imaged in the center of the optical axis on the objective rear focal plane. First and higher order diffracted light rays enter the objective at an angle and are focused at discrete points (a Fraunhofer diffraction pattern) on both sides of the direct light beam at the objective rear focal plane. A linear relationship exists between the position of the diffracted light beams and their corresponding points on the periodic grating.
If the periodic grating placed on the microscope stage is a micrometer or similar grid, then the Fraunhofer diffraction pattern can be observed by removing one of the microscope eyepieces and examining the objective rear focal plane (or by using a phase telescope or Bertrand lens). First, reduce the condenser aperture size to a minimal value then, using a low-power (10x or 20x) objective, focus the bright central spot on the focal plane while viewing through the eyepiece tube. A series of higher-order light spots of diminishing intensity can now be observed flanking the central spot. The diffracted light spots display a spectrum of color with lower wavelengths (blue and purple) nearer the optical axis and higher wavelengths (red) spread on the periphery. Spacing between the light spots is dependent upon the grating interval and the wavelength of light passed through the condenser. Finely spaced gratings and longer wavelengths produce larger spot intervals than do coarse gratings and lower wavelengths.
At the microscope intermediate image plane, coherent light emitted from the diffracted orders at the objective rear aperture undergoes interference to produce an intermediate image of the periodic grating, which is further magnified by the eyepieces. The integrity of the intermediate image depends upon how many diffracted orders produced by the grating pass through the aperture and are captured by the objective front lens. Objectives having a higher numerical aperture are able to gather more of the diffracted light waves and produce clearly better images.
Abbe determined that in order to form a recognizable image, the objective must capture the zeroth order light rays and at least one of the higher order diffracted waves or two adjacent orders. Because the diffraction angle is dependent upon the grid spacing and the wavelength is determined by the refractive index (n) of the medium between the grating and the objective front lens, the diffraction equation (given above) can be rewritten as:
d = λ/(n sin(θ))
Abbe originally defined the numerical aperture (NA) of the objective as:
NA = n sin(θ)
so the equation reduces to:
d = λ/NA
This equation is one of the most fundamental to optical microscopy and demonstrates that an objective's ability to resolve fine details in a specimen, such as a periodic grating, is dependent upon both the wavelength of illuminating light rays and the numerical aperture. Thus, the lower the wavelength or the higher the numerical aperture, the greater the resolving power of the objective.
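To give the relation d = λ/NA a sense of scale, the sketch below evaluates it for a few representative wavelength and numerical aperture combinations; these example values are assumptions for illustration, not figures from the tutorial.

```python
# Abbe resolution limit d = wavelength / NA for a few assumed objective/illumination pairs.
for wavelength_nm, numerical_aperture in [(550, 0.25), (550, 0.95), (550, 1.40)]:
    d_nm = wavelength_nm / numerical_aperture
    print(f"wavelength = {wavelength_nm} nm, NA = {numerical_aperture:.2f} "
          f"-> smallest resolvable spacing d = {d_nm:.0f} nm")
```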
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Matthew J. Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
© 1998-2019 by
Michael W. Davidson and The Florida State University.
All Rights Reserved. No images, graphics, scripts, or applets may be reproduced or used in any manner without permission from the copyright holders. Use of this website means you agree to all of the Legal Terms and Conditions set forth by the owners.
| 0.9818
|
FineWeb
|
History and What Makes You Not a Buddhist
A guide to Buddhism for Westerners that's chock full of very hip, timely examples that will date the book within a year. Until I finished the book and was ready to drop it like a hot potato with its judgmental ending metaphor--that not believing the 4 noble truths is like reading a medicine bottle and not taking the medicine--I had not noticed the implicit judgment in the title itself. It actually tells the reader up front that you are not a Buddhist. This guy is to Pema Chodoron like kayaking the Colorado River rapids are to canoeing on a placid lake. I don't trust his take on emotions. I find this with a lot of men who write about Buddhism. They treat emotions like annoying children that just need discipline rather than potential sources of wisdom.
History: A Very Short Introduction by John Arnold
This couldn't be shorter, but it's chock full of good insights about why and how to do history. My favorite story within the book was about the multiple versions of Sojourner Truth's famous Ain't I a Woman speech, which appeared in both standard English and dialect versions. The dialect version captured the public imagination, but historians believe the standard version was more likely to be her actual voice. This begs all sorts of interesting questions about the appearance of authenticity and the question of accuracy. My favorite metaphor was that of history as a "foreign land," because I think this is how I orient myself to my historical work. I look at the past for moments when things changed, and then try to understand as best I can what happened in the "country" of the past such that the shift I've identified occurred.
| 0.6923
|
FineWeb
|
Your thesis should tell the reader exactly what you are going to compare or contrast there are two basic methods or styles of organizing a compare and contrast essay with the subject by subject or block method, you address each subject in separate paragraphs if you have selected the subject by subject method for your. Teaching compare and contrast essay writing author: kucor handbook for writers: excellence in literature, johnston and campbell - a writer's handbook for introduction sample perhaps written comparison and contrast will become a little easier if we review some ideas about in the block method, you describe all. Comparison and contrast essay writing style: block arrangement in using the block arrangement, you're going to describe each of the two things you're comparing (object a and object b) in two separate paragraphs you're going to make statements about object a to form a single paragraph you do the same for object b. To compare and contrast two or more things, be fully acquainted with their details use either the block method or the point-by-point method for your essay. Adapted from the comparative essay, by v visvis and j plotnick, for the university of toronto's writing lab essay in the alternating method, you find related points common to your central subjects a and b, and alternate let's apply the block method to the comparison between the french and russian revolutions. Here given is a very helpful schedule of catchy topics for your compare and contrast essay writing a compare and contrast essay helps there are two basic patterns writers use for comparison/contrast essays: the block method and the point-by-point method (4) choose the 3 most significant points of.
In this video, i highlight the basic differences between point-by-point and block- style essay structures, and i give examples of both for a compare and thank you for this video, since i am a visual learner it helped me to better understand both methods where can i find research paper writing company. There are two basic patterns writers use for comparison/contrast essays: the block method and the point-by-point method in the block method, you describe all the similarities in the first body paragraph and then all the differences in the second body paragraph the guideline below will help you remember what you need to. A) block approach this organizational pattern is most effective when used on short essays, such as in-class essays the body of such an essay is organized by discussing one subject, point by point, in complete detail before moving on to the next subject the writer should select points by which both subjects can be. This site has many resources to help you when teaching the compare/contrast essay following is an explanation of two methods students can use when writing a comparison essay two methods for writing a comparison essay are the block and the feature-by-feature methods use the following information.
The purpose is to compare and contrast the works under review, to identify key themes and critical issues, and to evaluate each writer's contributions to understanding the overarching the rest of the essay, whether organized by block method or point-by-point, will be your analysis of the key differences among the books. For this reason, writers generally use this method for longer essays please note: this method, like the block method, only offers an outline for the body of an essay remember, you also need to include an effective introduction and conclusion point-by-point method outline example: thesis: john stewart mill and.
Block method: subject-by-subject pattern. In the block method (AB), you discuss all of A, then all of B. For example, a comparative essay using the block method on the French and Russian revolutions would address the French Revolution in the first half of the essay and the Russian Revolution in the second half. A comparison displays how two topics are alike or similar; a contrast displays how two topics are dissimilar or different. Students must identify adequate statements for a contrast essay and write appropriate thesis statements and concluding statements to end the comparison and contrast essay.
Compare and contrast essays, by Anne Garrett: when writing compare and contrast essays, one is often dealing with a vast amount of detail; the subject must be broken down into parts for analysis. For the order of presentation, choose a method: either one-side-at-a-time or point-by-point. In a block arrangement the body paragraphs are organised according to the objects: the block arrangement discusses one of the objects in the first body paragraph and the other object in the second, and all the ideas provided in the first paragraph are also provided in the second paragraph in the same order. Santa Barbara City College outlines two methods you can use when writing a compare and contrast essay, each of which helps you convey the similarities and differences between your two ideas: the point-by-point method and the block method. So how do these methods break down?
| 0.9421
|
FineWeb
|
A Nickel-free seal additive for use in medium temperature sealing applications
This liquid concentrate is a weakly acidic solution, free of heavy metals, for use in medium temperature sealing of anodized aluminium, including anodized and coloured parts. It improves sealing quality and prevents sealing smut formation on the surface. The process is Qualanod approved, and all international sealing standards (admittance, weight loss, dye-spot test, etc.) are met when this sealing additive is used.
Features & Benefits
- Qualanod approved n°009
- Eco-Friendly – Ni Free formulation, no heavy metals, no CMR substances
- Organic Based sealing chemistry
- No Yellowing after the Seal application
- Not Compatible with Dyes due to Bleeding
- Optimum quality values according to International standards ISO 2143 (dye-spot test), ISO 3210 (weight loss), ISO 2931 (admittance)
This process belongs to the ALUMAL SEAL SERIES
Because of the porous structure obtained in commercial sulfuric acid anodizing, the pores require sealing to optimize performance of the layer, including corrosion resistance and dye colour intensity, colour stability and fade resistance. Sealing of anodized layers is simply filling up the pores with some type of larger molecule, and converting the inside of the pore to a hydrated aluminium oxide which closes/plugs the pore.
There are different types of seals that offer various levels of corrosion performance and dye absorption/stability. ALUMAL Cold Seal technology provides maximum protection and corrosion resistance for sulfuric anodized coatings. The oxide layer benefits from the sealing process after formation or after colouring, and this specific type of hydration of the oxide pores helps with stain and corrosion resistance. Because of the higher energy costs of higher temperature sealing (75-85 °C for mid temperature seals and 100 °C for traditional hot seals), cold sealing at room temperature is favoured, and it allows all the proper chemical reactions to take place for improving anodized layer performance. Sealing at room temperature also provides better stability of colouring or dye processing and offers improved colour fade resistance over conventional high temperature or mid temperature sealing processes.
| 0.8291
|
FineWeb
|
The king of all meals, breakfast is one of the most important meals of the day. The word breakfast is quite literal: you are breaking your fast, having your first meal after "fasting" for, say, five to eight hours. Yes, sleeping not only rests your mind and your limbs, but relaxes your internal organs as well.
Now, the all-important question is, 'if breakfast is the most important meal of the day, what should be included in it?'
A breakfast is never complete unless it has an egg in it. Well, who does not like it sunny side up! Many among us like to savor the yolk at the end, and some of us consider the white to be the special treat, but it's safe to say that we all love having eggs.
Eggs are a great source of protein. They contain all nine essential amino acids in proportions that suit the dietary needs of the human body. Studies suggest that having two to three eggs a day can be part of a healthy lifestyle. However, consuming eggs on a daily basis comes with its own myths; here are the answers to the common myths and beliefs about eating eggs every day.
Many individuals among us consider eggs to be a non-vegetarian food. For strict vegetarians, foods like quinoa, buckwheat, spinach and fruits are the replacements for eggs. They might not be complete sources of protein, but they do provide the energy the body needs to function. The catch is that vegetarians must cook these foods carefully so that the crucial vitamins are not destroyed.
Non-vegetarians have no reason to worry! They can also get complete proteins from beef, chicken, fish and other meat products, besides eggs. They get the best of both: great taste and lots of protein.
Now let us see what makes an Egg such a dependable source of proteins. This dietary chart given below depicts the protein content in eggs, and compares its protein content with other foods like beef, milk, fish, nuts etc.
Egg Protein Chart:
Here is the detailed breakdown of the protein levels present in various eggs:
- An egg contains about 6.3 grams of protein: about 3.6 grams in the egg white and 2.7 grams in the egg yolk.
- Eggs also contain calories, which should be counted toward your total daily calorie intake.
- An average boiled egg has about 6 grams of protein.
- An omelet, a very common breakfast item made with eggs, contains about 10 grams of protein.
- A duck's egg has 15 grams of protein.
- A quail's egg has 2 grams of protein.
- Scrambled eggs made from two eggs and milk contain about 14 grams of protein.
[Read: Eggs: History, How to Use and Benefits ]
A. Apart from these, Eggs are also used in the following ways:
Eggs are a rich source of protein and are often used to make protein powders. These protein powders provide protein to people who are nutritionally deprived. You may be aware of whey, casein and soy protein powders, but have you heard of egg-white protein powders? The main benefits of egg-white protein powder are:
- It is lactose free, so those who are lactose-intolerant and can't have whey or casein protein powders can go for egg-white protein powder. Egg-white protein powder contains 25 grams of protein in a 30-gram serving. This protein content is similar to whey and casein, so one doesn't have to compromise on their daily dose of protein from these supplements.
- Whey is a fast-digesting protein while casein is a slow-digesting protein. Egg-white powder falls in between, so it helps muscle synthesis go on for longer.
- Egg-white powder is a complete protein because it has all nine essential amino acids. Few other naturally occurring foods offer the same amino acid profile.
B. The amino acids available in egg protein are exhaustive, and they give your body all the necessary amino acids. Adequate dietary protein intake is necessary, and it should include all the essential amino acids your body needs daily. An egg has all of them: histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan and valine. These amino acids are present in a proportion that suits the needs of the human body; hence the egg is often used as a yardstick to compare the protein content of other foods. Eggs not only provide the nine essential amino acids, they also contain nine other amino acids.
According to the Protein Digestibility Corrected Amino Acid Score (PDCAAS) whole egg, whey protein and soy protein score 1 on the scale of 0 to 1. However, the Amino Acid Score (AAS) rates the egg at 1.21, which is above human needs. The Protein Efficiency Ratio of eggs is 3.8 and the Biological Value of eggs is rated between 88 and 100. So each large egg provides a total of 6.29 grams of high quality protein, that’s why eggs are classified with meat in the Protein Foods Group.
The egg yolk contains a higher proportion of the egg's vitamins than the white, such as vitamins A, D, E and K. It also contains vitamin B6 and B12, folic acid, pantothenic acid, thiamine, calcium, copper, iron, manganese, phosphorus, selenium and zinc. So don't ignore the yolk just because it is high in calories; after all, you do need some calories for energy.
C. The egg protein chart is an effective guide. It keeps you informed about the protein you gain from consuming eggs, and it also points you to other foods that you could consume in case you miss out on eggs, so any lapse in the consumption of these foods can be compensated for.
Everyone knows who a vegetarian is but for those who do not know the technicalities here is the definition: A vegetarian is a person who doesn’t eat meat or any by-product from animal slaughter. There are some vegetarians who have limited themselves to certain foods that are considered non-vegetarian to have a wholesome diet. A well planned vegetarian diet can be healthy and nutritionally adequate. Here are some types of vegetarian diets:
- Vegans or Total Vegetarians: They eat only plant food like fruits, vegetables, seeds, legumes, nuts and grains.
- Lacto-Vegetarians eat plant foods as well as dairy products such as milk and cheese.
- Lacto-Ovo Vegetarians eat plant foods, dairy products and eggs. Most American vegetarians follow this diet.
- Semi-Vegetarians don’t eat red meats but they do have chicken or seafood with plant foods, eggs and dairy products.
Vegetarians generally receive adequate amount of nutrients, however, they do have to cut back on some nutrients like these:
Protein: Protein is not only important for the growth and maintenance of body tissues; it is an important component of enzymes and hormones, and it helps in the production of milk in lactating women. An assortment of plant foods like tofu, tempeh, whole grains, legumes, vegetables, seeds and nuts provides the essential amino acids.
The proteins in egg whites can be easily digested by the body, so wrestlers and bodybuilders swear by them. Athletes also have egg whites as a source of protein, since they provide a high ratio of protein to calories with very little or no fat. Eggs also contain abundant antioxidants that fight free radicals in the body, which may help protect your cells. There are so many benefits of eggs that you can't just ignore them. Eggs add flavor to many foods and make you strong from within.
Omega-3 Fatty Acids: Omega-3 fatty acids reduce the risk of cardiovascular disease and improve cognitive function and vision. The primary sources of omega-3 fatty acids are fish, organ meats and DHA-rich foods like eggs. Vegetarians who cannot get enough omega-3 fatty acids from vegetable sources alone may have to take supplements.
Calcium: Though calcium deficiencies in vegetarians are rare, there are some vegetables that inhibit calcium absorption. So in that case dairy and poultry products are needed to make the diet balanced.
Vitamin D: Vitamin D helps in calcium absorption from the digestive tract so it can be used for building strong bones and teeth. Among foods, the best sources of vitamin D are milk and eggs, so vegans who avoid both can lose out on dietary vitamin D.
Vitamin B12: Vegetarians need to pay special attention to this nutrient. The body needs small amounts of vitamin B12 for red blood cell formation and normal nerve function, and vitamin B12 deficiency can cause irreversible nerve damage. Vegans lack vitamin B12 in their diet and need to rely on fortified foods (such as fortified soy products) or vitamin B12 supplements, while other vegetarians can get it from dairy products.
Iron: Iron is found in both animal and plant foods, but iron from animal foods is absorbed by the body more easily. Iron from plant foods is absorbed less efficiently, partly because of the high fiber content: fiber is not absorbed by the body, and it can bind minerals like iron and hinder their absorption.
Zinc: Zinc is a mineral that is present in plant food but better absorbed from animal sources. So some vegetarian diets do not provide the recommended amount of zinc. So they have to eat nuts, cheese and soy products along with vitamin C rich foods to enable a better absorption of zinc.
Vegetarians should follow the diet principles recommended in the Dietary Guidelines for Americans. A well planned vegan diet can meet all the guidelines. The recommendations emphasize that 26 oz. per week of meat, poultry and eggs should be consumed.
D. The best part about eating eggs is that they are routinely consumed for breakfast and hence you don’t have to fit them forcibly into your daily diet schedule. Also they go very well as a complementary food with other dishes – so you don’t have to worry about eating them separately.
Best Ways to Cook Egg:
Have you ever thought about the right way of cooking eggs to maximize nutrition? Here are some guiding principles for making the most of the eggs lying in your fridge:
- Generally, applying heat to food is a naturally destructive process. If you heat egg whites, the protein becomes denatured and more bioavailable, and a protein called avidin also gets destroyed in the process, which is a good thing. So heating egg whites is beneficial. However, less heat should be applied to the egg yolk, as its fats and other nutrients tend to get damaged.
- Pastured egg yolks are among the best sources of fats and protein, so you should eat the egg yolk and not set it aside.
- Many good fats oxidize and become less beneficial, even harmful. This is true in the case of egg yolks, so it is best to leave egg yolks uncooked. If you plan to fry eggs, make sure you do not heat them in the presence of oxygen for a long duration: the heat can denature the proteins in the egg whites and yolks and create sticky, oxidized fats that your body cannot utilize, because oxygen accelerates the destructive process during heating.
Different Ways of Cooking An Egg Are:
- Soft boiling is the optimum way to cook an egg, as the fats and nutrients in the yolk essentially have three protective layers against oxidation: the water, the eggshell and the egg white. This way all the nutrients in the egg are preserved, while the egg white is cooked for the best protein utilization and the removal of avidin. In addition, making a soft boiled egg is faster and easier than frying an egg in a pan, and the yolk remains creamier and thicker.
- Poached eggs are loved by many for their pure taste. However, when the egg yolk is submerged in water surrounded only by the egg white, the protective shell layer is lost. It is also inconvenient to serve poached eggs.
- The thought of having raw eggs can make you scream, but it is arguably the best way to have eggs. Be careful not to have too many raw egg whites, because they contain a protein called avidin that can bind the B vitamin biotin and create health problems. If you can't have raw egg yolks or raw whole eggs on their own, blend them into your morning smoothie; add the eggs at the end and fold them in for a few seconds. This way you reduce the oxidative stress that comes from chopping the fats into very small particles and exposing them to oxygen.
- When eggs are hard boiled, the yolk reaches a higher temperature and the destructive process starts. However, because of the eggshell, contact with oxygen is limited, so some of the damage is controlled. Even so, you should not replace entire meals with hard boiled eggs.
- When making eggs sunny side up, the heat comes from the bottom of the pan, so the yolk remains largely preserved, but the protective water coating is lost. So it is advisable to have less of the world's favorite 'sunny side up'.
- To make an egg over easy, heat has to be applied to both sides, so the precious nutrients and fats in the egg yolk are largely lost.
- Scrambling eggs chops the fats and proteins into tiny particles and exposes them to heat and oxygen. Avoid this method of cooking if you are using conventional feedlot eggs: the fats present in conventional feedlot eggs are pro-inflammatory and do not need to be oxidized further. Even if you are using pastured eggs, this method remains the worst way to cook eggs, because the oxidation of fat and cholesterol makes it detrimental to your health.
Whichever way you prefer cooking an egg, make sure the egg yolk is preserved. Having an egg yolk is never bad.
Should You Eat Eggs Daily?
Whole eggs are high in calories, fat and cholesterol, so consuming eggs every day can put you at risk of heart disease by increasing your blood cholesterol levels. So it is better to stick to egg whites and other egg alternatives to reduce health risks. Here is why you should limit your egg intake:
- Excess Calories: Large eggs provide about 75 calories each. Would you believe that just a plate of scrambled eggs can fill you up with 225 calories? This many extra calories can cause weight gain: eating three eggs per day can lead to roughly one pound of weight gain in less than three weeks (the arithmetic is shown after this list).
- Increased Fat Intake: Consuming eggs on a daily basis increases your fat intake. While some fat is good, as it helps you absorb fat-soluble vitamins, the saturated fat from eggs can be harmful because it increases your risk of heart disease. In fact, three scrambled eggs at breakfast can use up a large share of your fat allowance for the entire day.
- High Cholesterol: Consuming eggs every day adds an excessive amount of cholesterol to the diet. Cholesterol is a fatty, waxy substance that builds up on arterial walls when you have too much of it in your bloodstream. The deposits become hard and stiff, which can result in blood clots, and your heart has to work harder to push blood through the arteries, increasing blood pressure. These harmful effects of cholesterol increase the risk of heart attacks. So keep your daily cholesterol intake below 200 milligrams; one large egg contains 185 milligrams of cholesterol, which uses up most of your daily cholesterol allotment, and having meat and seafood in your diet along with eggs adds even more cholesterol.
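For reference, here is the arithmetic behind the weight-gain estimate above, using the common rule of thumb that one pound of body fat corresponds to roughly 3,500 kcal (an approximation, and it assumes the egg calories are entirely surplus to your needs):

```latex
% Three large eggs at about 75 kcal each, eaten as surplus calories every day:
3 \times 75\ \text{kcal/day} = 225\ \text{kcal/day},
\qquad
\frac{3500\ \text{kcal per pound}}{225\ \text{kcal/day}} \approx 15.6\ \text{days}.
% That is roughly one pound of weight gain in a little over two weeks,
% consistent with the "less than three weeks" figure quoted above.
```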
All the fat and cholesterol come from the egg yolk, so opt for plain egg whites or separate the yolks from the whites before cooking. If you make three scrambled egg whites, your calorie count goes down to less than 60, and the cholesterol and fat content in egg whites is far less than that of whole eggs. You can also go for fat-free, cholesterol-free egg substitutes.
Are you already planning to stop eating eggs after reading the demerits of whole eggs? Blood cholesterol levels are mainly responsible for cardiovascular diseases and eggs are considered the main culprits behind increasing the blood cholesterol levels. Well that is a myth; let us first clear the misunderstanding by showing you what exactly increases the cholesterol levels and the role fat plays in that process.
Previously, all dietary cholesterol was thought to increase the risk of heart disease. The truth is that only certain types of blood cholesterol line the arteries and indicate a potential risk of heart attack. Cholesterol is needed by the body for proper functioning and is produced by the liver and other organs. The amount of dietary cholesterol is of less significance than blood cholesterol, which is driven mainly by saturated fat intake.
It is true that egg yolk contains a high amount of fat, but much of it is unsaturated, which is associated with "good" or HDL cholesterol (High-Density Lipoprotein). HDL cholesterol helps to remove LDL cholesterol (Low-Density Lipoprotein), which lines the blood vessels and can cause blood clots and heart attacks. So having eggs instead of other sources of animal protein can decrease your intake of saturated fat.
Studies show that in individuals who already had elevated levels of LDL cholesterol, the risk of heart disease did not increase when they started including eggs in their daily diet. However, the fat in eggs does contain calories, so limit your intake but never omit eggs from your diet entirely. If you cut eggs out of your weekly diet, you lose out on essential nutrients.
So are you going to put back the egg in the refrigerator or eat it? Now that all your doubts are clear you can eat an egg without feeling guilty at all!
Chickens that are raised in a natural environment and fed a proper diet that enhances their health tend to provide eggs with higher nutritional value. Often the chicken feed is spruced up with chemicals or supplements that make the chickens give eggs with a longer shelf life. So always check the labels before you buy eggs and make sure they are verified cage-free or free-range eggs.
Proteins are the building blocks of life, and every cell contains protein. You need protein in your diet to help your body repair cells and make new ones. Protein is also important for growth and development in children, teens and pregnant women. A lack of protein can lead to skin and hair problems, indigestion, a weak immune system and muscle-related problems. The egg protein chart above shows why eggs are such a dependable source of protein and how the different forms of egg provide different amounts of it.
I hope this article on proteins in eggs helps you attain good health!
| 0.9427
|
FineWeb
|
Fragments of five Cretaceous cm-size baddeleyite (ZrO2) and two zircon (ZrSiO4) crystals from the Mbuji-Mayi kimberlite (DR Congo) were measured for trace elements by the LA-ICP-MS technique. EPMA was applied adjacent to the laser spots to control Hf, which was used as the internal standard for all LA-ICP-MS measurements. The 22 analyses confirm essentially identical Hf concentrations on both the intra- and inter-grain scale. Trace elements also yield surprisingly similar overall patterns, although the individual megacrysts crystallized within magmas generated from differently depleted mantle domains (ɛHfi +5.1 to +10.2). Examined in detail, the trace elements show differences in concentration that are correlated either on the inter- or the intra-grain scale. The five baddeleyite megacrysts have high Th, U, Ta, and Nb, reaching up to 5000 times chondrite concentrations. In the intermediate to heavy REE they are enriched 30–120 times over chondrite but strongly depleted in the lightest REE, except for Ce, which shows a positive anomaly (Ce4+ substitutes for Zr4+). Corresponding (La/Yb)N values range between 0.006 and 0.040. The two zircon megacrysts show trace element patterns similar to those of baddeleyite, also having high Th and U but low Ta and Nb. Their intermediate to heavy REE are enriched 1 to 100 times over chondrite, and they also show strong depletion in the light REE, yielding (La/Yb)N of about 0.0001. Cerium is again an exception, at about 20 times chondrite abundance. The highest trace element concentrations are correlated with the lowest ɛHfi of +5.1, and the lowest concentrations with the highest ɛHfi of +10.2. This corroborates that not only ɛHfi but also the trace elements, in particular the HFSE, of the megacrysts reflect the level of trace elements available at the time of partial mantle melting and megacryst growth. Such growth occurred in the deep mantle at temperatures exceeding 2000 °C, which explains structural data indicative of the original presence of cubic ZrO2. On their way to the surface, the cubic baddeleyites were successively transformed to tetragonal and then monoclinic symmetry; these transformations were not accompanied by measurable chemical changes. All megacrysts record (1) ancient time-integrated Lu/Hf fractionation of their mantle sources, with the differences in ɛHfi most likely reflecting the combined effect of different degrees of fractionation and differences in age, and (2) the Cretaceous kimberlite event, during which the megacrysts formed from magmas highly enriched in trace elements and HFSE. These highly enriched magmas were most likely produced by very small degrees of partial melting of the originally depleted mantle. The important correlation between ɛHfi and the trace-element abundance of the individual megacrysts rules out any significant earlier re-enrichment of the individual mantle domains, which would have erased this correlation.
| 0.6752
|
FineWeb
|
Data ONTAP 7 and earlier
Mixed mode: the default domain mode setting on Windows 2000 Domain Controllers. Mixed mode allows Windows NT and Windows 2000 backup Domain Controllers to co-exist in a domain. Mixed mode does not support the universal and nested group enhancements of Windows 2000. The domain mode setting can be changed to Windows 2000 native mode when all Windows NT Domain Controllers are removed from a domain.
Native mode: the condition in which all Domain Controllers in the domain have been upgraded to Windows 2000 and an administrator has enabled native mode operation (through Active Directory Users and Computers).
To check whether the domain is running in mixed or native mode, complete the following steps:
- Select Start -> Programs -> Admin Tools -> Active Directory Users and Computers
- In the Domain Properties, under Domain Operation mode, the domain information will be listed.
The corresponding domain information can also be displayed from the storage system console:
filer> cifs domaininfo
| 0.9948
|
FineWeb
|
Over the last decade, the Australian Land Conservation Alliance (ALCA) has grown from an informal coalition into Australia’s peak national body representing organisations that work to conserve, manage, and restore nature on private land.
We represent and advocate for our members and supporters, fostering the growth of private land conservation and enhancing its impact, capacity, and influence, and ultimately contributing to a healthier and more resilient Australia.
ALCA is at the helm of an expanding membership that is actively addressing some of the nation’s most critical conservation challenges. This includes initiatives that restore endangered ecosystems, build the protected area estate, combat invasive species, expand conservation finance, and deploy nature-based solutions to mitigate climate change.
In a time where nature decline and biodiversity loss threaten life as we know it, the need for collective action and systemic change is fundamental. ALCA plays a crucial role in supporting and enabling its members to scale their impact by advocating for good policy, securing significant investment, fostering a capable sector, cultivating a pipeline of leaders, and, at the core, building a community that understands and values the role of private land conservation.
Together, we are a growing force for nature.
| 0.6339
|
FineWeb
|
Supply of measuring equipment
Company "Komdiagnostika" is accredited to provide verification and calibration works for vibroacoustic measuring devices (vibration meters and vibration transducers).
To answer the requirements to the quality of calibration and verification of measuring and diagnostic devices, the following resources are being used:
- calibration devices;
- regulatory documents, specifying the organization and execution of calibration and verification works;
- rooms, corresponding to the necessary demands;
- qualified personnel.
All metrology specialists have higher education in the field of automation systems, professional qualification and experience in calibration and verification of measuring devices.
| 0.9313
|
FineWeb
|
Don't take it personally? Sometimes it IS personal!
We hear so often, "Don't take it personally." What does this really mean? The answer is NOT simple!
Let's say you are in a great mood, feeling loving and expansive, and someone—either someone close to you or a stranger like a clerk in a store—is withdrawn or attacking.
This is when it is important not to take it personally. Their behavior is coming from whatever is going on for them—they are tired, not feeling well, feeling inadequate, angry from a previous interaction, judging themselves, coming from their own fears of rejection or engulfment, and so on. When you take their behavior personally, it is because you want to believe that you have some control over their behavior. You want to believe that if only you were different, they wouldn't treat you badly. This is a huge false belief, as you have no control over what is going on with them, and their behavior has nothing to do with you.
On the other hand, let's say you are in your ego wounded self, and you are shut down, harsh, attacking, blaming or people-pleasing. When this is the case, if others are also shut down or attacking, their behavior might be personal to a certain extent. They might be taking your behavior personally and reacting to it from their own ego wounded self. While you are not causing them to react with withdrawal or attack—it is the fact that they are taking your behavior personally that is causing them to react—you are also not innocent in the interaction. So it is always important to notice your own open or closed energy to see whether their behavior is not at all about you, or whether they are being reactive with you.
Another scenario to be aware of: if you are open and loving and another is closed and harsh, their behavior DOES affect you. Even if you do not take their behavior personally, their unloving behavior can cause some deeper core feelings of loneliness, helplessness, heartache, heartbreak and sadness. Taking their behavior personally may be a way to cover over these deeper painful feelings, because when you tell yourself that their behavior is your fault, then you might feel anxious, depressed, guilty or shamed. As bad as these feelings feel, they are actually easier to feel because you are the one causing them by taking their behavior personally.
Likewise, if you are the withdrawn or harsh one, and a person close to you is not taking your behavior personally but is feeling their own core painful feelings caused by your unloving behavior, they may choose not to be with you while you are withdrawn or attacking. In this case, it is important that you DO take their behavior personally and explore what you are doing that is resulting in exactly what you likely don't want: their moving away from you.
The bottom line is that if you are being open and loving, then it is important to never take another's behavior personally. If you are operating from your wounded self and are withdrawn or attacking, then you might want to explore your own behavior when others are also withdrawn, attacking, or when they disengage from you because they don't want to be around you. Your open and loving behavior is NEVER the cause of another's unloving behavior. Your closed, withdrawn or harsh behavior is also not the cause of their closed, withdrawn or harsh behavior, but can be the cause of them not wanting to be with you, and it is important to open to learning about your own withdrawn or harsh behavior.
To begin learning how to love and connect with yourself so that you can connect with your partner and others, take advantage of our free Inner Bonding eCourse, receive Free Help, and take our 12-Week home study eCourse, "The Intimate Relationship Toolbox" – the first two weeks are free!
Connect with Margaret on Facebook.
This article was originally published at Inner Bonding. Reprinted with permission from the author.
| 0.8384
|
FineWeb
|
20040063100 | Nanoneedle chips and the production thereof | April, 2004 | Wang
20060079012 | Method of manufacturing carbon nanotube field emission device | April, 2006 | Jeong et al.
20080150156 | Stacked die package with stud spacers | June, 2008 | Lin et al.
20080304821 | Camera module package and method of manufacturing the same | December, 2008 | Jeung et al.
20100072619 | WIRE BONDING STRUCTURE AND MANUFACTURING METHOD THEREOF | March, 2010 | Tzu
20080026550 | Laser doping of solid bodies using a linear-focussed laser beam and production of solar-cell emitters based on said method | January, 2008 | Werner et al.
20090309177 | Wafer level camera module and method of manufacturing the same | December, 2009 | Jeung et al.
20090016202 | METHOD OF PRODUCING A PHOTOELECTRIC TRANSDUCER AND OPTICAL PICK UP | January, 2009 | De Oliveira et al.
20080102573 | CMOS device with raised source and drain regions | May, 2008 | Liang et al.
20040175939 | Susceptor apparatus for inverted type MOCVD reactor | September, 2004 | Nakamura et al.
20050046015 | Array-molded package heat spreader and fabrication method therefor | March, 2005 | Shim et al.
This invention generally relates to homogenous mixing of fluids and more particularly to a method and apparatus for achieving a homogeneously mixed solution bath, particularly useful in wet etching processes in semiconductor wafer manufacturing processes.
In the field of semiconductor wafer processing it is common practice to immerse the semiconductor wafer in a solution bath for purposes of, for example, cleaning the wafer process surface or conducting an etching process to remove a selected portion of material from the wafer process surface. The cleaning or etching process is frequently quite sensitive to slight variations in concentration or solubility of the solution. Various types of mixing processes have been in use in other fields, such as mechanically driven mixers, where a mechanical source of energy is imparted to stirring members immersed in the solution. In addition, some mixers rely on passing a flow of pressurized gas into a solution, where the buoyancy of the gas bubbles created is relied on for mixing. Yet other methods rely on re-circulation of the solution through a solution container, where flow turbulences are created to impart mixing.
Traditional methods of mixing have been found to be inadequate in semiconductor manufacturing. Prior art methods typically rely on the creation of turbulent volume portions within the fluid to mix miscible fluids into a homogeneous solution. The homogeneity of mixing is generally limited by the volumetric size of the turbulent disturbances, for example eddy currents, created in the solution by the mixing means: the larger the volumetric size of the turbulent disturbances, the lower the level of homogeneity in the solution. Local concentration gradients are created within the turbulent disturbance volumes, so that, for example, in a wet etching process localized volume portions of the solution carry concentration gradients which, upon contacting an immersed substrate, result in localized transient non-uniformities in etching rates over the substrate surface. In the semiconductor wafer processing art, where features are on the order of 0.25 microns and less, such localized non-uniformities in etching rates are undesirable.
For example, in a gate oxide formation process, for example following shallow trench isolation formation, a silicon nitride layer is removed according to a hot phosphoric acid wet etching process. The uniformity of the etching process is in many cases critical to subsequent processes to form a reliably functioning transistor overlying the silicon semiconductor wafer. Since hot phosphoric acid is selective to silicon nitride etching, an underlying thin silicon oxide layer acts to protect the silicon substrate from contamination. During the wet etching process, as the silicon nitride etching proceeds, solvated silicon and silicon dioxide form as a chemical reaction byproduct of silicon nitride etching, which in the case of inadequate mixing, forms localized volumetric portions adjacent the wafer surface where the solubility limit of silicon dioxide is reached. Undesirably, when the solubility limit of silicon dioxide is reached, silicon dioxide frequently precipitates by nucleation onto the wafer surface where it may readily subsequently grow into larger particles. As a result, the reliability of semiconductor devices is severely compromised, frequently resulting in the rejection of semiconductor wafers and adversely affecting wafer yield.
Thus, there is a need in the semiconductor manufacturing art for a reliable method and apparatus to achieve an acceptable level of mixing homogeneity in wafer processing solutions.
It is therefore an object of the invention to provide a reliable method and apparatus to achieve an acceptable level of mixing homogeneity in wafer processing solutions while overcoming other shortcomings and deficiencies of the prior art.
To achieve the foregoing and other objects, and in accordance with the purposes of the present invention, as embodied and broadly described herein, the present invention provides a method and apparatus for mixing a fluid to form a homogeneous mixing volume.
In a first embodiment, the method includes providing at least two aspiration members at least partially immersed in a solution, each of the at least two aspiration members including an aspiration surface having a plurality of aspiration openings for injecting a pressurized gas flow into the solution to produce a plurality of flow vortices, the aspiration surfaces disposed in opposing gas flow relationship and spaced apart to define an aspiration treatment volume so as to produce intersecting flow vortices within the aspiration treatment volume; providing a pressurized gas flow to at least a first aspiration member to produce a first plurality of flow vortices; and adjusting the pressurized gas flow to at least a second aspiration member to produce a second plurality of flow vortices to form a homogeneous mixing volume within a portion of the solution comprising the intersecting flow vortices.
These and other embodiments, aspects and features of the invention will be better understood from a detailed description of the preferred embodiments of the invention which are further described below in conjunction with the accompanying Figures.
Although the method of the present invention in exemplary implementation of the mixing apparatus of the present invention is explained with respect to, and is particularly advantageously used in the semiconductor processing art including wet etching processes, it will be appreciated that the method and apparatus of the present invention may be used in any process where a homogeneous mixing zone may be created within a fluid for advantageously affecting a process, including selectively varying the homogeneous mixing zone over a substrate surface.
(The detailed description of the preferred embodiments at this point refers to accompanying figures that are not reproduced here.)
Thus, a mixing apparatus for aspirated mixing of a chemical treatment solution has been presented for producing a mixing zone with improved homogeneous fluid mixing. The apparatus is particularly useful in semiconductor etching or cleaning processes, for example in a hot phosphoric acid etching process for removing silicon nitride. The homogeneous fluid mixing zone reduces concentration gradients in the solution, thereby preventing nucleation and growth of chemical species such as silicon dioxide. The dynamic stagnation zone mixing system and method can homogeneously mix a large volume of fluid at a relatively lower energy cost than mechanical mixing means and achieves superior homogeneity compared to prior art aspirated mixing means. It has the further benefits of being easily maintained and cleaned, which increases process throughput.
The preferred embodiments, aspects, and features of the invention having been described, it will be apparent to those skilled in the art that numerous variations, modifications, and substitutions may be made without departing from the spirit of the invention as disclosed and further claimed below.
| 0.524
|
FineWeb
|
Whenever faced with a bed bug problem, the question must be asked: how can I safely solve this problem? IPM, or Integrated Pest Management, is a great option.
According to the EPA, integrated pest management is "an effective and environmentally sensitive approach to pest management that relies on a combination of common-sense practices. IPM programs use current, comprehensive information on the life cycles of pests and their interaction with the environment. This information, in combination with available pest control methods, is used to manage pest damage by the most economical means, and with the least possible hazard to people, property, and the environment."
Currently the safest and most effective bed bug IPM strategy is the Fire/Ice™ type. Using a combination of carbon dioxide and heat treatment, bed bugs can be wiped out without the hazards of chemical pesticides.
Curious about IPM? Contact Hart-Shegos Inspection Services
| 0.6128
|
FineWeb
|
NB: We pull threads based on an estimate of thread use from over 30 years of experience. If your kit or canvas order contains threads, and for whatever reason, you find you need more, there may be an additional charge.
- a handpainted needlepoint canvas featuring six adorable dogs, by Laurie Ludwin from Julie Mar Designs.
- The cartoon-style of this design makes it easy to stitch and suitable for a beginner stitcher.
- The design is on 13 mesh canvas and measures 12" x 9".
- If you require stretcher bars we recommend a pair of 16" and a pair of 13" and some thumbtacks.
- This whimsical puppies needlepoint is sold canvas-only or as a kit. We use Planet Earth luxury fibers for orders requesting wool or silk threads and DMC embroidery floss if cotton is selected.
- The needlepoint canvas is usually in stock and ships in a few days.
| 0.8679
|
FineWeb
|
Context Although corticosteroids are widely used to relieve cancer-related fatigue (CRF), information regarding the factors predicting responses to corticosteroids remains limited. Objectives The aim of this study was to identify potential factors predicting responses to corticosteroids for CRF in advanced cancer patients. Methods Inclusion criteria for this multicenter, prospective, observational study were patients who had metastatic or locally advanced cancer and had a fatigue intensity score of 4 or more on a 0–10 Numerical Rating Scale (NRS). Univariate and multivariate analyses were conducted to identify the factors predicting a reduction of two points or more in NRS on day 3. Results Among 179 patients who received corticosteroids, 86 (48%; 95% CI 41%–56%) had a response of two points or more. Factors that significantly predicted responses were a performance status score of 3 or more, a Palliative Performance Scale score of more than 40, absence of ascites, absence of drowsiness, absence of depression, a serum albumin level greater than 3 g/dL, a serum sodium level greater than 135 mEq/L, and a baseline NRS score greater than 5. A multivariate analysis showed that the independent factors predicting responses were a baseline NRS score greater than 5 (odds ratio [OR] 6.6, 95% CI 2.8–15.4), a Palliative Performance Scale score of more than 40 (OR 4.4, 95% CI 2.1–9.3), absence of drowsiness (OR 3.4, 95% CI 1.7–6.9), absence of ascites (OR 2.3, 95% CI 1.1–4.7), and absence of pleural effusion (OR 2.2, 95% CI 1.0–5.0). Conclusion Treatment responses to corticosteroids for CRF may be predicted by baseline symptom intensity, performance status, drowsiness, and the severity of fluid retention symptoms. Larger prospective studies are needed to confirm these results.
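The adjusted odds ratios above come from a multivariate logistic regression. Purely as an illustration of how such ORs and confidence intervals are obtained (synthetic data and hypothetical variable names; this is not the study's dataset or analysis code), a sketch in Python with statsmodels:

```python
# Illustrative only: odds ratios (OR) and 95% CIs from a multivariate logistic
# regression, obtained as exponentiated coefficients. Data and names are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 179  # same size as the cohort above, but the values are synthetic
df = pd.DataFrame({
    "baseline_nrs_gt5": rng.integers(0, 2, n),
    "pps_gt40":         rng.integers(0, 2, n),
    "no_drowsiness":    rng.integers(0, 2, n),
    "no_ascites":       rng.integers(0, 2, n),
    "no_effusion":      rng.integers(0, 2, n),
})
# Synthetic outcome: responder = 1 if fatigue NRS fell by two points or more on day 3
logit_p = -1.0 + 1.9 * df["baseline_nrs_gt5"] + 1.5 * df["pps_gt40"] + 1.2 * df["no_drowsiness"]
df["responder"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df.drop(columns="responder"))
fit = sm.Logit(df["responder"], X).fit(disp=0)

ors = np.exp(fit.params)     # OR = exp(coefficient)
ci = np.exp(fit.conf_int())  # exponentiate the coefficient confidence limits
print(pd.concat([ors.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```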
- palliative care
ASJC Scopus subject areas
- Clinical Neurology
- Anesthesiology and Pain Medicine
| 0.9749
|
FineWeb
|
Discussion 1: The Modern Presidency
Please note: This discussion will be graded and you need to post twice: one original post and one post responding to someone else. Please cite pages and consider using direct quotes from the reading to support your points. Be considerate in your responses to your peers (and follow the norms of ‘netiquette’ listed in the introduction module). Please post both of your posts by the end of the day on July 27.
Chapter 1 from Grover (from his book, The President as Prisoner) describes the rise of the modern presidency as a fundamental shift in the nature of the office (see also Pika et al., Chapter 1 from Politics of the Presidency).
So, what is the modern presidency? Please write a post exploring the rise and nature of the modern presidency compared to the earlier presidency. Your answer should consider some specific insights from the readings. Please discuss at least one of the following:
A. What led to the emergence of the modern presidency and how does it differ from the earlier ‘traditional’ presidency?
B. What does Laski see as the obstacles to a powerful ‘modern’ presidency before the 1930s? Do some of these still apply even if the presidency has become more powerful?
C. What is Rossiter’s approach to understanding the modern presidency? Why is he confident this modern office will not become dangerously powerful?
D. How does Neustadt understand presidential power, even in the modern presidency? What does it take for a president to be successful in handling the modern office? How does Neustadt’s view potentially challenge or go beyond Rossiter?
| 0.9998
|
FineWeb
|
“As the sea swallowed the sun once more, a tear fell from her eye and dissolved into the sand beneath us.
“What is it, why the tears?”, I asked
and with a smile that nearly leaped across her cheeks,
she answered: “Here comes the moon.”
I knew that moment that it is and always will be the simple things that plant the most phenomenal truths inside us.”
– Christopher Poindexter
| 0.9888
|
FineWeb
|
A focused antibody library for improved hapten recognition
Publication/Journal/Series: Journal of Molecular Biology
Publisher: Elsevier Science Ltd.
The topography of the antigen-binding site as well as the number and the positioning of the antigen contact residues are strongly correlated with the size of the antigen with which the antibody interacts. On the basis of these considerations, we have designed a focused scFv repertoire biased for haptens, designated the cavity library. The hapten-specific scFv, FITC8, was used as a scaffold for library construction. FITC8, like other hapten binders, displays a characteristic cavity in its paratope into which the hapten binds. In five of the six complementarity-determining regions, diversity-carrying residues were selected rationally on the basis of a model structure of FITC8 and on known antibody structure-function relationships, resulting in variation of 11 centrally located, cavity-lining residues. L3 was allowed to carry a more complex type of diversity. In addition, length variation was introduced into H2, as longer versions of this loop have been shown to correlate with increased hapten binding. The library was screened, using phage display, against a panel of five different haptens, yielding diverse and highly specific binders to four of the antigens. Parallel selections were performed with a library having diversity spread onto a greater area, including more peripherally located residues. This resulted in the isolation of binders, which, in contrast to the clones selected from the cavity library, were not able to bind to the soluble hapten in the absence of the carrier protein. Thus, we have shown that by focusing diversity to the hotspots of interaction a library with improved hapten-binding ability can be created. The study supports the notion that it is possible to create antibody libraries that are biased for the recognition of antigens of pre-defined size. (c) 2006 Elsevier Ltd. All rights reserved.
- Medicine and Health Sciences
- focused diversity
- antibody evolution
- antibody library
- ISSN: 0022-2836
| 0.5414
|
FineWeb
|
This is vector calculus. I am working on applying Stokes' theorem to different three-dimensional surfaces. Since the theorem involves a line integral along the surface's bounding curve, we must first find what that boundary is.
For a unit half sphere, a cylinder and a cone, the bounding curve is the same: the unit circle in the x-y plane.
Why is this?
Why is that the boundary and not the rest?
Also, why is it that, when converting to spherical polar coordinates, the limits of integration of the theta variable are 0 to pi/2 and not pi?
I don't know if I expressed myself clearly enough,
thanks a million!
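For what it is worth, here is a short worked sketch of the second question, assuming the surface is the upper unit hemisphere z >= 0:

```latex
% Parametrize the upper unit hemisphere in spherical coordinates, with the
% polar angle \theta measured down from the positive z-axis:
\mathbf{r}(\theta,\varphi) = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta),
\qquad 0 \le \theta \le \tfrac{\pi}{2},\quad 0 \le \varphi < 2\pi .
% Letting \theta run all the way to \pi would cover the whole sphere, a closed
% surface with no boundary at all. Stopping at \theta = \pi/2 (where
% z = \cos\theta = 0) leaves the boundary curve
\partial S:\quad x^{2} + y^{2} = 1,\ z = 0,
% i.e. the unit circle in the x-y plane. The cylinder and the cone in the
% problem share this same boundary circle, so Stokes' theorem
% \oint_{\partial S}\mathbf{F}\cdot d\mathbf{r} = \iint_{S}(\nabla\times\mathbf{F})\cdot d\mathbf{S}
% gives the same line integral for all three surfaces.
```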
| 0.8175
|
FineWeb
|
After resetting a password, please remember to update any saved passwords on computers and mobile devices.
If the password prompts persist, clear any saved credentials and remove any outdated mobile devices as per the instructions below.
If there are saved passwords on the computer, the computer may be automatically entering an old password during the logon process. To clear saved credentials:
- Open "Control Panel".
- Next to "View by", select "Large icons". Click on "Credential Manager".
- Click "Windows Credentials".
- Under "Generic Credentials", find the entry that includes the OneNet email address. Click on the arrow at the end of the row.
- Select "Edit", and type in the most recent password.
If you have accessed your mailbox from several different mobile devices, please remove any devices you no longer use from your list of active devices:
Outlook Web App
- Browse to https://owa.onenet.co.nz and login with your OneNet credentials.
- In the top right-hand corner, click the Tools icon and select "Options".
- In the left-hand menu, under "General", select "Mobile Devices".
- Select any devices you are no longer using, and click the remove icon to stop the device from syncing with your account.
| 0.6042
|
FineWeb
|
Save time, empower your teams and effectively upgrade your processes with access to this practical PandC Data Platforms Toolkit and guide. Address common challenges with best-practice templates, step-by-step work plans and maturity diagnostics for any PandC Data Platforms related project.
Download the Toolkit and in Three Steps you will be guided from idea to implementation results.
The Toolkit contains the following practical and powerful enablers with new and updated PandC Data Platforms specific requirements:
STEP 1: Get your bearings
- The latest quick edition of the PandC Data Platforms Self Assessment book in PDF containing 49 requirements to perform a quickscan, get an overview and share with stakeholders.
Organized in a data driven improvement cycle RDMAICS (Recognize, Define, Measure, Analyze, Improve, Control and Sustain), check the…
- Example pre-filled Self-Assessment Excel Dashboard to get familiar with results generation
Then find your goals…
STEP 2: Set concrete goals, tasks, dates and numbers you can track
Featuring 654 new and updated case-based questions, organized into seven core areas of process design, this Self-Assessment will help you identify areas in which PandC Data Platforms improvements can be made.
Examples; 10 of the 654 standard requirements:
- What is the source of the strategies for PandC Data Platforms strengthening and reform?
- Will PandC Data Platforms have an impact on current business continuity, disaster recovery processes and/or infrastructure?
- Are there any constraints known that bear on the ability to perform PandC Data Platforms work? How is the team addressing them?
- Is a solution implementation plan established, including schedule/work breakdown structure, resources, risk management plan, cost/budget, and control plan?
- Is data collected on key measures that were identified?
- Why are PandC Data Platforms skills important?
- Are stakeholder processes mapped?
- What is an unallowable cost?
- Has everyone on the team, including the team leaders, been properly trained?
- Whom among your colleagues do you trust, and for what?
Complete the self assessment, on your own or with a team in a workshop setting. Use the workbook together with the self assessment requirements spreadsheet:
- The workbook is the latest in-depth complete edition of the PandC Data Platforms book in PDF containing 654 requirements, which criteria correspond to the criteria in…
Your PandC Data Platforms self-assessment dashboard which gives you your dynamically prioritized projects-ready tool and shows your organization exactly what to do next:
- The Self-Assessment Excel Dashboard; with the PandC Data Platforms Self-Assessment and Scorecard you will develop a clear picture of which PandC Data Platforms areas need attention, which requirements you should focus on and who will be responsible for them:
- Shows your organization instant insight into areas for improvement: auto-generates reports, a radar chart for maturity assessment (a minimal illustration of such a chart follows this list), insights per process and participant, and a bespoke, ready-to-use RACI Matrix
- Gives you a professional Dashboard to guide and perform a thorough PandC Data Platforms Self-Assessment
- Is secure: Ensures offline data protection of your Self-Assessment results
- Dynamically prioritized projects-ready RACI Matrix shows your organization exactly what to do next:
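The dashboard itself is Excel-based; purely to illustrate what a maturity radar chart over the seven RDMAICS areas conveys, here is a minimal Python sketch with made-up scores (the area names come from the cycle described above; everything else is hypothetical):

```python
# Illustrative only: a radar (spider) chart of self-assessment maturity scores
# across the seven RDMAICS areas. The scores below are invented placeholders.
import numpy as np
import matplotlib.pyplot as plt

areas = ["Recognize", "Define", "Measure", "Analyze", "Improve", "Control", "Sustain"]
scores = [3.2, 4.1, 2.5, 3.8, 2.9, 3.5, 4.0]   # hypothetical 0-5 maturity ratings

angles = np.linspace(0, 2 * np.pi, len(areas), endpoint=False).tolist()
angles += angles[:1]                            # repeat the first point to close the polygon
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(areas)
ax.set_ylim(0, 5)
ax.set_title("RDMAICS maturity (illustrative)")
plt.show()
```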
STEP 3: Implement, Track, follow up and revise strategy
The outcomes of STEP 2, the self assessment, are the inputs for STEP 3; Start and manage PandC Data Platforms projects with the 62 implementation resources:
- 62 step-by-step PandC Data Platforms Project Management Form Templates covering over 6000 PandC Data Platforms project requirements and success criteria:
Examples; 10 of the check box criteria:
- Issue Log: In classifying stakeholders, which approach to do so are you using?
- Process Improvement Plan: Has the time line required to move measurement results from the points of collection to databases or users been established?
- Procurement Audit: Are travel expenditures monitored to determine that they are in line with other employees and reasonable for the area of travel?
- Stakeholder Management Plan: Do PandC Data Platforms project managers participating in the PandC Data Platforms project know the PandC Data Platforms projects true status first hand?
- Responsibility Assignment Matrix: What tool can show you individual and group allocations?
- Quality Audit: How does the organization know that the research supervision provided to its staff is appropriately effective and constructive?
- Team Member Performance Assessment: To what degree are sub-teams possible or necessary?
- Quality Management Plan: Do trained quality assurance auditors conduct the audits as defined in the Quality Management Plan and scheduled by the PandC Data Platforms project manager?
- Project Scope Statement: Once its defined, what is the stability of the PandC Data Platforms project scope?
- Procurement Management Plan: Are the results of quality assurance reviews provided to affected groups & individuals?
Step-by-step and complete PandC Data Platforms Project Management Forms and Templates including check box criteria and templates.
1.0 Initiating Process Group:
- 1.1 PandC Data Platforms project Charter
- 1.2 Stakeholder Register
- 1.3 Stakeholder Analysis Matrix
2.0 Planning Process Group:
- 2.1 PandC Data Platforms project Management Plan
- 2.2 Scope Management Plan
- 2.3 Requirements Management Plan
- 2.4 Requirements Documentation
- 2.5 Requirements Traceability Matrix
- 2.6 PandC Data Platforms project Scope Statement
- 2.7 Assumption and Constraint Log
- 2.8 Work Breakdown Structure
- 2.9 WBS Dictionary
- 2.10 Schedule Management Plan
- 2.11 Activity List
- 2.12 Activity Attributes
- 2.13 Milestone List
- 2.14 Network Diagram
- 2.15 Activity Resource Requirements
- 2.16 Resource Breakdown Structure
- 2.17 Activity Duration Estimates
- 2.18 Duration Estimating Worksheet
- 2.19 PandC Data Platforms project Schedule
- 2.20 Cost Management Plan
- 2.21 Activity Cost Estimates
- 2.22 Cost Estimating Worksheet
- 2.23 Cost Baseline
- 2.24 Quality Management Plan
- 2.25 Quality Metrics
- 2.26 Process Improvement Plan
- 2.27 Responsibility Assignment Matrix
- 2.28 Roles and Responsibilities
- 2.29 Human Resource Management Plan
- 2.30 Communications Management Plan
- 2.31 Risk Management Plan
- 2.32 Risk Register
- 2.33 Probability and Impact Assessment
- 2.34 Probability and Impact Matrix
- 2.35 Risk Data Sheet
- 2.36 Procurement Management Plan
- 2.37 Source Selection Criteria
- 2.38 Stakeholder Management Plan
- 2.39 Change Management Plan
3.0 Executing Process Group:
- 3.1 Team Member Status Report
- 3.2 Change Request
- 3.3 Change Log
- 3.4 Decision Log
- 3.5 Quality Audit
- 3.6 Team Directory
- 3.7 Team Operating Agreement
- 3.8 Team Performance Assessment
- 3.9 Team Member Performance Assessment
- 3.10 Issue Log
4.0 Monitoring and Controlling Process Group:
- 4.1 PandC Data Platforms project Performance Report
- 4.2 Variance Analysis
- 4.3 Earned Value Status
- 4.4 Risk Audit
- 4.5 Contractor Status Report
- 4.6 Formal Acceptance
5.0 Closing Process Group:
- 5.1 Procurement Audit
- 5.2 Contract Close-Out
- 5.3 PandC Data Platforms project or Phase Close-Out
- 5.4 Lessons Learned
With this Three Step process and in-depth PandC Data Platforms Toolkit, you will have all the tools you need for any PandC Data Platforms project.
In using the Toolkit you will be better able to:
- Diagnose PandC Data Platforms projects, initiatives, organizations, businesses and processes using accepted diagnostic standards and practices
- Implement evidence-based best practice strategies aligned with overall goals
- Integrate recent advances in PandC Data Platforms and put process design strategies into practice according to best practice guidelines
Defining, designing, creating, and implementing a process to solve a business challenge or meet a business objective is the most valuable role in EVERY company, organization, and department.
Unless you are talking about a one-time, single-use project within a business, there should be a process. Whether that process is managed and implemented by humans, AI, or a combination of the two, it needs to be designed by someone with a broad enough perspective to ask the right questions: someone capable of stepping back and asking, ‘What are we really trying to accomplish here? And is there a different way to look at it?’
This Toolkit empowers people to do just that. Whether their title is entrepreneur, manager, consultant, (Vice-)President, or CxO, they are the people who rule the future, the ones who ask the right questions to make PandC Data Platforms investments work better.
This PandC Data Platforms All-Inclusive Toolkit enables You to be that person:
Includes lifetime updates
Every self assessment comes with Lifetime Updates and Lifetime Free Updated Books. Lifetime Updates is an industry-first feature which allows you to receive verified self assessment updates, ensuring you always have the most accurate information at your fingertips.
| 0.9096
|
FineWeb
|
Clearwater Beach, FL, USA
Oct. 22, 2012 to Oct. 25, 2012
Hyunbum Kim , Department of Computer Science, The University of Texas at Dallas, Richardson, 75083-0688, USA
Jorge A. Cobb , Department of Computer Science, The University of Texas at Dallas, Richardson, 75083-0688, USA
Wireless sensor and actor networks (WSANs) are composed of static sensor nodes and mobile actor nodes. We assume actors have a random initial location in the two-dimensional sensing area. The objective is to move each actor to a location such that every sensor node is within a bounded number of hops from some actor. Because sensor nodes have limited energy, the new actor locations are chosen as to minimize the transmission range required from the sensor nodes. However, actors also have a limited (although larger) power supply, and their movement depletes their resources. It follows that by carefully choosing the new actor locations, the total actor movement can be minimized. In this paper, we study the trade-off between minimizing sensor transmission radius and minimizing actor movement. Due to the complexity of the problem, we introduce an optimal ILP formulation, and compare its results against a proposed heuristic. For the ILP solution to be feasible, we introduce a finite set of potential actor positions such that an optimal solution is guaranteed to be found within this set.
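The paper's exact ILP is not reproduced in this summary, but the idea of optimizing over a finite set of candidate actor positions lends itself to a compact integer program. Below is a minimal, hedged sketch (not the authors' formulation) using the PuLP modelling library; the sensor and actor coordinates, the candidate grid, the single-hop coverage proxy, and the trade-off weight alpha are all illustrative assumptions.

```python
# Minimal sketch (not the authors' formulation) of the radius-vs-movement trade-off as an
# ILP over a finite set of candidate actor positions, using PuLP. Sensor/actor coordinates,
# the candidate grid, the 1-hop coverage proxy, and the weight alpha are illustrative assumptions.
import itertools
import math
import pulp

sensors = [(1.0, 1.0), (4.0, 2.0), (2.5, 4.0)]                     # static sensor positions
actors = [(0.0, 0.0), (5.0, 5.0)]                                  # initial actor positions
candidates = list(itertools.product([0.0, 2.5, 5.0], repeat=2))    # candidate actor sites
alpha = 0.7                                                        # trade-off weight

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

prob = pulp.LpProblem("actor_placement", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", range(len(candidates)), cat="Binary")
assign = pulp.LpVariable.dicts(
    "assign", [(s, c) for s in range(len(sensors)) for c in range(len(candidates))], cat="Binary")
move = pulp.LpVariable.dicts(
    "move", [(a, c) for a in range(len(actors)) for c in range(len(candidates))], cat="Binary")
radius = pulp.LpVariable("radius", lowBound=0)

# Objective: weighted sum of the common sensor transmission radius and total actor movement.
prob += alpha * radius + (1 - alpha) * pulp.lpSum(
    dist(actors[a], candidates[c]) * move[(a, c)]
    for a in range(len(actors)) for c in range(len(candidates)))

for s in range(len(sensors)):
    # each sensor is served by exactly one opened site, and the radius must cover that link
    prob += pulp.lpSum(assign[(s, c)] for c in range(len(candidates))) == 1
    for c in range(len(candidates)):
        prob += assign[(s, c)] <= open_site[c]
        prob += radius >= dist(sensors[s], candidates[c]) * assign[(s, c)]

for a in range(len(actors)):
    # each actor relocates to exactly one candidate site
    prob += pulp.lpSum(move[(a, c)] for c in range(len(candidates))) == 1
for c in range(len(candidates)):
    # a site is open exactly when one actor moves there
    prob += pulp.lpSum(move[(a, c)] for a in range(len(actors))) == open_site[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("radius:", radius.value())
```

The weight alpha plays the role of the trade-off studied in the paper: values near 1 favour a small common transmission radius, while values near 0 favour short actor movements.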
Optimization, Mobile communication, Complexity theory, Spread spectrum communication, Batteries, Indexes, Linear programming
Hyunbum Kim, Jorge A. Cobb, "Optimization trade-offs in the design of wireless sensor and actor networks", in Proceedings of the 37th Annual IEEE Conference on Local Computer Networks (LCN 2012), pp. 559-567, doi:10.1109/LCN.2012.6423675
| 0.7829
|
FineWeb
|
Intercellular invasion is the active migration of cells of one type into the interiors of tissues composed of cells of dissimilar cell types. Contact paralysis of locomotion is the cessation of forward extension of the pseudopods of a cell as a result of its collision with another cell. One hypothesis to account for intercellular invasion proposes that a necessary condition for a cell type to be invasive to a given host tissue is that it lack contact paralysis of locomotion during collision with cells of that host tissue. The hypothesis has been tested using rabbit peritoneal neutrophil granulocytes (PMNs) as the invasive cell type and chick embryo fibroblasts as the host tissue. In organ culture, PMNs rapidly invade aggregates of fibroblasts. The behavior of the pseudopods of PMNs during collision with fibroblasts was analyzed for contact paralysis by a study of time-lapse films of cells in mixed monolayer culture. In monolayer culture, PMNs show little sign of paralysis of the pseudopods upon collision with fibroblasts and thus conform in their behavior to that predicted by the hypothesis.
| 0.9083
|
FineWeb
|
Journal of Control Science and Engineering
Volume 2013 (2013), Article ID 763165, 19 pages
Model Reduction Using Proper Orthogonal Decomposition and Predictive Control of Distributed Reactor System
1Faculty of Minas, National University of Colombia, 050041 Medellín, Colombia
2Chemical Engineering Department, University of São Paulo, 05508-900 São Paulo, SP, Brazil
Received 28 November 2012; Revised 28 February 2013; Accepted 15 March 2013
Academic Editor: James Lam
Copyright © 2013 Alejandro Marquez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper studies the application of proper orthogonal decomposition (POD) to reduce the order of distributed reactor models with axial and radial diffusion and the implementation of model predictive control (MPC) based on discrete-time linear time invariant (LTI) reduced-order models. The control objective is to keep the reactor at a desired operating condition in spite of disturbances in the feed flow. This operating condition is determined by means of an optimization algorithm that provides the optimal temperature and concentration profiles for the system. Around these optimal profiles, the nonlinear partial differential equations (PDEs) that model the reactor are linearized, and the linear PDEs are then discretized in space, giving a high-order linear model. POD and Galerkin projection are used to derive the low-order linear model that captures the dominant dynamics of the PDEs and that is subsequently used for controller design. An MPC formulation is constructed on the basis of the low-order linear model. The proposed approach is tested through simulation, and the results show that it performs well in keeping the reactor at the desired operating condition.
Advances in computing power have propelled the development of increasingly detailed and precise process models, which are then used in design, optimization, monitoring, and fault diagnosis, among other tasks. Distributed chemical reactors are usually described by partial differential equations (PDEs) that show the space-time evolution of the variables of interest. In order to simulate PDEs, the spatial domain is discretized, yielding a large number of ordinary differential equations (ODEs); however, a fine discretization leads to an increase in model complexity. To reduce the complexity of the models, a technique based on the orthogonal decomposition of a set of measurements of physical quantities (such as temperature and concentration) is used to represent these quantities over space and time. This technique, proper orthogonal decomposition (POD), has been used to reduce the order of a large number of systems. The method is based on orthonormal basis functions generated from process data (a snapshot matrix) obtained by simulating or experimenting on the process; these data are gathered by exciting the process through manipulated variables, external inputs, and disturbances. The main idea is to select POD basis functions that capture the spatial dynamics of the system. The advantage of working with these basis functions is that the model order can be reduced from hundreds or thousands of states to a few tens. This reduction eases simulation, data assimilation, and optimization, and enables such models to be used in real-time applications.
There are several methods available for reducing the dimension of a system. The most immediate is a heuristic approach that consists of proposing an a priori solution to the equations of motion on the grounds of symmetry and boundary conditions. These solutions usually take the form of a truncated series in terms of general sets of orthogonal functions, such as Fourier modes or spherical harmonics. Antoulas proposes a classification into two main groups, namely, Krylov and singular value decomposition (SVD) methods. Krylov methods make use of iterations for finding approximations to large-scale dynamical systems. The SVD methods are based on the decomposition of the state vector into a set of vectors that can be ordered in a certain sense. These methods include balanced truncation and Hankel approximations for linear systems and the proper orthogonal decomposition (POD) methods and empirical gramians for nonlinear systems. In this work, we are concerned with the POD and its combination with Galerkin projection to produce reduced linear dynamical versions of the original large-scale system (the methodology used in this work is shown in Figure 1). The interested reader is referred to for a more complete account of the available reduction methods. The POD [2, 3] is a statistical technique to extract features from a given dataset by searching for patterns that optimally represent the dataset with respect to quantities such as variance or energy. The output of the POD is a set of time-independent orthogonal functions called empirical orthogonal functions (EOFs). Each EOF is associated with a certain amount of variance or energy. The first suggestion to use the POD in the analysis of dynamical systems was due to Lumley . One of the first phenomena to be studied by means of reduced models arising from the use of the POD was turbulence . Perhaps for nonlinear systems the most studied method is the POD in conjunction with Galerkin projections . However, this is not the only available method, and some other ideas have been proposed, such as a decomposition into principal interaction patterns (PIPs), which explicitly incorporates information about the system’s dynamics within the determination of the patterns [6, 7]. There are other techniques that might be suitable for developing low-order models such as the independent component analysis (ICA) . The ICA aims at answering questions such as, what signal comes from what source? This problem is known as the cocktail-party problem. The solution relies on the assumption of statistical independence of the sources and, therefore, of the signals. The outcome is a set of modes and a mixing matrix that sets the relationship between the modes in order to reconstruct the original mixed signal. The ICA modes are statistically independent but not necessarily orthogonal, and there is no specific order. Furthermore, they are not clearly related to any physical quantity that can help to decide what modes to retain and what to rule out. These features may constitute a disadvantage when trying to construct low-order models since a truncation using these modes would lack an appropriate parameter to measure the expected accuracy of the resulting model. Original algorithms assume linearity although some efforts have been made to extend the formalism to nonlinear systems. So far these ideas have not yet been fully investigated in the context of dynamical systems. Besides this introductory section, this paper has four other sections.
Section 2 describes the model reduction method that is used here. Section 3 describes the model of the nonisothermal tubular reactor that is used to illustrate the application of the proposed reduced and control methods. Section 4 addresses the control of a nonisothermal tubular reactor using a reduced model and MPC of infinite horizon. Finally, Section 5 concludes the paper.
2. Proper Orthogonal Decomposition (POD) and Galerkin Projection
Proper orthogonal decomposition (POD) is a powerful method for data analysis aimed at obtaining low-dimensional approximate descriptions of a high-dimensional process. POD provides a basis for the modal decomposition of an ensemble of functions, such as data obtained in the course of experiments or numerical simulations. The basis functions are commonly called empirical eigenfunctions, empirical basis functions, empirical orthogonal functions, proper orthogonal modes, or basis vectors. The most striking feature of the POD is its optimality: it provides the most efficient way of capturing the dominant components of an infinite-dimensional process with only a finite number of modes, and often surprisingly few modes. In general, there are two different interpretations for the POD. The first interpretation regards the POD as the Karhunen-Loeve decomposition (KLD), and the second one considers that the POD consists of three methods: the KLD, the principal component analysis (PCA), and the singular value decomposition (SVD). In recent years, there have been many reported applications of the POD methods in engineering fields such as in studies of turbulence [9–13], vibration analysis [14–18], process identification [19–22], and control in chemical engineering [23–32]. In general, POD is a methodology that first identifies the most energetic modes in a time-dependent system and subsequently provides a means of obtaining a low-dimensional description of the system’s dynamics where the low-dimensional system is obtained directly from the Galerkin projection of the governing equations on the empirical basis set (the POD modes).
2.1. General Procedure
Let be the state vector of a given dynamical system, and let with be the so-called snapshot matrix that contains a finite number of samples or snapshots of the evolution of at . In POD, we start assuming that each snapshot can be written as a linear combination of a set of ordered orthonormal basis vectors (POD basis vectors) : where is the coordinate of with respect to the basis vector (it is also called time-varying coefficient or POD coefficient) and denotes the Euclidean inner product. Since the first most relevant basis vectors capture most of the energy in the collected data set, we can construct an th-order approximation of the snapshots by means of the following truncated sequence:
In POD, the orthonormal basis vectors are calculated in such a way that the reconstruction of the snapshots using the first most relevant basis vectors is optimal in the sense that the sum-squared-error (SSE) between and , for all , is minimized. Herein denotes the -norm or the euclidean Norm. In other words, the POD basis vectors are the ones that solve the following constrained optimization problem: subject to
The aim of the previous constraints is to ensure the basis vector orthogonality. The orthonormal basis vectors that solve (4) can be found by calculating the singular value decomposition of the snapshot matrix (). The singular values of are positive real numbers that are ordered in a decreasing way, . These values quantify the importance of the basis vectors in capturing the information present in the data. Therefore, the first POD basis vector is the most relevant one, and the last POD basis vector is the least important element. For the application of POD to practical problems, the choice of the most relevant basis vectors is certainly of central importance. A criterion commonly used for choosing based on heuristic considerations is the so-called energy criterion . In this criterion, we check the ratio between the modelled energy and the total energy contained in The ratio is used to determine the truncation degree of the selected POD basis vectors. The number of POD basis elements should be chosen such that the fraction of the first singular values in (6) is large enough to capture most of the information in the data . An ad hoc rule frequently applied is that has to be determined for . The closer to , or similarly the closer to , the better the approximation of .
2.2. Galerkin Projection
The derivation of the dynamical model for the POD coefficients can be done in two ways, by using the Galerkin projection or by means of other system identification techniques. The Galerkin projection is the most common way of deriving the dynamical model for the POD coefficients, and it will be the method used in this work.
For explaining the ideas, we suppose that the dynamical behaviour of the high-dimensional system from which we want to find a reduced-order model is described by the following nonlinear model in state space form:
Let us define a residual function for (7) as follows: and let be the residual when the state vector is approximated by its th-order approximation: where and . In the Galerkin projection, the projection of the residual on the space spanned by the basis vectors vanishes, that is, where denotes the Euclidean inner product. Replacing by its th-order approximation in (7): and then we apply the inner product criterion (10) as follows: and given that because of the orthonormality of the basis vectors, we have the model for the POD coefficients reduce to
Finally, the reduced-order model of (10), with only a few retained states, has the form sketched below.
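A minimal sketch of the form such a Galerkin-projected reduced-order model typically takes, in the same assumed notation (\Phi_r the matrix of retained basis vectors, f the right-hand side of the full model (7), a the vector of POD coefficients), is:

```latex
% Assumed notation: \Phi_r = [\varphi_1 \cdots \varphi_r], a = (a_1,\dots,a_r)^T.
\[
  \dot{a}(t) = \Phi_r^{T}\, f\bigl(\Phi_r\, a(t),\, u(t)\bigr) ,
  \qquad a(0) = \Phi_r^{T}\, x(0) ,
  \qquad x(t) \approx \Phi_r\, a(t) .
\]
```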
3. Tubular Chemical Reactor with Axial and Radial Diffusion
In the following subsection we will describe in detail the kind of tubular reactor for which we will design and implement POD-based model predictive control (MPC) strategies.
3.1. Nonisothermal Tubular Reactor Model with Axial and Radial Diffusion, Reaction, and Convection
The system to be controlled is a nonisothermal tubular reactor with three phenomena: axial and radial diffusion, reaction and convection, where a reversible, second-order, exothermic, catalyzed reaction takes place (). The reactor is surrounded by 3 cooling/heating jackets as it is shown in Figure 2. The temperature of the jackets fluids (, and ) can be manipulated independently in order to control the concentration and temperature profiles in the reactor.
It is assumed that radial and axial variations in concentration and temperature are present; we also consider a laminar flow regime. In this study we neglect the heat transfer effects between the jacket fluids and the reactor wall. Under the previous assumptions, the mass balance of the species and the energy balance on the differential cylinder shown in Figure 3 produce the following equations: where
with the following boundary conditions:(i)radial: (1)at , we have symmetry and ,(2)at , the temperature flux to the wall on the reaction side equals the convective flux out of the reactor into shell side of the heat exchanger, (3)at , there is no mass flow through the tube walls ;(ii)axial: (1) and at , (2)at the outlet of the reactor , and .
Here, is the reactant concentration in [mol · m−3], is the reactant temperature in , is the diffusivity of all species in [m2 · s−1], is the thermal conductivity of the reaction mixture in [J · m−1 · s−1 · K−1], is the equilibrium constant at , is the fluid velocity in [m · s−1], is the heat of the reaction in [J · mol−1], and are the density in [kg · m−3] and the specific heat in [J · kg−1 · K−1] of the mix, respectively, is the rate constant in [m6 · mol−1 · s−1 · kg−1], is the activation energy in [J · mol−1], is the ideal gas constant in [J · mol−1 · K−1], is the heat transfer coefficient in [J · m−2 · s−1 · K−1], is the reactor length in [m], and are the concentration in [mol · m−3] and the temperature in [K] of the feed flow, is the axial coordinate in [m], is the radial coordinate in [m], is the time in [s], and is the reactor wall temperature in [K] which is defined as follows:
The temperature of the jacket sections , , and must be between 280 K and 330 K. In addition, the temperature inside the reactor must be below 400 K in order to avoid the formation of side products. The kinds of disturbances that affect the reactor are principally variations in temperature and concentration of the feed flow. Typically, such variations are in the range of ±10 K for the temperature and ±5% of the nominal value for the concentration. In this system, only the temperature of the feed flow is measured directly.
3.2. Operating Profile
The operating profiles (steady state concentration and temperature profiles) of the reactor are derived by means of an optimization algorithm, which minimizes an objective function subject to the steady state equations of the reactor described by (15) and the input and state constraints defined previously. The steady state model of the reactor is given by the following partial differential equations (PDEs):
with , , , , , , , and , where is a function that depends on the boundary conditions when . The discrete version of (19) can be found by replacing the spatial derivatives by first-order upwind and backward differences approximations as follows:
with where and are the number of sections in axial and radial directions in which the reactor is divided, and are the reactor sections defining the ending of the first and second jacket, respectively, and are normalization factors, and are the normalized concentration and temperature of the th section of the reactor, is the normalized reactor wall temperature of the th section, and and are the length and thickness of each section, respectively. The variables are normalized in order to avoid possible numerical problems. The optimization problem that is solved for deriving the operating profiles is defined as follows:
subject to where is the desired concentration (normalized) at the reactor output, is the desired temperature (normalized) inside the reactor of the th increment, is the concentration (normalized) at the reactor output, is a trade-off coefficient, and are the lower and upper temperature values of the fluids of the jackets, and is the maximum allowed temperature inside the tubular reactor. The objective function involves an inherent trade-off between minimising conversion and energy costs. The first term of the cost function corresponds to the squared error of the normalized concentration at the reactor output (terminal cost), and the second term is related to the mean squared error of the normalized temperature along the reactor (integral cost). In this problem, and was set to , was selected equal to the normalized temperature of the feed flow () for and . The trade-off parameter can take values between 0 and 1. When goes to 1, the reduction of the reactant concentration at the reactor output becomes more important than the temperature deviations. On the other hand when goes to , the temperature deviations become more important than the concentration at the reactor output and the risk of the formation of hot spots is reduced. To solve the optimization problem described by (22), an algorithm (a kind of sequential quadratic programming (SQP)) proposed by was used in this work.
The algorithm was executed in MATLAB with the following parameters: , , K, mol/m3, K, K, K, , and . The maximum allowed temperature () inside the reactor was chosen degrees below the actual limit (340 K) in order to give the feedback controller enough room of maneuverability. The trade-off coefficient was found by trial and error.
The algorithm was executed using different initial conditions. Across the experiments, a single local minimum was found. The operating point was given by , , and . The optimal concentration and temperature profiles can be observed in Figures 4 and 5.
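For readers who want to reproduce a similar profile optimization outside MATLAB, the sketch below sets up an analogous constrained problem with SciPy's SLSQP solver (an SQP-type method). The steady-state reactor model is a placeholder stub, and the bounds, trade-off weight, and target values are illustrative assumptions rather than the paper's values.

```python
# Sketch of setting up an analogous constrained steady-state optimization with SciPy's
# SLSQP solver (an SQP-type method). The steady-state reactor model is a placeholder stub,
# and the bounds, trade-off weight, and targets are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

T_LOW, T_HIGH, T_MAX = 280.0, 330.0, 335.0     # jacket bounds and in-reactor limit (assumed)
gamma = 0.8                                     # trade-off weight (assumed)
c_ref = 0.0                                     # desired normalized outlet concentration
T_ref = np.zeros(10)                            # desired normalized temperature deviations

def steady_state_profiles(Tj):
    """Placeholder for the reactor's steady-state model: jacket temperatures ->
    (normalized outlet concentration, normalized temperature deviations, absolute profile)."""
    c_out = 0.05 * np.exp(-0.01 * (np.mean(Tj) - 280.0))
    T_profile = 300.0 + 0.1 * (np.mean(Tj) - 280.0) * np.linspace(0.5, 1.0, 10)
    return c_out, (T_profile - 300.0) / 100.0, T_profile

def objective(Tj):
    c_out, T_norm, _ = steady_state_profiles(Tj)
    # terminal cost on outlet concentration plus integral cost on temperature deviations
    return gamma * (c_out - c_ref) ** 2 + (1.0 - gamma) * np.mean((T_norm - T_ref) ** 2)

def max_temp_margin(Tj):
    # must stay nonnegative: peak reactor temperature below the allowed limit
    return T_MAX - np.max(steady_state_profiles(Tj)[2])

result = minimize(objective, x0=np.array([300.0, 300.0, 300.0]), method="SLSQP",
                  bounds=[(T_LOW, T_HIGH)] * 3,
                  constraints=[{"type": "ineq", "fun": max_temp_margin}])
print(result.x, result.fun)
```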
3.3. Linear Model
The linear model of the tubular chemical reactor is obtained by linearizing (15) around the jackets temperatures and the operating profiles presented in Figure 4. This linear model is given by with the following boundary conditions:(i)radial:(1)at , we have symmetry and ,(2)at , (3)at ; ,(ii)axial:(1) and at , (2)at the outlet of the reactor , and .
Here, , , and are the deviations from the steady state of the concentration, temperature, and reactor wall temperature; , and are the steady state profiles (operating profiles) of the concentration, temperature, and reactor wall temperature, respectively. In order to reduce the infinite dimensionality of (24), the partial derivatives with respect to space are replaced by first-order upwind, and backward differences approximations and if we define the following vectors, then (24) can be cast as follows: where , , and are the matrices describing the system, is the state vector, is the vector of the inputs, and is the vector of the disturbances. It is important to mention that (27) is a stable linear system; that is, the operating profile around which the reactor is linearized is nominally stable. This property is very important in Section 4.2 in order to design the IHMPC controllers.
Since the spatial domain of the reactor is divided into sections, the number of states of (27) is equal to 1800. Given that such a large number of states makes the design and implementation of feedback controllers for the reactor difficult, a reduced-order model will be derived in the next section using POD and Galerkin projection.
3.4. Model Reduction for the Tubular Reactor Using POD
The derivation of a reduced order model of (27) was done in 5 steps. These steps are described in the following subsections.
3.4.1. Generation of the Snapshot Matrix
We have created a snapshot matrix from the system response () when independent step changes were made in the input and perturbation signals on the nonlinear model (15): Along the simulations, 10000 samples were collected using a sampling time of 1 s. The amplitude of the step changes was chosen in such a way as to produce changes of similar magnitude in the temperature and concentration profiles. This avoids a possible bias in the resulting model.
3.4.2. Derivation of the POD Basis Vectors
The POD basis vectors are obtained by computing the SVD of the snapshot matrix : where and are unitary matrices and is a matrix that contains the singular values of in a decreasing order on its main diagonal. The left singular vectors, that is, the columns of , are the POD basis vectors.
3.4.3. Selection of the Most Relevant POD Basis Vectors
The most relevant POD basis vectors are chosen using the energy criterion presented in Section 2.1. The plot of (see (6)) for the first 100 basis vectors is shown in Figure 6. In this problem, we chose the first POD basis vectors based on their truncation degree . The th order approximation of is given by the following truncated sequence: where and .
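A compact illustration of steps 3.4.1-3.4.3 (assembling a snapshot matrix, computing its SVD, and truncating by the energy criterion) is sketched below with NumPy. The snapshot data are synthetic placeholders; in the paper they come from simulating the nonlinear reactor model, and the 99% energy threshold used here is an assumption.

```python
# Sketch of steps 3.4.1-3.4.3 on synthetic data: assemble a snapshot matrix, take its SVD,
# and truncate by the energy criterion. The snapshots are random placeholders (in the paper
# they come from simulating the nonlinear reactor model) and the 99% threshold is assumed.
import numpy as np

n_states, n_snapshots = 1800, 1000
rng = np.random.default_rng(0)
X_snap = rng.standard_normal((n_states, 20)) @ rng.standard_normal((20, n_snapshots))

U, s, _ = np.linalg.svd(X_snap, full_matrices=False)   # columns of U are the POD basis vectors
energy = np.cumsum(s**2) / np.sum(s**2)                # P_r for r = 1, 2, ...
r = int(np.searchsorted(energy, 0.99) + 1)             # smallest r capturing 99% of the energy
Phi_r = U[:, :r]                                       # retained POD basis
A_coeffs = Phi_r.T @ X_snap                            # POD coefficients of the snapshots
print(r, energy[r - 1])
```

The retained basis Phi_r and the coefficient trajectories A_coeffs are exactly the quantities that feed the Galerkin projection of the next step.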
3.4.4. Construction of the Model for the First POD Coefficients
The Galerkin projection is the most common way of deriving the dynamical model for the POD coefficients, and it will be the method used in this work. Let us define a residual function for (27) as follows: and we replace by its th-order approximation in (32); the Galerkin projection states that the projection of on the space spanned by the basis functions vanishes, that is, where denotes the inner product. Replacing by its th order approximation in (27) and applying the inner product criterion (Galerkin projection) to the resulting equation, we have By evaluating the inner product in (34), and we obtain the model for the first POD coefficients. The reduced order model of the reactor with only 50 states is then given by where , and . The initial condition for reads as .
For validating the reduced order model of the reactor, we applied constant input signals (K), ( K), and ( K) and constant perturbation signals ( mol/m3) and ( K) to both the full-order model (15) and the reduced order model (36), and afterwards we compared their responses. Figures 7, 8, and 9 show the temperature and concentration profiles of the reactor at different time instants and coordinates for each model. In order to measure the quality of the reduced-order model, the averages of the absolute error for the temperature and concentration were calculated by means of the following formulas: where is the number of time steps and s. The plots of and are shown in Figure 10. For the temperature profile, the maximum value for the error is 0.7542 K. For the concentration, the maximum peak for the error is mol/m3. From the previous results, we can conclude that the reduced order model with only 50 states provides an acceptable approximation of the full order model.
The discrete-time version of (36) that is used for designing the digital controller was obtained using the discretization method known as zero-order hold (ZOH) with a sampling time of 0.2 s: where , , and are the matrices describing the new system. A modeling approach frequently adopted in model predictive controller (MPC) considers a discrete-time state-space model in the incremental form ; hence (38) can be represented in the following form: where is the input increment, is the disturbance increment, and , are transformation matrices. In the state equation defined in (39), the state component corresponds to the integrating poles produced by the incremental form of the model, and corresponds to the system modes. For stable systems, it is easy to show that when the system approaches steady state, component tends to zero. is a diagonal matrix with components corresponding to the poles of the system.
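A minimal sketch of the zero-order-hold discretization step is shown below using SciPy. The reduced-order matrices are stable random placeholders rather than the reactor's actual reduced matrices; only the 0.2 s sampling time follows the text, and the subsequent rearrangement into the incremental form is not shown.

```python
# Sketch of the zero-order-hold discretization of a continuous-time reduced-order model.
# The 50-state matrices below are stable random placeholders, not the reactor's reduced
# matrices; only the 0.2 s sampling time follows the text.
import numpy as np
from scipy.signal import cont2discrete

r, m = 50, 3
rng = np.random.default_rng(1)
Ar = -np.eye(r) + 0.05 * rng.standard_normal((r, r))   # placeholder reduced dynamics
Br = rng.standard_normal((r, m))                       # placeholder reduced input matrix
Cr = rng.standard_normal((1, r))
Dr = np.zeros((1, m))

Ad, Bd, Cd, Dd, dt = cont2discrete((Ar, Br, Cr, Dr), dt=0.2, method="zoh")
print(Ad.shape, Bd.shape, dt)
```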
4. Model Predictive Control of the Tubular Reactor
Model predictive control (MPC), also referred to as receding horizon control (RHC) or moving horizon control, is a control strategy where a finite or infinite horizon open-loop optimal control problem is solved on-line at each sampling time, using the current state of the plant as the initial state, in order to obtain a sequence of future control actions of which only the first is applied to the plant. Solving an optimization problem on-line in which common plant constraints are included makes MPC different from conventional optimal control, which uses a precomputed control law . MPC has been widely adopted by the industrial process control community and implemented successfully in many applications, for several reasons. First of all, the MPC algorithms can handle in a very natural way constraints on both process inputs (manipulated variables or control actions) and process output values (controlled variables), which often have a significant impact on the quality, effectiveness, and safety of production. Additionally, the MPC controllers can take into account the internal interactions within the process, thanks to the multivariable models on which they are typically based. This makes the MPC algorithms a quite suitable option for multivariable process control. Another reason for the success of MPC is the fact that the principle of operation is comprehensible and relatively easy to explain to process operators and engineers, which is an important aspect when introducing new techniques into industrial practice. MPC technology can be found in a wide variety of application areas including chemicals, food processing, automotive, and aerospace applications. A recent survey that provides an overview of commercially available model predictive control technology can be found in . Several notable past reviews regarding theoretical and practical aspects of MPC are offered in [37, 39]. Linear MPC refers to a family of MPC schemes in which linear models are used to predict the system’s dynamics, even though the dynamics of the closed-loop system is nonlinear due to the presence of constraints. Throughout this work, we deal with MPC controllers based on discrete-time linear time invariant (LTI) models in state-space form. Model predictive control based on linear models has been successfully implemented in the control of nonlinear distributed parameter systems [40–42]. Nonlinear model predictive control (NMPC) has gained popularity for low-order lumped-parameter nonlinear systems, but for large-scale distributed systems, the computational cost of solving the nonconvex NLP problem is still excessive.
4.1. Infinite Horizon Model Predictive Control
A modelling approach frequently adopted in model predictive controller (MPC) considers a discrete-time state-space model in incremental form :
MPC is usually based on a discrete-time state-space model as shown in (41). The cost of the infinite horizon MPC considered here can be defined as follows: where is the output prediction at time instant made at time , is the desired output reference, is the control horizon, and and are positive definite weighting matrices. The controller that is based on the minimization of the above cost function corresponds to the MPC for the output-tracking case. Most of the infinite horizon controllers reduce to finite horizon controllers by defining a terminal state penalty . For the cost defined in (42), such a terminal penalty is computed by the following Lyapunov equation : Since an infinite horizon is used and the model defined in (39) has integrating modes, terminal constraints must be added to prevent the cost from becoming unbounded. Hence, constraints can be written as follows: where With the terminal penalty, the cost defined in (42) reduces to
Finally, the control optimization problem of the infinite horizon MPC can be formulated as subject to
For large changes on or or if corresponds to an unreachable steady state, then the optimization problem defined through (47)-(48) may become infeasible because of a conflict between constraints. Consequently, the MPC as defined above cannot be implemented in practice.
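The terminal penalty obtained from the Lyapunov equation (43) can be computed numerically as in the sketch below, which assumes a stable discrete-time model and an output weighting applied to the system modes; the matrices are placeholders, and the exact weighting in the paper's equation (43) may differ.

```python
# Sketch of computing a terminal penalty for the infinite-horizon cost of a stable
# discrete-time model by solving a discrete Lyapunov equation. The matrices are
# placeholders, and the exact weighting in the paper's equation (43) may differ.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(2)
n, p = 6, 2
Ad = 0.9 * np.diag(rng.uniform(0.1, 1.0, n))        # stable system modes (placeholder)
Cd = rng.standard_normal((p, n))                    # output map (placeholder)
Qy = np.eye(p)                                      # output weighting

# Solve Ad^T P Ad - P + Cd^T Qy Cd = 0, so that the infinite tail of the output cost
# beyond the control horizon equals x_N^T P x_N for the unforced system.
P = solve_discrete_lyapunov(Ad.T, Cd.T @ Qy @ Cd)
print(np.allclose(Ad.T @ P @ Ad - P + Cd.T @ Qy @ Cd, np.zeros((n, n)), atol=1e-8))
```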
4.2. Extended Infinite Horizon Model Predictive Control
To produce an infinite horizon MPC, which is implementable in practice, the objective function of infinite horizon MPC is redefined as follows: where is a vector of slack variables and is assumed positive definite. Observe that each slack variable refers to a given controlled output. Weight matrix should be selected such that the controller tends to zero the slacks or at least minimize them depending on the number of inputs, which are not constrained. Analogously to the MPC, the extended infinite horizon controllers reduce to finite horizon controllers by defining a terminal state penalty that is obtained by solving (43), and terminal constraints must be added to prevent the cost from becoming unbounded; this constraint can be written as follows: Hence, the control objective defined in (49) becomes as follows:
Finally, the control optimization problem of the extended infinite horizon MPC can be formulated as follows: subject to
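For orientation, a generic slack-augmented infinite-horizon MPC problem of this kind is written out below in assumed notation; it is a sketch of the standard structure, not necessarily the paper's exact cost and constraint set.

```latex
% Generic slack-augmented infinite-horizon MPC in assumed notation: \Delta u_j input
% increments over a control horizon m, \delta an output slack, S the slack weight.
\[
  \min_{\Delta u_0,\dots,\Delta u_{m-1},\,\delta}\;
  \sum_{j=1}^{\infty}\bigl\| y_j - r - \delta \bigr\|_{Q}^{2}
  + \sum_{j=0}^{m-1}\bigl\| \Delta u_j \bigr\|_{R}^{2}
  + \bigl\| \delta \bigr\|_{S}^{2}
\]
\[
  \text{s.t.}\qquad
  u_{\min} \le u_j \le u_{\max},\qquad
  \Delta u_{\min} \le \Delta u_j \le \Delta u_{\max},\qquad
  x^{d}_{m} - \delta = 0 .
\]
```

In such a formulation the infinite output sum is evaluated through the terminal penalty of (43), and the slack on the terminal condition keeps the problem feasible for unreachable set points, in the spirit of Theorem 1.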
Theorem 1. For a stable system, if in the control objective defined in (51) the slack weight matrix is positive definite, then the control law produced by the solution to the problem defined in (52) drives the system output to the desired set point if it corresponds to a reachable state. If the desired set point is not reachable, the controller will stabilize the system in a steady state such that the distance between this steady state and the desired steady state is minimized.
Proof. The proof is provided in .
4.3. Extended Infinite Horizon MPC and POD Applied to Control of a Tubular Reactor
The control objective is to reject the disturbances that affect the reactor, that is, the changes in the temperature and concentration of the feed flow. In addition, the control actions must satisfy the input constraints of the process (), and the control system should keep the temperature inside the reactor below 335 K. In , two POD-based IHMPC control schemes for a nonisothermal tubular reactor were presented. In the first scheme the IHMPC controller is formulated in terms of the POD coefficients, and in the second scheme the IHMPC is formulated in terms of physical variables. In the first case, the control of the reactor profiles is achieved indirectly by controlling the POD coefficients, which have no physical meaning. This makes the tuning of the controller less intuitive and the definition of the control goals less flexible. This is not the case for the second IHMPC controller, whose formulation is in terms of the temperature at some selected points along the reactor and the concentration at the reactor output. In this work we explore the scheme based on the POD coefficients, where the control of the temperature and concentration profiles is achieved indirectly and where the references of these POD coefficients can be calculated by where is the reference of the vector and is equal to zero (the model of the MPC is a discrete-time linear model), since the control system has to keep the reactor operating around the profiles shown in Figure 4. The MPC controller, which uses model (39) to predict the future behaviour of the reactor, is formulated as (52) and (53).
In this formulation . Since the state vector is unknown and the changes in the concentration of the feed flow ( are not measured directly, they are estimated by means of an observer (in this case a Kalman filter) with the following formulation: where is the estimated vector of the POD coefficients, is the estimation , is the normalized temperature deviation of the feed flow , is a vector containing ten temperature measurements (normalized deviations) along the reactor, is the estimate of , and are the submatrices of the observer gain (Kalman gain), and are the column vectors of , and is a selection matrix which selects the measured temperatures from the vector .
The control horizon was set to samples; and were selected according to the input constraints of the process and the operating temperatures of the jackets, and the weighting matrices were in this way , and . The Kalman gain matrix was computed from the following covariance matrices: , .
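A minimal sketch of how a steady-state Kalman gain can be computed from the process and measurement covariance matrices mentioned above is given below, together with one predict/correct step of the observer. The model dimensions, matrices, and covariance values are placeholders, not the reactor's.

```python
# Sketch of computing a steady-state Kalman gain from assumed process and measurement
# covariances for a discrete-time model, plus one predict/correct step of the observer.
# All matrices, dimensions, and covariance values are placeholders, not the reactor's.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(3)
n, p = 8, 10                                   # states (POD coefficients + disturbance), measurements
Ad = 0.95 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
Cd = rng.standard_normal((p, n))               # stands in for the temperature-selection map
Qw = 1e-3 * np.eye(n)                          # process noise covariance (assumed)
Rv = 1e-2 * np.eye(p)                          # measurement noise covariance (assumed)

# Steady-state prediction-error covariance from the filter Riccati equation, then the gain.
P = solve_discrete_are(Ad.T, Cd.T, Qw, Rv)
L = P @ Cd.T @ np.linalg.inv(Cd @ P @ Cd.T + Rv)

x_hat = np.zeros(n)                            # current state estimate
y_meas = rng.standard_normal(p)                # placeholder measurement vector
x_hat = Ad @ x_hat                             # predict (input/disturbance terms omitted)
x_hat = x_hat + L @ (y_meas - Cd @ x_hat)      # correct with the measured temperatures
```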
4.4. Simulation Results
In order to evaluate the performance of the control system, the following tests were carried out.
Test 1. The temperature of the feed flow increased by 10 K at 5000 s, and the concentration of the feed flow decreased to mol/m3 at 5 s.
Test 2. The temperature of the feed flow decreased by 10 K at 5000 s, and the concentration of the feed flow increased to mol/m3 at 5 s.
These disturbances have a big impact on the temperature profile of the reactor.
Furthermore, some quantities of interest are given in Table 2. In this table, is the maximum temperature reached inside the reactor during the test. is the percentage of the change of the mean steady state concentration at the reactor outlet with respect to its nominal value, that is, where is the mean nominal value (0.0339 mol/m3) and is the mean concentration at the reactor outlet in steady state after the test.
In general, the control schemes showed a good behavior for rejecting the disturbances (typical magnitudes: mol/m3 and K) and both presented a similar performance.
In this work, it is shown how POD and Galerkin projections can be used for deriving reduced order model of systems with reaction, diffusion, and convection in two dimensions. The method proposed here is illustrated with a nonisothermal reactor and based on the proposed reduced model, a state observer and a predictive controller are designed and tested.
The algorithm proposed in to find the steady-state operating profiles is extended for the reactor with diffusion, reaction, and convection in two dimensions, and it is shown that it works properly while complying with the reactor operating constraints.
The POD method is characterized by its capability to describe the spatial distribution of the relevant physical variables in terms of a set of orthonormal basis functions. These basis functions are selected from observed data and are optimal in a well-defined sense. In the nonisothermal tubular reactor model, the spatial domain is discretized into a high number of grid cells, while in POD models, the spatial distributions are described by the first few and most relevant POD basis functions. The time-dependent characteristics of the variables are given by the time-varying coefficients of the POD basis functions. The model of the time-varying coefficients is called the reduced-order model and is obtained by a Galerkin projection of the original governing equations onto the POD basis functions. The results presented in this work show that, with very few POD basis functions (less than 3% of the number of grid cells), the temporal and spatial dynamics of the nonisothermal tubular reactor with diffusion, reaction, and convection are approximated acceptably.
In the application of POD technique, the data matrix () was taken from the nonlinear system unlike in . The reason why this was done is that the linear model did not capture the reaction kinetics (irreversible reaction) in a desired way. The use of the non-linear model data gives a more realistic sense to the results of this work because usually the data would be taken from the process. However, using nonlinear model data increases the number of basis functions.
In Section 4, an infinite horizon MPC controller has been designed for the nonisothermal tubular reactor model with axial and radial diffusion on the basis of the reduced order model. The true model of the reactor is nonlinear, and the linearized model is of very high order. The control and optimization problem becomes very tractable if the model can be reduced based on a small number of POD basis functions inferred from the open-loop data. It is shown in Section 4 that the desired temperature and concentration distribution can be controlled using the reduced order model as the base model for the controller (Figures 11 and 17); in this case, the control of the reactor profiles is achieved indirectly by controlling the POD coefficients which have no physical meaning. A very important result of this work is the trade-off between complexity and performance; on the one hand it was possible to reduce the complexity of a high-order model to design control systems and estimators. On the other hand in spite of the spatial discretization of the nonlinear PDE’s describing the reactor, the linearization, and the dramatic reduction of the order by means of POD, the controller has a good performance in order to keep the operation of the reactor at a desired operating condition despite the disturbances in the feed flow. However, if larger disturbances are applied to the tubular chemical reactor, the behaviour of the MPC controllers may not be as good as it has been thus far. This is due to the differences between the nonlinear model and linear model and consequently the reduced order model.
The authors acknowledge the European 7th Framework STREP project Hierarchical and Distributed Model Predictive Control (HD-MPC), contract no. INFSO-ICT-223854, for funding this work.
- A. C. Antoulas, Approximation of Large-Scale Dynamical Systems (Advances in Design and Control), vol. 6, Society for Industrial and Applied Mathematics (SIAM), 2005.
- K. Karhunen, “Zur spectral theorie stochastischer prozesse,” Annales Academiæ Scientiarum Fennicæ, vol. 36, pp. 1–7, 1946.
- M. Loeve, “Fonctions aleatoires de second ordre,” Comptes Rendus De L'Académie Des Sciences, vol. 220, 1945.
- J. L. Lumley, Stochastic Tools in Turbulence, vol. 12 of Applied Mathematics and Mechanics, Academic Press, 1970.
- G. Berkooz, P. Holmes, and J. L. Lumley, “The proper orthogonal decomposition in the analysis of turbulent flows,” Annual Review of Fluid Mechanics, vol. 25, pp. 539–575, 1993.
- F. Kwasniok, “The reduction of complex dynamical systems using principal interaction patterns,” Physica D, vol. 92, no. 1-2, pp. 28–60, 1996.
- F. Kwasniok, “Optimal Galerkin approximations of partial differential equations using principal interaction patterns,” Physical Review E, vol. 55, no. 5, pp. 5365–5375, 1997.
- P. Comon, “Independent component analysis: a new concept,” Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
- P. Druault, J. Delville, and J. P. Bonnet, “Proper orthogonal decomposition of the mixing layer flow into coherent structures and turbulent Gaussian fluctuations,” Comptes Rendus, vol. 333, no. 11, pp. 824–829, 2005.
- S. Rahal, P. Cerisier, and H. Azuma, “Application of the proper orthogonal decomposition to turbulent convective flows in a simulated Czochralski system,” International Journal of Heat and Mass Transfer, vol. 51, no. 17-18, pp. 4216–4227, 2008.
- G. Solari and F. Tubino, “A turbulence model based on principal components,” Probabilistic Engineering Mechanics, vol. 17, no. 4, pp. 327–335, 2002.
- M. V. Tabib and J. B. Joshi, “Analysis of dominant flow structures and their flow dynamics in chemical process equipment using snapshot proper orthogonal decomposition technique,” Chemical Engineering Science, vol. 63, no. 14, pp. 3695–3715, 2008.
- Y. Utturkar, B. Zhang, and W. Shyy, “Reduced-order description of fluid flow with moving boundaries by proper orthogonal decomposition,” International Journal of Heat and Fluid Flow, vol. 26, no. 2, pp. 276–288, 2005.
- M. Amabili, A. Sarkar, and M. P. Païdoussis, “Chaotic vibrations of circular cylindrical shells: galerkin versus reduced-order models via the proper orthogonal decomposition method,” Journal of Sound and Vibration, vol. 290, no. 3–5, pp. 736–762, 2006.
- U. Galvanetto and G. Violaris, “Numerical investigation of a new damage detection method based on proper orthogonal decomposition,” Mechanical Systems and Signal Processing, vol. 21, no. 3, pp. 1346–1361, 2007.
- P. B. Gonçalves, F. M. A. Silva, and Z. J. G. N. Del Prado, “Low-dimensional models for the nonlinear vibration analysis of cylindrical shells based on a perturbation procedure and proper orthogonal decomposition,” Journal of Sound and Vibration, vol. 315, no. 3, pp. 641–663, 2008, EUROMECH colloquium 483, Geometrically non-linear vibrations of structures.
- M. Amabili, A. Sarkar, and M. P. Païdoussis, “Reduced-order models for nonlinear vibrations of cylindrical shells via the proper orthogonal decomposition method,” Journal of Fluids and Structures, vol. 18, no. 2, pp. 227–250, 2003, Axial and Internal Flow Fluid-Structure Interactions.
- B. F. Feeny and Y. Liang, “Interpreting proper orthogonal modes of randomly excited vibration systems,” Journal of Sound and Vibration, vol. 265, no. 5, pp. 953–966, 2003.
- M. Khalil, S. Adhikari, and A. Sarkar, “Linear system identification using proper orthogonal decomposition,” Mechanical Systems and Signal Processing, vol. 21, no. 8, pp. 3123–3145, 2007.
- X. Gilliam, J. P. Dunyak, D. A. Smith, and F. Wu, “Using projection pursuit and proper orthogonal decomposition to identify independent flow mechanisms,” Journal of Wind Engineering and Industrial Aerodynamics, vol. 92, no. 1, pp. 53–69, 2004.
- T. Katayama, H. Kawauchi, and G. Picci, “Subspace identification of closed loop systems by the orthogonal decomposition method,” Automatica, vol. 41, no. 5, pp. 863–872, 2005.
- D. Chelidze and W. Zhou, “Smooth orthogonal decomposition-based vibration mode identification,” Journal of Sound and Vibration, vol. 292, no. 3–5, pp. 461–473, 2006.
- O. M. Agudelo, The application of proper orthogonal decomposition to the control of tubular reactors [Ph.D. thesis], Katholieke Universiteit Leuven, 2009.
- F. Leibfritz and S. Volkwein, “Reduced order output feedback control design for PDE systems using proper orthogonal decomposition and nonlinear semidefinite programming,” Linear Algebra and Its Applications, vol. 415, no. 2-3, pp. 542–575, 2006, Special Issue on Order Reduction of Large-Scale Systems.
- D. Hömberg and S. Volkwein, “Control of laser surface hardening by a reduced-order approach using proper orthogonal decomposition,” Mathematical and Computer Modelling, vol. 38, no. 10, pp. 1003–1028, 2003.
- H. V. Ly and H. T. Tran, “Modeling and control of physical processes using proper orthogonal decomposition,” Mathematical and Computer Modelling, vol. 33, no. 1–3, pp. 223–236, 2001, Computation and control VI proceedings of the sixth Bozeman conference.
- J. A. Atwell and B. B. King, “Proper orthogonal decomposition for reduced basis feedback controllers for parabolic equations,” Mathematical and Computer Modelling, vol. 33, no. 1–3, pp. 1–19, 2001, Computation and control VI proceedings of the sixth Bozeman conference.
- R. Padhi and S. N. Balakrishnan, “Proper orthogonal decomposition based optimal neurocontrol synthesis of a chemical reactor process using approximate dynamic programming,” Neural Networks, vol. 16, no. 5-6, pp. 719–728, 2003, Advances in Neural Networks Research (IJCNN '03).
- C. Xu, Y. Ou, and E. Schuster, “Sequential linear quadratic control of bilinear parabolic PDEs based on POD model reduction,” Automatica, vol. 47, no. 2, pp. 418–426, 2011.
- S. S. Ravindran, “Control of flow separation over a forward-facing step by model reduction,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 41-42, pp. 4599–4617, 2002.
- S. S. Ravindran, “Optimal boundary feedback flow stabilization by model reduction,” Computer Methods in Applied Mechanics and Engineering, vol. 196, no. 25-28, pp. 2555–2569, 2007.
- W. Xie, I. Bonis, and C. Theodoropoulos, “Off-line model reduction for on-line linear MPC of nonlinear large-scale distributed systems,” Computers and Chemical Engineering, vol. 35, no. 5, pp. 750–757, 2011.
- P. Astrid, Reduction of process simulation models: a proper orthogonal decomposition approach [Ph.D. thesis], Technishche Universiteit Eindhoven, 2004.
- S. Fogler, Elements of Chemical Reaction Engineering, Prentice Hall, Boston, Mass, USA, 4th edition, 2008.
- O. M. Agudelo, J. J. Espinosa, and B. De Moor, “Control of a tubular chemical reactor by means of POD and predictive control techniques,” in Proceedings of the European Control Conference (ECC '07), vol. 20, pp. 1046–1053.
- M. A. Rodrigues and D. Odloak, “MPC for stable linear systems with model uncertainty,” Automatica, vol. 39, no. 4, pp. 569–583, 2003.
- D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained model predictive control: stability and optimality,” Automatica, vol. 36, no. 6, pp. 789–814, 2000.
- S. J. Qin and A. B. Badgwell, “A survey of industrial model predictive control technology,” Control Engineering Practice, vol. 11, no. 7, pp. 733–764, 2003.
- C. E. García, D. M. Prett, and M. Morari, “Model predictive control: theory and practice: a survey,” Automatica, vol. 25, no. 3, pp. 335–348, 1989.
- D. Panagiotis and D. Prodromus, “Nonlinear control of diffusion-convection-reaction processes,” Computers and Chemical Engineering, vol. 20, Supplement 2, pp. S1071–S1076, 1996.
- M. Li and D. Panagiotis, “Optimal transition control of diffusion-convection-reaction processes,” in Proceedings of the 8th International IFAC Symposium on Dynamics and Control of Process System, Cancun, Mexico, 2007.
- Y. Ou and E. Schuster, “Model predictive control of parabolic PDE systems with dirichlet boundary conditions via galerkin model reduction,” in Proceedings of the American Control Conference (ACC '09), pp. 1–7, June 2009.
- J. H. Lee, M. Morari, and C. E. Garcia, “State-space interpretation of model predictive control,” Automatica, vol. 30, no. 4, pp. 707–717, 1994.
- K. R. Muske and J. B. Rawlings, “Model predictive control with linear models,” AIChE Journal, vol. 39, no. 2, pp. 262–287, 1993.
- D. Odloak, “Extended robust model predictive control,” AIChE Journal, vol. 50, no. 8, pp. 1824–1836, 2004.
- A. Marquez, J. J. Espinosa, and D. Odloak, “IHMPC and POD to the control of a non-isothermal tubular reactor,” in Proceedings of the 9th International Symposium on Dynamics and Control of Process Systems (DYCOPS '10), pp. 431–436, 2010.
| 0.9868
|
FineWeb
|
The Jackson County Health Department is expanding its vaccine program to offer free shots to uninsured and under-insured adults.
The vaccines include:
- Zoster Vaccine (Shingles) for adults 60-64 years old
- HPV Vaccine for women 19-26 years old
- Varicella Vaccine (Chickenpox) for people born after 1980 with no evidence of immunity
- Pneumonia Vaccine for adults 19 years or older that meet risk criteria
- Twinrix Vaccine (HepA/HepB combination) for high-risk individuals
The vaccines are free for uninsured and under-insured adults. Patients who can afford it may be asked to pay an administrative fee of $16.75 per vaccine. Call the Jackson County Health Department's immunization clinic at (517) 788-4468 for more information.
| 0.7966
|
FineWeb
|
Alzheimer’s is a common disease affecting the brain. Alzheimer’s disease is the most common cause of dementia, which affects brain functions such as memory, language, thinking, and problem solving. Although these changes may seem very minor and insignificant at first, they eventually worsen until they affect the person’s everyday life.
What is Alzheimer’s disease?
This medical condition gets its name from the doctor Alois Alzheimer, who first identified and described the disease as a physical condition affecting the brain. In the UK alone, more than 520,000 people are living with this disease. The disease causes proteins to build up in the brain, leading to the formation of abnormal structures called plaques and tangles. Due to this build-up, connections between the nerve cells are lost, which eventually causes the nerve cells to die.
Those suffering from Alzheimer’s disease lack some essential chemicals in the brain, the messengers that help transmit signals to and from different parts of the brain. Because of this shortage, messages are not transmitted effectively, which affects brain function and causes symptoms such as loss of memory, confusion, impaired decision making, and an inability to process information effectively.
Alzheimer’s is a disease that develops slowly and progresses further as more and more parts of the brain get damaged. The progression can be seen in the symptoms as they go on to becoming more severe gradually.
Are there specific signs and symptoms that help early diagnosis?
Alzheimer’s symptoms do not crop up all of a sudden and in most cases are very mild to begin with. In fact the symptoms are so mild that often they are taken as casual forgetfulness and so on. However, as they worsen they could bring about a drastic change in the life and lifestyle of the patient living with the disease. Like many other diseases two people can experience different symptoms of Alzheimer’s; however there are some common symptoms that are unique to the disease.
The most frequently experienced symptom of this disease is loss of memory. Also known as memory lapses, this makes it difficult to recall recent events and to grasp new information. In Alzheimer’s, damage occurs to a part of the brain known as the hippocampus.
This part is the memory reservoir and processing unit of the brain. In the early stages, older memories are not forgotten, but storing recent memories becomes difficult. Everyday tasks that depend on memory include: remembering where you kept items of everyday use like keys or a wallet; remembering the names of people you deal with; recalling recent conversations and events; following familiar routes; and keeping appointments or important dates like birthdays and anniversaries. In the early stages of Alzheimer’s, all or a few of these memory-related functions may be affected.
As the disease progresses, the problems connected with brain functioning become more complex, involving other functions such as thinking, reasoning, language, and communication. The following are the main difficulties faced by people with Alzheimer’s symptoms:
- Finding it difficult to follow a conversation or repeating the same thing without realizing
- Finding it difficult to take judgmental calls such as analyzing the distances. People experiencing this can find it difficult to drive or park the vehicle
- Finding it difficult to take independent decision, problem solving or carrying out simple activities such as cooking or writing
- Finding it difficult to focus or concentrate that can make you lose track of dates or important days.
Mood swings, anxiety, irritability and depression are also common early signs of Alzheimer’s.
As the disease advances, the symptoms become more intense. Problems with thinking, memory and orientation grow more severe, making the condition harder for both the patient and their loved ones to cope with. In the later stages, many people also experience hallucinations, perceiving things and events that are not real.
Alzheimer’s can also cause unusual behaviour, including aggression and constant irritability, which makes the condition particularly challenging to deal with. Eventually, people with Alzheimer’s may fail to recognise the things and people around them, and they become prone to accidents and sudden falls as their sense of balance deteriorates. Most of these symptoms leave the patient dependent on others even for everyday chores.
Who is at risk of developing Alzheimer’s disease?
Age is the most important factor: Alzheimer’s is largely a disease of older people, although cases in younger people are not unheard of. When the disease appears early it is known as early-onset Alzheimer’s disease, a form of young-onset dementia. The risk of developing Alzheimer’s depends on a mix of factors, some of which are controllable but most of which are not. The most common risk factors associated with the disease are:
- Age: People over 65 are the most susceptible, and the risk increases with age. Research suggests the risk roughly doubles with every five years of ageing.
- Gender: Alzheimer’s is seen more often in women than in men. The evidence is not conclusive, but a possible link has been suggested between menopause, the accompanying drop in the hormone oestrogen, and women’s risk of the disease.
- Heredity and genes: In some families the disease clearly runs across generations, and scientists are still working to establish exactly which genes are responsible. Where Alzheimer’s has a clear genetic cause, it can appear well before the age of 65, in the younger age bracket.
- Lifestyle & overall wellbeing: A number of medical conditions such as strokes, diabetes, obesity, hypertension etc. are associated risks with Alzheimer’s disease. Leading a healthy lifestyle with exercise, healthy eating and socializing helps a big way in reducing the risks of acquiring this disease.
Diagnosing Alzheimer’s disease:
If Alzheimer’s disease runs in your family, or you are experiencing any of the symptoms discussed above, have an open discussion with your doctor. Early diagnosis goes a long way in managing conditions of this kind that affect quality of life. Depending on that discussion, your GP may refer you to a specialist, such as a psychiatrist who works particularly with older people or a neurologist, to confirm the diagnosis.
Diagnosis rests primarily on an assessment of the symptoms, when they started and how they have progressed. A particular difficulty with Alzheimer’s is that, by the very nature of the disease, patients tend to forget their own symptoms. It helps if the people caring for the patient accompany them to these appointments, as they are often more aware of the changes than the patient is. Tests such as brain scans and MRI are also particularly helpful; images showing a shrunken hippocampus or surrounding region can help confirm the diagnosis.
Can Alzheimer’s Disease Be Treated?
Unfortunately, Alzheimer’s disease has no definitive cure, but a great deal can be done to manage it and make living with it easier. A number of drugs can be prescribed to help patients cope with the symptoms and, in some cases, slow the progression of the disease.
Drugs such as donepezil (e.g. Aricept), rivastigmine (e.g. Exelon) or galantamine (e.g. Reminyl) can help reduce symptoms such as memory and concentration problems.
To help people with Alzheimer’s keep to a routine, it is useful to write down a list of daily activities they can refer to when memory fails. One of the hardest aspects of the disease is the way it makes patients dependent on others, which can be both demoralising and deeply depressing. Talking to a doctor or to family members can really help patients cope with these feelings.
Although Alzheimer’s is a physical disease, it takes a real mental and emotional toll on the patient and on those caring for them. While the disease cannot be cured, timely medical intervention can help keep the symptoms from worsening and help patients lead as normal a life as possible.
Matt Bailey is a noted writer, content marketer and social strategist at FindaTopDoc, where you can find a local doctor by specialty and insurance.
| 0.8808
|
FineWeb
|
Although he has solid academic credentials and awards for his trilogy of historical works debunking the myth of the American West, Richard Slotkin gets only qualified respect from most academic historians.
The professional disdain may be the result of his staccato, direct writing style, which ranges from didactic to pedantic and often approaches arrogance. Though similarly flawed, his new volume about the Civil War battle of Antietam may be useful to the dedicated student of the war, but too argumentative for the general reader.
Most historians regard Antietam (or Sharpsburg, as it’s known to Southerners) as a linchpin in the formation of both Union and Confederate strategies after 1862. The military outcome of this daylong conflict thwarted Robert E. Lee’s attempt to carry the war out of Virginia and into the North.
The more important result was that it gave Abraham Lincoln a tactical victory over the Army of Northern Virginia, something George McClellan had been unable to deliver. Although Lee retreated with most of his army intact, the Union could declare a win, and this provided Lincoln with the impetus to issue the Emancipation Proclamation.
Slotkin traces the movement of both Lee’s and McClellan’s forces as they feinted and jabbed at each other through the summer of 1862, leading finally to Lee invading Maryland, defeating John Pope, who briefly replaced McClellan, at Second Manassas, and capturing Harper’s Ferry.
As Lee moved north, McClellan, restored to command, returned to his posture of grossly overestimating the size of Lee’s army and of refusing to give battle until the infamous discovery of Lee’s lost orders, which had been used to wrap a bundle of cigars found by Union soldiers. This gave McClellan knowledge of Lee’s intentions and the courage to act.
What followed was a series of maneuvers that led both armies to the banks of Antietam Creek on the outskirts of Sharpsburg, Md., and embroiled them in one of the bloodiest battles of the war.
Slotkin spends half of the book analyzing the political machinations that informed both Lincoln’s and McClellan’s actions. His study of documents, letters, dispatches and newspaper editorials reveals that McClellan’s ambitions went beyond battlefield victory; he sought political triumph. Sympathetic more to the Democratic Party and in possession of open contempt for Lincoln, McClellan vigorously opposed a general emancipation. He also advocated for a military dictatorship, a position that would free the military from civilian, particularly presidential, control.
As a moderate Republican, Lincoln was noncommittal on the question of emancipation, hoping to contain the conflict militarily and bring the rebellious states to a negotiated peace while sustaining the vast amount of property value represented by black slaves. McClellan’s dithering and reluctance to commit to full battle — even at Antietam — forced Lincoln to move his position to the political left, thereby freeing slaves in rebellious states and paving the way for eventual equality of all people of color, regardless of the economic and political risks.
Slotkin’s examination reveals how McClellan’s deluded political ambitions shaped his actions and led, ultimately, to allowing Lee’s forces to slip away after the battle, then to regroup and carry on the war for another three bloody years.
This study offers nothing new. Slotkin’s close examination of the tactical strategies of both sides is clear, and his suppositions about what might have happened are interesting from a military history standpoint. His political analysis brings into candid focus the motivations of both McClellan and Lincoln as they sparred with each other politically and finally chose courses of action that had lasting impact.
He also illustrates how the young and still formative nation narrowly escaped falling into dictatorship due to the crisis and shows how the battle and the proclamation it occasioned changed a war of secession into a formal revolution, one that had to be completely suppressed to save the republic.
Clay Reynolds is a professor of arts and humanities at the University of Texas at Dallas. His latest book is Hero of A Hundred Fights.
The Long Road to Antietam
How the Civil War Became a Revolution
| 0.651
|
FineWeb
|
Sometimes web developers are unable to create a thank-you or confirmation page for a form submission. This leaves you unable to easily track how many forms have been completed, as goals in Google Analytics require a destination URL. However, there is a nice and easy way around this: with a small amount of code you can record a pretend URL when the submit button is pressed.
This is done by using the Virtual Pageview method which, as you might have guessed, enables you to track a page that doesn’t actually exist. The virtual page will then appear in your content report and you can create a goal with this URL as the destination. An alternative would be to use Event tracking; however, if you would eventually want a real confirmation page built by your web developers, there is no harm in using Virtual Pageviews in the meantime. If you would rather not have a Virtual Pageview in your content report, you can always use Event tracking instead; details of how to do this can be found in my post about tracking clicks on a link in Google Analytics.
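For reference, here is a minimal sketch of what that Event tracking alternative could look like using the same classic asynchronous (_gaq) syntax shown later in this post; the category, action and label values ('Forms', 'Submit', 'Contact Form') are placeholders you would choose yourself:
<input name="submit" type="submit" id="Submit" value="Submit Form" onClick="_gaq.push(['_trackEvent', 'Forms', 'Submit', 'Contact Form']);"/>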
Here’s the code used in a typical form submission button; the second example below has the Virtual Pageview tracking code added:
<input name=”submit” type=”submit” id=”Submit” value=”Submit Form”/>
<input name=”submit” type=”submit” id=”Submit” value=”Submit Form” onClick=”_gaq.push([‘_trackPageview’, ‘/form/submit’]);”/>
The area to customise is the virtual URL passed to _trackPageview, in this case '/form/submit'. You can choose whatever URL you would like it to be, and if you have more than one form you can name each one differently.
Once you have chosen your virtual URL and added the code within the button, the final step is to create a goal in Google Analytics with that URL as its destination, so you can measure completions.
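If you prefer to keep JavaScript out of the markup altogether, the same call can be attached from a small script instead of the inline onClick. This is only a sketch: it assumes the button keeps the id "Submit" shown above and that the standard asynchronous Google Analytics snippet (which defines _gaq) is already on the page. Place the script after the form so the button exists when it runs:
<script>
// Attach the Virtual Pageview to the submit button without inline JavaScript.
// Assumes the standard _gaq snippet has already been loaded on the page.
document.getElementById('Submit').addEventListener('click', function () {
  // Record a pageview for a URL that does not really exist.
  _gaq.push(['_trackPageview', '/form/submit']);
});
</script>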
Web Submit Button via BigStock
| 0.5058
|
FineWeb
|
The BMJ recently published a report discussing how sugary beverages, specifically soda, are linked to higher cancer risk. Another study, published in the journal Circulation, found that sugary drinks increase the risk of cardiovascular mortality.
Numerous researchers have shed light on the harmful effects of excessive intake of sugary drinks. Not only do such drinks raise the risk of obesity, they also damage tooth enamel. Diet soda is not safe either: another recent study linked it to a higher risk of death. Sugary drinks also raise the risk of developing metabolic syndrome and diabetes.
Now a new study, published in the journal Diabetes Care, has found that sugary beverages of any kind can moderately raise the risk of type 2 diabetes. The odds of developing the condition rise whether the drinks contain added sugar or natural sugars, as in 100% fruit juices. Replacing sugary drinks with artificially sweetened drinks did not reduce this risk.
Scientists noted a decrease in risk only when water, tea or coffee replaced one daily serving of a sugary drink. The study is the first of its kind to look at the long-term effects of consuming any type of sugary beverage, and it stresses the importance of cutting back on sugary drinks and drinking tea and water instead.
Sugary Beverages And Risk Of Type 2 Diabetes
For the study, researchers analyzed data from about 192,000 men and women who took part in three long-term studies spanning roughly 22 to 26 years. Changes in sugary drink intake were measured through questionnaires the participants completed every four years.
The scientists adjusted for factors that could affect the results, including participants’ body mass index, lifestyle and dietary habits. Their findings included the following:
- Drinking more than 4 ounces of any type of sugary beverage was linked to a 16% higher risk of diabetes in the next four years.
- Drinking more than 4 ounces of artificially sweetened beverages raised risk of type 2 diabetes by 18%. However, researchers did say that findings regarding the consumption of artificially sweetened drinks did have limitations.
- Replacing one daily serving of a sugary drink with tea, coffee or water lessened diabetes risk by 2 to 10%.
Senior author of the study Frank Hu, the Fredrick J. Stare Professor of Nutrition and Epidemiology, said, “The study results are in line with current recommendations to replace sugary beverages with noncaloric beverages free of artificial sweeteners. Although fruit juices contain some nutrients, their consumption should be moderated.”
Time and again, researchers have warned against the excessive consumption of sugary beverages. From alcoholic drinks to soft drinks and fruit juices, the health risks can add up.
This new study has found that both naturally and artificially sweetened drinks are linked with a higher risk of developing type 2 diabetes, and that this risk is lowered when a sugary beverage is replaced with water or tea.
| 0.8195
|
FineWeb
|
|Budget Amount
¥2,700,000 (Direct Cost: ¥2,700,000)
Fiscal Year 2002: ¥1,200,000 (Direct Cost: ¥1,200,000)
Fiscal Year 2001: ¥1,100,000 (Direct Cost: ¥1,100,000)
Fiscal Year 2000: ¥400,000 (Direct Cost: ¥400,000)
Present CAD systems are considered to have some deficiencies: many of their elements are inaccurate or non-robust, and the systems themselves have become too complex. These problems have a direct bearing on a system's reliability. Over the course of his research, the author has come to realize that the major cause of these inherent problems of Euclidean Geometric Processing (EGP) lies in performing division operations, and he therefore proposes "Totally Four-dimensional Geometric Processing (TFGP)," which makes it possible to dispense with these detrimental operations.
The present research is a theoretical and experimental comparison between EGP and TFGP.
(1') Exactness: TFGP can perform exact computations as long as it deals with rational numbers. EGP, on the other hand, is obliged to work with approximate data, the results of the division operations inherent in EGP. This was confirmed through various experimental results.
(2') Robustness: In TFGP, there is no such instability as division by zero, because a division operation is not ordinarily performed except at the very end of the whole process. While the geometric Newton-Raphson method in EGP occasionally fails when applied to rational polynomial curves, i.e., the parameter value diverges and finally halts the algorithm, the TFGP method shows no such non-robustness because it treats a homogeneous curve, which is expressed as an ordinary curve of dimension higher by one. This superiority is borne out by many experimental data.
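As a purely illustrative sketch of the general idea (a 2D analogue using homogeneous coordinates, not the author's actual four-dimensional formulation), the following JavaScript intersects two lines using only multiplications and subtractions. A parallel pair yields a point at infinity (w = 0) rather than a division-by-zero failure, and division is deferred to the very end, when a Euclidean result is explicitly requested:
// Lines are homogeneous triples [a, b, c], meaning ax + by + c = 0.
// Their intersection is the cross product: no division is needed.
function intersect(l1, l2) {
  return [
    l1[1] * l2[2] - l1[2] * l2[1],
    l1[2] * l2[0] - l1[0] * l2[2],
    l1[0] * l2[1] - l1[1] * l2[0],
  ]; // homogeneous point [x, y, w]
}
// Division happens only here, at the very end, if a Euclidean answer is wanted.
function toEuclidean(point) {
  const [x, y, w] = point;
  return w === 0 ? null : [x / w, y / w]; // null marks a point at infinity
}
console.log(toEuclidean(intersect([1, 0, -2], [0, 1, -3]))); // [2, 3]
console.log(toEuclidean(intersect([1, 0, -2], [1, 0, -5]))); // null (parallel lines), not a crash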
(3') Compactness: Euclidean geometry tends to be more complicated because it can be regarded as a cut of a homogeneous geometry (i.e., a linear subspace) whose dimension is higher by one than that of its Euclidean counterpart. The larger number of geometric types in EGP makes it much more complex: each type must be represented mathematically on its own, and the increased number of combinations of types must all be processed.
As seen above, TFGP is superior to EGP in the three respects above, and also in terms of generality, unifiability and duality.
| 0.8783
|
FineWeb
|
The early bird catches the worm.
In our culture, “early” tends to conflate with “good.” Nowhere is this more evident than in our aspirations for our children. We want them to develop not just normally, but ahead of schedule. We puff with pride when they identify a colour at 18 months or a shape at age two. And if they don’t reach those milestones early, we take pains to nudge them toward the goal.
Why do we do it? “It’s a competitive world and parents want their kids to have a leg up,” says Janette Pelletier, an associate professor of human development and applied psychology at the Ontario Institute for Studies in Education of the University of Toronto. “If a parent hears about a new educational gizmo, she may feel compelled to use it so her kids don’t miss out.”
But will they, in fact, miss out? On the one hand, with 85 percent of brain growth occurring in the first three years of life, “it makes sense to stimulate the rapidly growing brain,” says Janice Im, senior program manager at Zero to Three, a US non-profit organization devoted to improving the lives and development of infants and toddlers. But, she stresses, “the child has to be ready to make the cognitive leap. If the challenge is too great, the stress will make her anxious and unable to focus on learning.” It also bears noting that learning doesn’t begin and end with “academic” skills like letters, numbers and colours. Everyday play is essential. In fact, “playing is learning for young children,” says Im. They’re lifting, pouring, bouncing, floating, balancing, telling stories, using their imaginations — experiences rich with learning opportunities. “You don’t need formal lessons to provide such opportunities — on the contrary, too much structure could interfere with this natural process.”
The reading race
If there’s any area in which parents want their kids to excel, it’s reading. Any parent who’s spent time in a schoolyard can recall the mom who “casually” mentioned her five-year-old son “just couldn’t put down” the first Harry Potter book. When it comes to literacy, no age is too young seems to be the prevailing mantra. Companies are coming out with reading-instruction programs for increasingly younger kids — even babies.
Victoria Purcell-Gates, Canada Research Chair of early childhood literacy at the University of British Columbia, takes strong exception to this trend. “You can train a one-year-old to recognize individual words, but that’s not reading,” she says. Besides, “there’s no evidence that getting toddlers to read will make them better readers as adults. There are societies in which kids don’t start to read print until age seven, and they’re at no disadvantage.”
It’s true that some kids (about three percent of the population) pick up reading very early, seemingly without effort or instruction. Intuition might suggest otherwise, but “these self-taught readers are not smarter than average,” says Purcell-Gates. “Researchers have tested them and their IQs run the gamut.”
If your child shows no such inclination, Purcell-Gates believes that pushing early literacy on him is more apt to do harm than good. “The sense of pressure and tension will get transmitted to the child. He’ll either decide he wants nothing to do with it or that he’s unable to do it, so he can’t be very smart.”
Parental anxiety comes through loud and clear in this post on an early-learning discussion board from a parent of a 17-month-old girl: She is at the top of her class, but sadly she can’t read yet. How should I make her interested in flashcards? Should I start later or just keep trying? I’m worried — please advise.
Instead of interesting her daughter in flashcards, this parent would do better to “read lots of storybooks to the child, expose her to adults who are reading and writing, and playfully point out words on cereal boxes without trying to ‘teach’ the words,” says Purcell-Gates. “You prepare a young child for literacy not by ‘getting her to read,’ but by immersing her in the world of print.”
Kean Li Wong, owner of BrillKids, a Hong Kong company that sells early learning materials to parents worldwide, insists that ultra-early learning need not create tension. “The critics assume that teaching involves coercion and that it takes up the majority of a child’s time,” he says, and asserts that “the learning is joyful and pressure-free.”
BrillKids offers two software programs designed for babies, Little Readers and Little Math. Wong maintains such instruction can have long-lasting benefits. “Studies show that children who start ahead, stay ahead,” he says.
Ah yes, the famous “studies show,” says Purcell-Gates. “Commercial enterprises use the term rather liberally, and it means very little,” she maintains. “I would love for the public to learn some basic things about scientific research so that they could interpret such claims.”
For example, the BrillKids website touts a study of a precocious self-taught reader who continued to “stay ahead” in all aspects of literacy. What isn’t mentioned on the site is that, according to the Gifted Child Quarterly journal, this study’s findings contrasted sharply with those of similar studies.
Some savvy consumers have begun to call companies to task for making spurious educational claims. In 2006, a Boston-based advocacy group lodged a formal complaint against Disney-owned Baby Einstein, alleging the company made deceptive claims that it can give babies a leg up in learning; following the allegation, Baby Einstein stopped billing its videos as educational and now offers cash refunds to US consumers for some Baby Einstein DVDs.
The meaning of milestones
If there’s little use in pushing your child ahead of her peers, surely there’s reason for pride if she leads the pack on her own? It turns out even this assumption may be misguided. “Milestones are more useful for figuring out if your child lags behind than if she’s ‘ahead,’” says Purcell-Gates. “If a child has a significant delay in, say, speech acquisition (I’m talking about many months behind schedule), it tells the parent there may be a problem that needs fixing.”
For instance, most kids begin to learn pre-reading skills in kindergarten. If certain language-based aptitudes, like rhyming or pronouncing a word without its first letter, don’t develop at this time, “a child is at risk for dyslexia and we would start an intervention,” says Linda Siegel, a professor of educational and counselling psychology at the University of British Columbia. If a child develops these skills ahead of schedule, however, “it doesn’t follow that he’ll be a better reader as an adult.”
Striking a balance
So where does all this leave the parent of a young child? “There’s no doubt that providing early stimulation to a child is beneficial, but it has to be age-appropriate,” says Siegel. “Games involving sounds, rhymes, word and letter substitutions, and non-words all develop the critical pre-reading skill of phonemic awareness,” she says. “That’s why Dr. Seuss books appeal to young children so much. They’re silly, full of rhythm, and build on sounds the kids already know.”
Age-appropriate stimulation also sets science and math learning in motion. “Manipulating objects in three-dimensional space provides countless lessons about basic physics,” notes Siegel. It can also foster number sense. “To a small child, the concept ‘three’ starts to click when she lines up three blocks.” Adds Purcell-Gates: “We’ve known since [pioneering cognitive theorist] Piaget that abstract concepts cannot be the first step in child learning. Sure, you can get kids to parrot back anything, but it doesn’t mean they’ve absorbed the underlying principles.”
Is there a way to make kids smarter then? Most experts say it’s the wrong question to ask. According to Im, a more useful question might be: How do I help my child reach her potential? “What we know is that children with a nurturing and interactive early environment develop better, both physically and mentally,” she says. Purcell-Gates concurs: “The research consistently shows that early learning blossoms in the soil of human interaction — the more nurturing the interaction, the higher-quality the learning.” That’s why a cartoon character is no substitute for a loving, engaged parent.
So pull up a chair, says Pelletier, and “sing to your baby, rather than playing Mozart.”
| 0.7993
|
FineWeb
|
Online tools, access to experimental data and other services provided through "cyber-infrastructure" are helping to accelerate progress in earthquake engineering and science, according to a new study.
The research comes out of the Network for Earthquake Engineering Simulation (NEES), based at Purdue University, which includes 14 laboratories for earthquake engineering and tsunami research, all tied together with cyber-infrastructure to provide information technology for the network.
Central to this is the NEEShub, a web-based gateway housing experimental results which are accessible for reuse by researchers, practitioners and educational communities. It contains more than 1.6 million project files stored in over 398,000 project directories and has been shown to have at least 65,000 users over the past year.
“It’s a one-stop shopping site for the earthquake-engineering community to access really valuable intellectual contributions as well as experimental data,” said Thomas Hacker, an associate professor in the Department of Computer and Information Technology at Purdue and co-leader of information technology for NEES. “It provides critical information technology services in support of earthquake engineering research and helps to accelerate science and engineering progress in a substantial way.”
A major element of the NEES cyber-infrastructure is a “project warehouse” that provides a place for researchers to upload project data, documents, papers and dissertations containing important experimental knowledge for the NEES community to access.
“A key factor in our efforts is the very strong involvement of experts in earthquake engineering and civil engineering in every aspect of our IT,” Hacker said.
According to Hacker, a cyber-infrastructure effort needs to address both the technology and social elements in order to be successful.
“The technological elements include high-speed networks, laptops, servers and software,” he said. “The sociology includes the software-development process, the way we gather and prioritize user requirements and needs and our work with user communities.”
The project warehouse and NEEShub collects “metadata,” or descriptive information about research needed to ensure that the information can be accessed in the future.
“Say you have an experiment with sensors over a structure to collect data like voltages over time or force displacements over time,” said Dr. Rudi Eigenmann of NEEShub. “What’s important for context is not only the data collected, but from which sensor, when the experiment was conducted, where the sensor was placed on the structure. When someone comes along later to reuse the information they need the metadata.”
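As a purely illustrative sketch (these field names are hypothetical, not the actual NEEShub schema), a metadata record for a single sensor channel might pair the raw measurements with the contextual details Eigenmann describes:
// Hypothetical metadata record accompanying one sensor's data file.
// Field names are illustrative only and do not reflect the real NEEShub schema.
const sensorRecord = {
  experiment: "Shake-table test, three-story steel frame", // descriptive title
  conductedOn: "2012-05-14T09:30:00Z",                      // when the test was run
  sensor: {
    id: "ACC-07",
    type: "accelerometer",
    units: "g",
    location: "2nd floor, NE column, x-direction",          // placement on the structure
  },
  samplingRateHz: 200,
  dataFile: "acc07_run3.csv",                               // the raw time series itself
};
console.log(JSON.stringify(sensorRecord, null, 2));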
The resources are curated, meaning the data is organized in a fashion that ensures it hasn’t been modified and remains valid for reference in the future.
“We take extra steps to ensure the long-term integrity of the data,” Hacker said.
To help quantify the impact on research, projects are ranked by how many times they are downloaded. One project alone has had 3.3 million files downloaded.
The site also has a DOI, or digital object identifier, for each project.
“It’s like a permanent identifier that goes with the data set,” Hacker said. “It gives you a permanent link to the data.”
NEES researchers will continue to study the impact of cyber-infrastructure on engineering and scientific progress.
“The use and adoption of cyber-infrastructure by a community is a process,” Hacker said. “At the beginning of the process we can measure the number of visitors and people accessing information. The ultimate impact of the cyber-infrastructure will be reflected in outcomes such as the number of publications that have benefited from using the cyber-infrastructure. It takes several years to follow that process and we are in the middle of that right now, but evidence points to a significant impact.”
| 0.7135
|
FineWeb
|
The Barilla Center for Food and Nutrition in Parma, Italy has proposed a “double pyramid” to help guide food choices in an increasingly interconnected world where the impacts of culture, tradition, and family on food consumption patterns are waning.
One pyramid is driven by the nutrient content of foods relative to human needs, and reflects the contemporary USDA food pyramid. The second pyramid reflects life-cycle environmental impacts in terms of land, water, and greenhouse gas emissions.
The Barilla Center concludes that:
“…those foods with higher recommended consumption levels, are also those with lower environmental impact. Contrarily, those foods with lower recommended consumption levels are also those with higher environmental impacts.” (Page 8).
Foods that should be consumed the most, for both nutritional and environmental reasons, include fruits and vegetables; bread, pasta, and rice; legumes; and olive oil. Those that should be consumed less, because of low nutritional value and high environmental impacts, include sweets, red meat, cheese, and white meat.
The free, 150-page report by the Barilla Center, entitled “Double Pyramid: healthy food for people, sustainable food for the planet,” is beautifully laid out and contains full details on the methodology and data sources used to construct the two pyramids.
| 0.5427
|
FineWeb
|