Thursday, October 31, 2019

Stress reduction techniques Research Paper Example | Topics and Well Written Essays - 750 words

Stress reduction techniques - Research Paper Example

Distress is a negative type of stress that can be short term or long term. This type of stress is caused by frequent undesired changes. Hyperstress occurs when a person is "pushed beyond what he or she can handle" (National Center). Hypostress occurs when a person is not pushed at all; boredom is an example of hypostress. These types of stress can cause direct health problems, such as a weakened immune system, and indirect health problems, such as leading a person to alcohol or drug abuse (Smyth and Filipkowski 272).

There are many different ways that people cope with these types of stress. According to Band and Weisz, there are three main ways of coping with stress: primary control coping, secondary control coping, and relinquished control. With primary control coping, the individual attempts to change the circumstance that is causing the stress. With secondary control coping, the individual attempts to adjust to the stressful circumstance. With relinquished control, the individual neither attempts to change the circumstance nor to adjust to it; the individual instead tries to ignore the problem. This paper will cover five popular techniques for reducing stress.

One of the most popular techniques for reducing stress is meditation. This exercise is usually used with secondary control or relinquished control coping. There are three basic types of meditation: mantra meditation, sitting meditation, and breath-counting meditation (Davis, Eshelman and McKay). In mantra meditation, the individual settles into a comfortable position and repeats a special word or phrase to clear the mind of other thoughts and induce relaxation. Sitting meditation is the simplest type: the individual settles into a comfortable sitting position, then focuses on his or her breathing to induce relaxation. The last type is breath-counting meditation.
This is similar to the sitting type, except the individual counts to a specified number, usually 5 or 10, for

Tuesday, October 29, 2019

Sigmund Freud Essay Example | Topics and Well Written Essays - 1000 words - 3

Sigmund Freud - Essay Example

(Cherry, 2014) Freud's theory states that all instinctive energy is produced by the libido. Freud proposed that our mental states were affected by two competing forces: cathexis and anticathexis. Cathexis was described as an investment of mental energy in a person, thought or object. Through anticathexis the ego prevents the id from performing actions that are not socially acceptable. In addition, Freud believed that human behaviour was motivated by two instincts: life and death. The life instinct is connected to basic needs such as survival, while the death instinct is related to self-destructive behaviour.

In the basic structure of personality, the mind is organized in two ways: consciousness and unconsciousness. The conscious mind includes all those things that we are aware of. The unconscious mind consists of things like wishes, desires and memories; we are not aware of these, yet they continue to have an influence on our mind. Freud compared the human mind to an iceberg: the visible tip represents the conscious mind, and the rest represents the unconscious. Freud also divides the mind into three modules: the id, the ego and the superego.

The stages of development state that as children grow they pass through several psychosexual stages. At each stage the libido focuses on a different body part. If, however, there is a problem at one of the stages, the process of development might get stuck there, producing an obsession with something related to that stage. (Cherry, 2014)

In addition to his sweeping theories of the human mind, Sigmund Freud left his imprint on various people who turned out to be some of psychology's greatest researchers. Some of the well-known names are Anna Freud, Alfred Adler, Carl Jung and Erik Erikson (Cherry, 2014). However, he was confronted by Otto Rank,

Sunday, October 27, 2019

The Effect of Enzyme Concentration on Reaction Rate

The Effect of Enzyme Concentration on Reaction Rate: determination of the effect of enzyme concentration on catalysis using starch and amylase.

INTRODUCTION

Enzymes are catalytic proteins which increase the rate of a chemical reaction without being altered in the process of that reaction.[1] A substrate is the substance an enzyme acts upon. No permanent bond is formed between the enzyme and the substrate in the reaction, so the enzyme returns to its original shape and can be used again.[2] An enzyme binds to a substrate via the active site, forming an enzyme-substrate complex. Enzymes are very specific in their reaction and in the substrate they bind. An enzyme functions correctly when the shape of the substrate matches the enzyme's active site, and its functioning depends on its three-dimensional structure. Enzymes catalyse reactions by lowering the activation energy, so that more molecules are activated and the reaction occurs more easily.[1][2]

In this experiment amylase is used to break down starch molecules: starch is the substrate and amylase is the enzyme. When amylase reacts with starch, the disaccharide maltose is released. As time passes there will be less starch and more sugar present, so when the mixture is added to iodine the blue/black colour will fade to a light yellow shade.[4]

The concentration of the enzyme is important in a chemical reaction, as the enzyme is needed to react with the substrate. Often a small amount of enzyme can consume a large amount of substrate. But as enzyme concentration increases, so does the availability of active sites, and these will convert substrate molecules into products.
What this basically says is that if the enzyme concentration is to be increased, there must be an excess of substrate present; in other words, the reaction must be independent of the substrate concentration.[3]

Apart from the concentrations of substrate and enzyme, other factors can also influence whether the enzyme functions at its optimum capacity. These include temperature, pH and inhibitors. A higher temperature allows more collisions to occur, so substrate binds to the enzyme's active site more frequently. Since enzymes work within a certain temperature range, activity declines once this range is exceeded and the enzyme is denatured. Each enzyme also has its own optimum pH, where it functions best. Pepsin, an enzyme found in the stomach, works best in acidic conditions. Some enzymes become denatured, and thus deactivated, when the pH goes up or down.

I predict that the rate of the reaction will increase as the enzyme concentration increases, and vice versa. The reaction will occur quickly once the enzyme is added, but it will slow down on descending to the last test. I also believe that only a few of the test tubes will produce a blue/black colour, since the starch present in the solution will be hydrolyzed.

Apparatus/Materials:
- Water
- Buffer solution (pH 6.8)
- 1% starch solution
- 1% amylase solution (saliva)
- Dropper
- 3 beakers
- 3 x 10 ml measuring cylinders
- 12 test tubes
- Test tube rack
- Timer

Method: Four test tubes were labeled A-D. 2 ml of water was measured and placed in test tube A, followed by 2 ml of amylase (saliva). Again 2 ml of water was measured and placed in a second test tube, test tube B, and to this 2 ml of the solution in test tube A was added. Another 2 ml of water was added to a third test tube, test tube C, and to this 2 ml of the solution from test tube B was added. A further 2 ml of water was added to test tube D, and to this 2 ml of solution from test tube C was added.
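The two-fold serial dilution just described (each tube receives 2 ml of water plus 2 ml of the previous tube's contents) determines the enzyme concentration in tubes A-D. A minimal sketch of the resulting concentrations, assuming tube A is a 50% saliva mixture; the `serial_dilution` helper is illustrative, not part of the lab:

```python
# Sketch of the two-fold serial dilution used to prepare tubes A-D.
# Tube A = 2 ml water + 2 ml saliva (a 50% amylase solution); each later
# tube mixes 2 ml water with 2 ml of the previous tube, halving the
# concentration at every step.

def serial_dilution(start_fraction, n_tubes):
    """Return the amylase fraction in each tube of a two-fold series."""
    fractions = []
    fraction = start_fraction
    for _ in range(n_tubes):
        fractions.append(fraction)
        fraction /= 2.0  # an equal volume of water halves the concentration
    return fractions

for label, f in zip("ABCD", serial_dilution(0.5, 4)):
    print(f"Tube {label}: {f * 100:.2f}% amylase solution")
# Tube A: 50.00% ... Tube D: 6.25%
```

Discarding 2 ml from tube D at the end only equalizes the volumes; it does not change these fractions.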
Two milliliters of solution from test tube D was discarded so that all tubes held equal amounts of solution. Forty drops of buffer solution were added to test tube A. Eight test tubes were collected and placed in a test tube rack, and two drops of iodine solution were placed into each using a dropper. To tube A, 0.5 ml of 1% starch solution was added. One drop of solution from tube A was immediately transferred to test tube #1 containing iodine solution, and the dropper was properly rinsed. After 1 minute, one drop of solution from tube A was added using the dropper to the second tube containing iodine, and the dropper was rinsed thoroughly. This was repeated for all the remaining test tubes. The contents of all eight iodine test tubes were then discarded, and the tubes thoroughly rinsed and dried for use in the next round of tests. Steps 6-11 were repeated for test tubes B, C and D.

RESULTS

Tube A (iodine tubes 1-8):
1. Dark brown solution with small amounts of blue/black grains; these were apparent 17 seconds after adding solution A
2. Dark brown grainy solution
3. Orange-brown solution with particles which were also orange-brown
4. Light orange-brown solution, no grainy particles present
5. Lighter orange-brown solution
6. Yellow-brown solution
7. Yellow-brown solution, lighter than tube no. 6
8. Light yellow-brown solution, exceptionally lighter than the others

Tube B (iodine tubes 1-8):
1. Blue/black with coarse particles; small traces (320 seconds)
2. Orange-brown solution
3. Light orange-brown solution with grains present
4. Orange-brown solution with tiny grains present
5. Orange-brown solution
6. Orange-brown solution
7. Light orange-brown solution
8. Light orange-brown solution

Tube C (iodine tubes 1-8):
1. Dark brown with small traces of black particles, fewer than with tube B (455 seconds)
2. Orange-brown solution
3. Orange-brown solution
4. Orange-brown solution
5. Dark orange-brown solution
6. Dark orange-brown solution
7. Very dark brown solution with a few grainy particles
8. Very dark brown solution with lots of grainy particles

Tube D (iodine tubes 1-8):
1. Dark brown solution with very small traces of black grains (560 seconds)
2. Dark orange-brown solution, no grainy particles present
3. Dark orange-brown solution
4. Orange-brown solution
5. Orange-brown solution
6. Yellow/orange-brown solution
7. Yellow-brown solution
8. Light yellow-brown solution

The graph shows how the concentration of the enzyme affects the overall rate of the reaction: a higher concentration of enzyme produces a faster reaction than a lower concentration. The graph also shows that as time proceeds the reaction rate drops significantly.

DISCUSSION:

This lab exercise demonstrated the ability of an enzyme to hydrolyze a substrate molecule. The enzyme used was amylase and the substrate was starch; the starch is what the amylase actually acts upon to give the end products, i.e. amylase breaks down starch (substrate --enzyme--> products). Enzyme concentration and substrate concentration play a vital role in enzymatic activity. The more enzyme available, the quicker the reaction will occur, until the substrate is all used up. More substrate will also mean quicker activity, until the enzyme is fully saturated and cannot continue increasing its activity.[1]

Based on the results obtained from tube A, a blue/black colouration was noted. This indicated that a significant amount of starch was present; iodine is an indicator for the presence of starch.
This same colour was noted for tubes B-D, but the traces of blue/black colour decreased from tube A to tube D. As the tests proceeded to the last tube, the colour of the solution in each set changed from a dark brown solution to light yellow, and in some cases to a light orange-brown solution. A reasonable explanation for this is that fewer enzyme molecules are present as you move from tube A to D, so less of the starch is broken down. When there is an insufficient amount of enzyme present, the reaction does not progress as quickly as it otherwise would, because the available active sites are all occupied. If the concentration of enzyme is increased, the reaction rate increases, because more active sites are unoccupied. However, once the enzyme is in excess, adding more does not increase the rate further; the rate reaches a point where it levels off.[2]

Another reason for the colour change is that after the amylase reacts with the starch, maltose, a disaccharide, is released. Less starch is present as time proceeds and more maltose is present; in turn, less starch is available to react with iodine, so the blue/black colour decreases.

The predictions made were moderately correct, since a lower concentration of enzyme produced a reaction that was slower and formed fewer products. Various factors could have affected the results of the lab and introduced some inaccuracy. These include temperature and pH: the enzyme perhaps would have functioned better in a certain temperature range rather than at normal room temperature.

CONCLUSION

Based on the results obtained from the experiment, it can be concluded that the concentration of an enzyme influences the rate of a chemical reaction. If enzyme concentration is decreased then the reaction rate will also decrease.
If there is sufficient enzyme to bind with the substrate then the reaction will proceed quickly; if there is insufficient enzyme present then the reaction will slow down.
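The conclusion can be sanity-checked numerically. If the time at which blue/black starch traces were recorded for tubes B-D (roughly 320 s, 455 s and 560 s) is taken as inversely related to reaction rate, then each two-fold dilution of the enzyme should lengthen that time. This mapping of times to concentrations, and the assumption that tube A started as a 50% saliva mixture, are interpretations added here, not statements from the lab itself:

```python
# Rough check: relative reaction rate estimated as 1 / (time at which
# starch traces were still detected), for the two-fold dilution series.
# Times are read from the results; concentrations assume tube A = 50%.

times_s = {"B": 320.0, "C": 455.0, "D": 560.0}   # seconds, from RESULTS
conc = {"B": 0.25, "C": 0.125, "D": 0.0625}      # fraction of saliva (assumed)

for tube in "BCD":
    rate = 1.0 / times_s[tube]  # relative rate, arbitrary units
    print(f"Tube {tube}: enzyme fraction {conc[tube]:.4f}, "
          f"relative rate {rate:.5f}")
```

Each halving of the enzyme concentration lowers the estimated rate, consistent with the conclusion, although the drop is not perfectly proportional, which is plausible given the experimental sources of error noted in the discussion.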

Friday, October 25, 2019

Legalization of All Drugs Essay -- Legalizing Drugs Narcotics Argument

Legalization of All Drugs

Legalize drugs! I know what you're thinking: are you crazy? The debate over the legalization of drugs continues to disturb the American public. Such an issue stirs up moral and religious beliefs, and challenges what many Americans believe. I ask all of you to please keep an open mind and hear me out on this very controversial subject. All of us have in some way or another been affected by drugs, whether through a family member or the economic burden on society. Americans often take at face value the assumption that drugs cause addiction, which leads to crime. This is true, but abundant evidence exists to support the view that legalizing illicit drugs can help solve the drug problem in America. There is no way to stop drug use; there are only two ways to combat the problem: continue as we have been, or legalize drugs. Legalization would help the United States reduce crime, increase revenue, relieve over-crowded prisons and decelerate the use of drugs in American society.

There is one fact society agrees on: drugs are everywhere in America. The so-called "War on Drugs" has taken over the streets, back alleys and suburbs of America. It has caused a problem that mirrors the Prohibition days of the 1920s and early 1930s. Alcohol prohibition failed, and the prohibition on drugs is not only doomed to fail but has already failed miserably. It has created more of a social cost than if there had never been a "War on Drugs". The anti-drug policies have created an underground drug trade, in which modern drug dealers have taken the place of the bootleggers of the Prohibition age. The fabled "War on Drugs" has not made even a dent in the problem, even though we arrest people and stuff them into prisons as fast as we can build them. If one thinks otherwise, just look in the newspapers and you will see that this war has failed miserably.
To understand why prohibitions are doomed to fail, one must look at the main reason: money. As long as there is someone out there who wants a product and is willing to pay any price for it, there will always be someone to supply the product, for the right price of course; we call him or her the "pusher". And that is where the problem lies: it does not matter if the product is legal or illegal, there is money to be made and someone is going to make ... ... the pusher. We are losing many people to a falsified war on drugs. There is no way to stop drug use; prohibition has created an underground trade, and people are dying for it every day. The legalization of drugs would help the United States raise revenue, relieve over-crowded prisons and reduce the drug problem that is present in our society. But in America today people will continue to die of drug-related crime because of the people who take the issues at face value. They do not look into the problem at hand; they only accept the notion that drugs are bad for you. But is it not true that alcohol and cigarettes are bad for you? The government leaves that choice up to you, whether to engage in those legal activities. If an American wants to do something that carries a health risk, is it not his or her choice to do so, as set forth by the Constitution? The reason this country is what it is today is freedom: freedom to do and choose anything we want, as long as it doesn't infringe on the rights of others. Should not we as Americans have the right to choose what we want to do? All I ask is that you please keep an open mind about this issue, do your own research, and then make your decision.

Thursday, October 24, 2019

A Short History of Nearly Everything Essay

A Short History of Nearly Everything is a popular science book by American author Bill Bryson that explains some areas of science, using a style of language which aims to be more accessible to the general public than many other books dedicated to the subject. It was one of the bestselling popular science books of 2005 in the UK, selling over 300,000 copies.[1] The book is a departure from Bryson's usual travel writing, instead describing general sciences such as chemistry, paleontology, astronomy, and particle physics. In it, he explores time from the Big Bang to the discovery of quantum mechanics, via evolution and geology. Bryson tells the story of science through the stories of the people who made the discoveries, such as Edwin Hubble, Isaac Newton, and Albert Einstein.

Background

Bill Bryson wrote this book because he was dissatisfied with his scientific knowledge, which was, by his own account, not much at all. He writes that science was a distant, unexplained subject at school. Textbooks and teachers alike did not ignite a passion for knowledge in him, mainly because they never delved into the whys, hows, and whens. "It was as if [the textbook writer] wanted to keep the good stuff secret by making all of it soberly unfathomable." —Bryson, on the state of science books used within his school.[2]

Contents

Bryson describes graphically and in layperson's terms the size of the universe, and that of atoms and subatomic particles. He then explores the history of geology and biology, and traces life from its first appearance to today's modern humans, placing emphasis on the development of modern Homo sapiens. Furthermore, he discusses the possibility of the Earth's being struck by a meteor, and reflects on human capabilities of spotting a meteor before it impacts the Earth, and the extensive damage that such an event would cause. He also focuses on some of the most destructive recent disasters of volcanic origin in the history of our planet, including Krakatoa and Yellowstone National Park.
A large part of the book is devoted to relating humorous stories about the scientists behind the research and discoveries and their sometimes eccentric behaviours. Bryson also discusses modern scientific views on human effects on the Earth's climate and the livelihood of other species, and the magnitude of natural disasters such as earthquakes, volcanoes, tsunamis, hurricanes, and the mass extinctions caused by some of these events. The book does, however, contain a number of factual errors and inaccuracies.[3]

An illustrated edition of the book was released in November 2005.[4] A few editions in audiobook form are also available, including an abridged version read by the author, and at least three unabridged versions.

Awards and reviews

The book received generally favourable reviews, with reviewers citing it as informative, well written and highly entertaining.[5][6][7] However, some felt that the contents might be uninteresting to an audience with prior knowledge of history or the sciences.[8] In 2004, the book won Bryson the prestigious Aventis Prize for best general science book.[9] Bryson later donated the £10,000 prize to the Great Ormond Street Hospital children's charity.[10] In 2005, the book won the EU Descartes Prize for science communication.[11] It was shortlisted for the Samuel Johnson Prize for the same year.

Unremitting scientific effort over the past 300 years has yielded an astonishing amount of information about the world we inhabit. By rights we ought to be very impressed and extremely interested. Unfortunately, many of us simply aren't. Far from attracting the best candidates, science is proving a less and less popular subject in schools. And, with a few notable exceptions, popular books on scientific topics are a rare bird in the bestseller lists. Bill Bryson, the travel-writing phenomenon, thinks he knows what has gone wrong.
The anaemic, lifeless prose of standard science textbooks, he argues, smothers at birth our innate curiosity about the natural world. Reading them is a chore rather than a voyage of discovery. Even books written by leading scientists, he complains, are too often clogged up with impenetrable jargon. Just like the alchemists of old, scientists have a regrettable tendency to "vaile their secrets with mistie speech". Science, John Keats sulked, "will clip an Angel's wings, / Conquer all mysteries by rule and line." Bryson turns this on its head by blaming the messenger rather than the message. Robbing nature of its mystery is what he thinks most science books do best. But, unlike Keats, he doesn't believe that this is at all necessary. We may be living in societies less ready to believe in magic, miracles or afterlives, but the sublime remains. Rather as Richard Dawkins has argued, Bryson insists that the results of scientific study can be wondrous, and very often are. The trick is to write about them in a way that makes them comprehensible without crushing nature's mystique.

Bryson provides a lesson in how it should be done. The prose is just as one would expect: energetic, quirky, familiar and humorous. Bryson's great skill is that of lightly holding the reader's hand throughout, building up such trust that topics as recondite as atomic weights, relativity and particle physics are shorn of their terrors. The amount of ground covered is truly impressive. From the furthest reaches of cosmology, we range through time and space until we are looking at the smallest particles. We explore our own planet and get to grips with the ideas, first of Newton and then of Einstein, that allow us to understand the laws that govern it. Then biology holds centre-stage, heralding the emergence of big-brained bipeds and Charles Darwin's singular notion as to how it all came about. Crucially, this hugely varied terrain is not presented as a series of discrete packages.
Bryson made his name writing travelogues, and that is what this is: a single, coherent journey, woven together by a master craftsman. The book's underlying strength lies in the fact that Bryson knows what it's like to find science dull or inscrutable. Unlike scientists who turn their hand to popular writing, he can claim to have spent the vast majority of his life to date knowing very little about how the universe works. Tutored by many of the leading scientists in each of the dozens of fields he covers, he has brought to the book some of the latest insights together with an amusingly gossipy tone. His technique was to keep going back to the experts until each in turn was happy, in effect, to sign off the account of their work he had put together. In short, he's done the hard work for us.

Bryson enlivens his accounts of difficult concepts with entertaining historical vignettes. We learn, for example, of the Victorian naturalist whose scientific endeavours included serving up mole and spider to his guests; and of the Norwegian palaeontologist who miscounted the number of fingers and toes on one of the most important fossil finds of recent history and wouldn't let anyone else have a look at it for more than 48 years. Bryson has called his book a history, and he has the modern historian's taste for telling it how it was. Scientists, like all tribes, have a predilection for foundation myths. But Bryson isn't afraid to let the cat out of the bag. The nonsense of Darwin's supposed "Eureka!" moment in the Galapagos, when he spotted variations in the size of finch beaks on different islands, is swiftly dealt with. As is the fanciful notion of palaeontologist Charles Doolittle Walcott chancing on the fossil-rich Burgess Shales after his horse slipped on a wet track. So much for clarity and local colour. What about romance? For Bryson this clearly lies in nature's infinitudes.
The sheer improbability of life, the incomprehensible vastness of the cosmos, the ineffable smallness of elementary particles, and the imponderable counter-intuitiveness of quantum mechanics. He tells us, for example, that every living cell contains as many working parts as a Boeing 777, and that prehistoric dragonflies, as big as ravens, flew among giant trees whose roots and trunks were covered with mosses 40 metres in height. It sounds very impressive. Not all readers will consider it sublime, but it's hard to imagine a better rough guide to science.

· John Waller is research fellow at the Wellcome Trust Centre for the History of Medicine and author of Fabulous Science: Fact and Fiction in the History of Scientific Discovery (OUP)

What has propelled this popular science book to the New York Times Best Seller list? The answer is simple: it is superbly written. Author Bill Bryson is not a scientist – far from it. He is a professional writer, and before researching his book was quite ignorant of science by his own admission. "I didn't know what a proton was, or a protein, didn't know a quark from a quasar, didn't understand how geologists could look at a layer of rock on a canyon wall and tell you how old it was, didn't know anything really," he tells us in the Introduction. But Bryson got curious about these and many other things: "Suddenly, I had a powerful, uncharacteristic urge to know something about these matters and to understand how people figure them out." All of us should be lucky enough to be so curious. Young children are. That's why they're called "little scientists." New to the world and without inhibitions, they relentlessly ask questions about it. And Bill Bryson's curiosity led him to some good questions too: "How does anybody know how much the Earth weighs or how old its rocks are or what really is way down there in the center? How can they [scientists] know how and when the Universe started and what it was like when it did?
How do they know what goes on inside an atom?" The Introduction also tells us that the greatest amazements for Bryson are how scientists worked out such things. His book is a direct result of addressing these issues.

A Short History of Nearly Everything serves a great purpose for those who know little about science. The deep questions may not always be explicitly presented, but many of the answers are. The reader gets to journey along the paths that led scientists to some amazing discoveries, all in an extremely simple and enjoyable book. The prose is extraordinarily well written, with lively, entertaining thoughts and many clever and witty lines. Consider, for example, Chapter 23, "The Richness of Being." It begins: "Here and there in the Natural History Museum in London, built into recesses along the underlit corridors or standing between glass cases of minerals and ostrich eggs and a century or so of other productive clutter, are secret doors – at least secret in the sense that there is nothing about them to attract the visitor's notice." This opening sentence really captures the atmosphere of a natural history museum. It is full of vivid descriptions and contains the cleverly constructed, paradoxical phrase "productive clutter." The next paragraph begins to make the point: "The Natural History Museum contains some seventy million objects from every realm of life and every corner of the planet, with another hundred thousand or so added to the collection each year, but it is really only behind the scenes that you get a sense of what a treasure house this is. In cupboards and cabinets and long rooms full of close-packed shelves are kept tens of thousands of pickled animals in bottles, millions of insects pinned to squares of card, drawers of shiny mollusks, bones of dinosaurs, skulls of early humans, endless folders of neatly pressed plants.
It is a little like wandering through Darwin's brain." And later: "We wandered through a confusion of departments where people sat at large tables doing intent, investigative things with arthropods and palm fronds and boxes of yellowed bones. Everywhere there was an air of unhurried thoroughness, of people being engaged in a gigantic endeavor that could never be completed and mustn't be rushed. In 1967, I had read, the museum issued its report on the John Murray Expedition, an Indian Ocean survey, forty-five years after the expedition had concluded. This is a world where things move at their own pace, including the tiny lift Fortey and I shared with a scholarly looking elderly man with whom Fortey chatted genially and familiarly as we proceeded upwards at about the rate that sediments are laid down." Often Bryson ends a paragraph with an amusing line. You find very few popular science books so well written; with the exception of Surely You're Joking, Mr. Feynman, it is hard to think of even one that is witty. Popular science writers should study this book.

Sometimes quoting writers rather than scientists and original sources, Bryson draws extensively from other books. For example, most of Chapter 21, whose focus is largely on the Burgess Shale fossils and the Cambrian explosion, is taken from Stephen Jay Gould's Wonderful Life. And much of the rest of Chapter 21 is based on works by Richard Fortey and Gould's other books. The author does not hide this: titles are cited in the text, chapter notes provide quotes from books, and there is a lengthy bibliography. Given that Bryson is not a scientist, it is surprising how few errors there are in A Short History of Nearly Everything.
Here are a couple that the staff at Jupiter Scientific uncovered: On what would happen if an asteroid struck Earth, Bryson writes, â€Å"Radiating outward at almost the speed of light would be the initial shock wave, sweeping everything before it.† In reality, the shock wave would travel only at about 10 kilometers per second, which, alt hough very fast, is considerably less than the speed of light of 300,000 kilometers per second. Shortly thereafter, one reads â€Å"Within an hour, a cloud of blackness would cover the planet . . . â€Å" It would take a few weeks for this to occur. The book gives the number of cells in the human body as ten-thousand trillion, but the best estimates are considerably less – about  50 trillion. Here’s how one might determine the number. A typical man and a typical cell in the human body respectively weigh 80 kilograms and 4 Ãâ€"10-9 grams. So there are about (80,000 grams per human)/(4 Ãâ€"10-9 grams per cell) = 2 Ãâ€"1013 cells per human, or twenty-trillion cells. By the way, since the number of microbes in or on the human body has been estimated to be one-hundred trillion, people probably have more foreign living organisms in them then cells! In the Chapter â€Å"The Mighty Atom†, it is written, â€Å"They [atoms] are also fantastically durable. Because they are so long lived, atoms really get around. Every atom you possess has almost certainly passed through several stars and been part of millions of organisms on its way to becoming you. We are each so atomically numerous and so vigorously recycled at death that a significant number of our atoms – up to a billion for each us, it has been suggested – probably once belonged to Shakespeare.† Most of this paragraph is correct, but because atoms are stripped of there electrons in stars, Bryson should have said, â€Å". . . the nuclei of every atom you possess has most likely passed through several stars . . . 
One might be shocked that each of the 6 billion or so humans on Earth has so many of Shakespeare's atoms in them. However, Jupiter Scientific has done an analysis of this problem, and the figure in Bryson's book is probably low: It is likely that each of us has about 200 billion atoms that were once in Shakespeare's body. Bryson also exaggerates the portrayals of some scientists: Ernest Rutherford is said to be an overpowering force, Fred Hoyle a complete weirdo, Fritz Zwicky an utterly abrasive astronomer, and Newton a total paranoiac. Surely the descriptions of these and other scientists are distorted.

Here are some examples of witty lines that finish paragraphs: The concluding remarks on Big Bang Nucleosynthesis go: "In three minutes, 98 percent of all the matter there is or will ever be has been produced. We have a universe. It is a place of the most wondrous and gratifying possibility, and beautiful, too. And it was all done in about the time it takes to make a sandwich." On the Superconducting Supercollider, the huge particle accelerator that was to be built in Texas, Bill Bryson notes, "In perhaps the finest example in history of pouring money into a hole in the ground, Congress spent $2 billion on the project, then canceled it in 1993 after fourteen miles of tunnel had been dug. So Texas now boasts the most expensive hole in the universe." Chapter 16 discusses some of the health benefits of certain elements. For example, cobalt is necessary for the production of vitamin B12, and a minute amount of sodium is good for your nerves. Bryson ends one paragraph with "Zinc – bless it – oxidizes alcohol." (Zinc plays an important role in allowing alcohol to be digested.)
On Earth's atmosphere, the author notes that the troposphere, that part of the lower atmosphere that contains the air we breathe, is between 6 and 10 miles thick. He concludes, "There really isn't much between you and oblivion." In talking about the possibility of a sizeable asteroid striking Earth, Bryson at one point writes, "As if to underline just how un-novel the idea had become by this time, in 1979, a Hollywood studio actually produced a movie called Meteor ('It's five miles wide . . . It's coming at 30,000 m.p.h. – and there's no place to hide!') starring Henry Fonda, Natalie Wood, Karl Malden, and a very large rock."

From a scientific point of view, most topics are treated superficially. This renders the book of little interest to a scientist but has certain advantages for the layperson. In some cases, emphasis is not given to the most important issue. Bryson simply lacks the insight and judgment of a trained scientist. Chapter One on the Big Bang is particularly difficult for the author. There is too much discussion of inflation and of the many-universe theory. Inflation, the idea that space underwent a tremendous stretching a tiny fraction of a second after "the beginning", is consistent with astronomical observations and theoretically attractive, but it has no direct confirming evidence yet. The multi-universe theory, which proposes that our universe is only one of many and disconnected from the others, is complete speculation. On the other hand, Bryson neglects events that have been observationally established. Big Bang Nucleosynthesis, in which the nuclei of the three lightest elements were made, is glossed over in one paragraph. Recombination, the process of electrons combining with nuclei to form atoms, is not covered – an unfortunate omission because it is the source of the cosmic microwave background radiation (when nuclei capture electrons, radiation is given off).
Bryson simply refers to the cosmic microwave background radiation as something "left over from the Big Bang", a description lacking true insight. As another example of misplaced emphasis, much of the chapter entitled "Welcome to the Solar System" is on Pluto and its discovery and on how school charts poorly convey the vast distances between planets. Although the Sun is not even treated, Bryson ends the discussion with "So that's your solar system."

Here is another example in which Bryson's lack of scientific training hurts the content of the book. In Chapter 27, entitled "Ice Time", he discusses the "Snowball Earth" as though it happened with certainty. It is, however, a very controversial proposal in which the entire planet was engulfed in ice at the end of the Proterozoic Era. The book says, "Temperatures plunged by as much as 80 degrees Fahrenheit. The entire surface of the planet may have frozen solid, with ocean ice up to a half mile thick at high latitudes and tens of yards thick even in the tropics." While it is true that this period was the most severe ice age ever to transpire on Earth, it is unlikely that the weather became so cold as to create the conditions described in the above quote. The chapter on hominid development then does the opposite by presenting the situation as highly unknown and debatable. It is true that the fossil record for the transition from apes to Homo sapiens is quite fragmentary and that anthropologists are divided over certain important issues, such as how to draw the lines between species to create the family tree, how Homo sapiens spread over the globe, and what caused brain size to increase. However, the overall pattern of hominid evolution is understood.
The reader gets to journey along the paths that led scientists to some amazing discoveries – all this in an extremely simple and enjoyable book.

Bryson has a nice way of summarizing atoms: "The way it was explained to me is that protons give an atom its identity, electrons its personality." The number of protons in the nucleus of an atom, also known as the atomic number, determines the element type. Hydrogen has one proton, helium two, lithium three, and so on. The electrons of an atom, or more precisely the outermost or valence electrons, determine how the atom binds to other atoms. The binding properties of an atom determine how it behaves chemically. Every important topic in A Short History of Nearly Everything can be found in Jupiter Scientific's book The Bible According to Einstein, which presents science in the language and format of the Bible. Jupiter Scientific has made available online many sections of this book.

This review, which has been produced by Ian Johnston of Malaspina University-College, is in the public domain and may be used by anyone, in whole or in part, without permission and without charge, provided the source is acknowledged. Released October 2004. For comments or questions please contact Ian Johnston.

A Short History of Nearly Everything

The first thing one notices about a new Bill Bryson book in recent years is the disproportionately large size of the author's name on the cover – bigger than the title by a few orders of magnitude. That's appropriate, I suppose, for an author who has emerged as North America's most popular writer of non-fiction, with legions of fans around the world, perhaps even something of a cult figure, who can sell anything on the strength of his name alone. Bryson's recently published book, A Short History of Nearly Everything, is certainly a departure from what he has written so far. It's a bold and ambitious attempt to tell the story of our earth and of everything on it.
Initially motivated by the most admirable of scientific feelings, intense curiosity about something he admits he knew virtually nothing about, Bryson spent three years immersing himself in scientific literature, talking to working scientists, and travelling to places where science is carried on, so that he might "know a little about these matters and . . . understand how people figured them out" and then produce a book which makes it "possible to understand and appreciate – marvel at, enjoy even – the wonder and accomplishments of science at a level that isn't too technical or demanding, but isn't entirely superficial either." The result is a big volume recapitulating the greatest story ever told, from the beginnings of the universe, to the physical history of the Earth, to the development and evolution of life here – an attempt to provide, as the title indicates, an all-encompassing and continuous narrative, crammed with information on everything from particle physics to plate tectonics, from cloud formations to bacteria.

For all the obvious natural clarity and organization within science, writing well about the subject is not as easy as it may appear. It demands that the writer select an audience and then deliver what he or she has to say in a style appropriate to that readership, in the process risking the loss of other potential readers. Bryson has clearly thought about this point and introduces into writing about science a style very different from, say, the brisk omniscience of Isaac Asimov, the trenchant polemics of Richard Dawkins, the engaged contextual scholarship of Stephen Jay Gould, or the leisured and fascinating historical excursions of Simon Winchester (to cite some recent masters of the genre).
He brings to bear on science his impressive talents as a folksy, amusing, self-deprecating spinner of yarns, assuming considerable ignorance in his readers and inviting them to share his newly discovered excitement at all the things he has learned, obviously trying with an atmosphere of cozy intimacy and friendship to ease any fears they may bring to a book about so many unfamiliar things. This feature will almost certainly irritate a great many people who already know a good deal about science (who may feel they are being patronized) and charm many of those who do not. The information is presented here in an often off-beat and amusing and certainly non-intimidating way. Bryson sticks to his resolve not to confront the reader with numbers and equations and much complex terminology. So he relies heavily on familiar analogies to illustrate scientific theories, and these are extremely effective – inventive and illuminating. There is a wealth of interesting and frequently surprising facts about everything from mites to meteorites, conveyed with a continuing sense of wonder and enjoyment. Bryson delivers well on his promise to provide an account of what we know and (equally important to him) of the enormous amount we still do not know.

Bryson is not all that interested, however, in the second part of his announced intention, to explore how we know what we know. He pays little to no attention to science as a developing system of knowledge, to its philosophical underpinnings (hence, perhaps, the omission of any treatment of mathematics), or to the way in which certain achievements in science are important not merely for the "facts" they confirm or reveal but for the way in which they transform our understanding of what science is and how it should be carried out.
So for him "how we know" is simply a matter of accounting for those who came up with something that turned out to be of lasting value (no wonder he is somewhat baffled by Darwin's delay in publishing his theory of natural selection – the notion that Darwin's theory may have presented some important methodological difficulties of which Darwin was painfully aware does not seem nearly as important as Darwin's mysterious illness). Bryson is at his very best when he can anchor what he has to say on a particular place and on conversations with particular working scientists there. Here his considerable talents as a travel writer and storyteller take over, and the result is an often amusing, surprising, insightful, and always informative glimpse into science as a particular activity carried on by interesting individuals in all sorts of different places. The sections on Yellowstone Park, the Burgess Shale, and the Natural History Museum in London, for example, are exceptionally fine, mainly because we are put in imaginative touch with science in action, we hear directly from the scientists themselves, and our understanding of science is transformed from the knowledge of facts into a much fuller and more satisfying appreciation for a wonderfully human enterprise taking place all around us. Here Bryson provides us with a refreshingly new style in writing about science. Indeed, these passages are so striking in comparison with other parts of the book that one suspects that Bryson's imagination is far more stimulated by scientists at work than by the results their work produces. This impression is reinforced by Bryson's habit of plundering the history of science for amusing anecdotes about interesting characters, obviously something which he finds imaginatively exciting.
He's prepared to interrupt the flow of his main narrative in order to deliver a good story, and routinely moves into a new section with a narrative hook based on a memorable character, a dramatic clash of personalities, or an unexpected location. Many of these stories and characters will be familiar enough to people who know a bit about science already (e.g., the eccentricities of Henry Cavendish, William Buckland, or Robert FitzRoy, the arguments between Gould and Dawkins, the adventures of Watson and Crick, and so on), but Bryson handles these quick narrative passages so well that the familiar stories are still worth re-reading, and there are enough new nuggets to keep reminding the more knowledgeable readers just how fascinating the history of science can be. Not that Bryson is very much interested in linking developments in science to any continuing attention to historical context. He's happy enough to refer repeatedly to the context if there's a good yarn to be had – if not, he's ready to skim over it or ignore it altogether. This gives his account of developments a distinctly Whiggish flavour, a characteristic which will no doubt upset historians of science. At times, too, this habit of frequent quick raids into the past encourages a tendency to flippant snap judgments for the sake of a jest or some human drama. But given the audience Bryson is writing for and his desire to keep the narrative full of brio, these criticisms are easy enough to overlook. And speaking from my own limited experience in writing about the history of science, I can attest to the fact that once one begins scratching away at the lives of the scientists themselves, the impulse to draw on the wonderful range of the extraordinary characters one discovers is almost irresistible. Bryson's narrative gets into more serious difficulties, however, when he cannot write from his strengths, that is, when he cannot link what the subject demands to particular people and places.
Here the prose often tends to get bogged down in summaries of what he has been reading lately or inadequate condensations of subjects too complex for his rapid pace. Thus, for example, in the parts where his prose has to cope with systems of classification (of clouds, say, or bacteria, or early forms of life), the sense of excitement disappears and we are left to wade through a dense array of facts, without much sense of purpose. At such times, Bryson seems to sense the problem and often cranks up the "golly gee" element in his style in an attempt to inject some energy into his account, but without much success. And not surprisingly, the world of particle physics defeats his best attempts to render it familiar and comfortable to the reader, as Bryson concedes in an unexpectedly limp and apologetic admission: "Almost certainly this is an area that will see further developments of thought, and almost certainly these thoughts will again be beyond most of us." It's very curious that Bryson makes no attempt to assist the reader through such passages with any illustrative material, which would certainly have enabled him to convey organized information in a much clearer, more succinct, and less tedious manner. Early on, he lays some of the blame for his ignorance about science on boring school textbooks, so perhaps his decision to eschew visual aids has something to do with his desire not to produce anything like a school text (although, as I recall, diagrams, charts, and photographs were often the most exciting things about such books). Or perhaps he's simply supremely confident that his prose is more than enough to carry the load. Whatever the reason, the cost of that decision is unnecessarily high. I suspect reactions to this book will vary widely. Bryson fans will, no doubt, be delighted to hear the master's voice again and will forgive the lapses in energy and imaginative excitement here and there in the story.
By contrast, many scientists and historians of science will find the tone and the treatment of the past not particularly to their liking. I'll value the book as a source of useful anecdotes and some excellent writing about scientists at work, but turn to less prolix and better organized accounts to enrich my understanding of our scientific knowledge of the world and its inhabitants. But then again, if my grandchildren in the next few years begin to display some real interest in learning about science, I'll certainly put this book in front of them.

Wednesday, October 23, 2019

Ethernet as a Network Topology

Ethernet is the most widely used network topology. You can choose between bus and star topologies, and coaxial, twisted-pair, or fiber optic cabling. But with the right connective equipment, multiple Ethernet-based LANs (local area networks) can be linked together no matter which topology and/or cabling system they use. In fact, with the right equipment and software, even Token Ring, AppleTalk, and wireless LANs can be connected to Ethernet.

The access method Ethernet uses is CSMA/CD (Carrier Sense Multiple Access with Collision Detection). In this method, multiple workstations access a transmission medium (Multiple Access) by listening until no signals are detected (Carrier Sense). Then they transmit and check to see if more than one signal is present (Collision Detection). Each station attempts to transmit when it "believes" the network is free. If there is a collision, each station attempts to retransmit after a preset delay, which is different for each workstation. Collision detection is an essential part of the CSMA/CD access method. Each transmitting workstation needs to be able to detect that simultaneous (and therefore data-corrupting) transmission has taken place. If a collision is detected, a "jam" signal is propagated to all nodes. Each station that detects the collision will wait some period of time and then try again.

The two possible topologies for Ethernet are bus and star. The bus is the simplest (and the traditional) topology. Standard Ethernet (10BASE5) and Thin Ethernet (10BASE2), both based on coaxial cable systems, use the bus. Twisted-Pair Ethernet (10BASE-T), based on unshielded twisted pair, and Fiberoptic Ethernet (FOIRL and 10BASE-FL), based on fiberoptic cable, use the star. In the following document we will try to explain what switched, Fast, and Gigabit Ethernet are and compare the three. LAN segments can be interconnected using bridges or routers.
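The CSMA/CD procedure described above – listen, transmit, detect a collision, jam, back off, retry – can be sketched in a few lines. This is a toy illustration, not real driver code: the `Medium` class stands in for the shared cable, and the backoff delay is only chosen, not actually slept. The backoff rule shown (a random wait of up to 2^k slot times, exponent capped at 10, giving up after 16 attempts) is standard Ethernet truncated binary exponential backoff.

```python
import random

class Medium:
    """Toy shared cable: reports a collision a set number of times, then clears."""
    def __init__(self, collisions=1):
        self.collisions = collisions
        self.delivered = []
    def busy(self):
        return False                      # pretend the cable is always quiet
    def transmit(self, frame):
        self.frame = frame
    def collision_detected(self):
        if self.collisions > 0:
            self.collisions -= 1
            return True
        self.delivered.append(self.frame)
        return False
    def jam(self):
        pass                              # real Ethernet propagates a jam signal here

def csma_cd_send(frame, medium, max_attempts=16):
    """Sketch of CSMA/CD: carrier sense, transmit, back off on collision."""
    for attempt in range(max_attempts):
        while medium.busy():              # Carrier Sense: wait for a quiet medium
            pass
        medium.transmit(frame)            # Multiple Access: any station may transmit
        if not medium.collision_detected():
            return True                   # Collision Detection: frame went out cleanly
        medium.jam()                      # tell every node a collision happened
        # Truncated binary exponential backoff: wait a random number of slot
        # times in [0, 2**k - 1]; the exponent is capped at 10.
        k = min(attempt + 1, 10)
        delay_slots = random.randrange(2 ** k)  # delay chosen, not slept, in this toy
    return False                          # give up after max_attempts collisions

medium = Medium(collisions=2)
print(csma_cd_send("frame-1", medium))    # True: the third attempt succeeds
```

Because each station draws its delay independently at random, two colliding stations are unlikely to retry at the same instant – which is exactly the "preset delay, different for each workstation" behavior described above.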
This works well when the traffic between segments is not high, but the interconnecting devices can become bottlenecks as the inter-segment traffic increases. Until recently, there were few ways to alleviate this problem. Now, however, a new class of interconnect products has emerged that can boost bandwidth on overburdened, traditional LANs while working with conventional cabling and adapters. These are known as LAN switches and are available for Ethernet, Token Ring, and FDDI.

Switching technology is increasing the efficiency and speed of networks. This technology is making current systems more powerful, while at the same time facilitating the migration to faster networks. Understanding this technology is important; only then can we design and implement switched networks from the ground up. Switching directs network traffic in a very efficient manner: it sends information directly from the port of origin to only its destination port. Switching increases network performance, enhances flexibility, and eases moves, adds, and changes. Switching establishes a direct line of communication between two ports and maintains multiple simultaneous links between various ports. It proficiently manages network traffic by reducing media sharing: traffic is contained to the segment for which it is destined, be it a server, power user, or workgroup. It is a cost-effective technique for increasing the overall network throughput and reducing congestion on a 10-Mbps network. Other than the addition of the switching hub, the Ethernet network remains the same: the same network interface cards, the same client software, the same LAN cabling.

There are three basic types of switches on the market at this time. They all perform the same basic function of dividing a large network into smaller sub-networks; however, the manner in which they work internally is different. The types are known as Store and Forward, Cut Through, and Hybrid.
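Whatever its internal type, the port-to-port delivery just described depends on the switch learning which address sits behind which port by watching the source address of each frame; frames to a not-yet-learned destination are flooded out every other port. A minimal sketch of that mechanism (the frame is reduced to bare source/destination identifiers here – this is not a real Ethernet frame format):

```python
class LearningSwitch:
    """Toy switch: learns source MACs and forwards to a single port when it can."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                     # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port       # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # known: one destination port only
        # Unknown destination: flood out every port except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame("aa:aa", "bb:bb", in_port=0))  # [1, 2, 3]: flood, bb unknown
print(sw.handle_frame("bb:bb", "aa:aa", in_port=2))  # [0]: aa was learned above
```

Once the table is populated, traffic flows only between the two ports involved – the reduced media sharing the paragraph above describes.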
A description of each type is given below.

A Store and Forward switch operates much as its name implies: first it stores each incoming frame in a buffer, checks it for errors, and, if the frame is good, forwards it to its destination port. A Cut Through switch operates differently: it begins forwarding the frame immediately upon receiving the Destination Address. A Hybrid switch is an attempt to get the best of both Store and Forward and Cut Through switches; it normally operates in Cut Through mode, but constantly monitors the rate at which invalid or damaged frames are forwarded.

Designing A Switched Ethernet Network

Designing a switched Ethernet network is actually a fairly straightforward process. The first step is to evaluate the traffic flow you expect each user or group of users to generate. Analysis of the network will most likely find that you have a large number of users who are not going to place a heavy load on the network, and a smaller number of users who will place a large load on it. We now group the undemanding users together on a hub and connect each hub to a switch port. Our more demanding users will usually be either directly connected to the switch or, if they are on hubs, fewer of them will be sharing each switch port than on the undemanding-user portion. One point which should be kept in mind regarding the design of a switched network is that traffic patterns vary by user and time. Therefore, just taking a "snapshot" of network usage patterns may lead to the wrong conclusions and result in a design that is not optimal. It is always advisable to monitor usage patterns over a period of several days to a week to decide how to allocate network bandwidth optimally. Also, in almost all cases, a process of trial and error may be required to fully optimize the design. Some practical guidelines:

· It is most important to get a switch that doesn't drop frames.
· Latency is a concern, but take it with a grain of salt; it will not make that much of a difference.
· Deciding between cut-through and store-and-forward depends on the application. Time-sensitive applications may need the former.
· Multimedia stations need dedicated switched ports.
· Most switch implementations consist of a switch with many stations (demand) and few servers (resources). It is best to keep a 1:1 ratio between demand and resources, or, as mentioned earlier, to increase the number of access pipes to the resource (i.e., multiple lines into one server).
· Baseline your network prior to installing switches to determine the percentage of bad frames that already exist on the network.
· RMON (Remote Monitoring) capability embedded in switch ports may be costly, but it may save time and money in the long run.
· Certain switches support a flow control mechanism known as "back pressure." This spoofs the collision detection circuitry into thinking there is a collision, shifting the sender to a back-off algorithm and throttling it back from transmitting any further data until the back-off process is complete. Switches with this feature need to be placed into the network carefully.

What is 100baseT and Why is It Important?

100baseT, also known as Fast Ethernet, is simply a new version of Ethernet that runs at 100 million bits per second, ten times the speed of the existing Ethernet standard. 100baseT is becoming very popular because networks need more bandwidth due to more users and to demanding applications like graphics and networked databases. In fact, for many applications, standard Ethernet is simply too slow. For this reason, most experts believe that 100baseT will eclipse Ethernet as the dominant standard for Local Area Networks (LANs) during the next few years. A major advantage of all variants of 100baseT is software compatibility with standard Ethernet.
This means that virtually all existing operating systems and application programs can take advantage of 100baseT capabilities without modification. One way Fast Ethernet helps network managers make incremental upgrades at relatively low cost is by supporting most wiring and cabling media. The 100-Mbit/s specification can run over the Category 3 and Category 5 wiring already in place, as well as over fiber optic cabling already installed. Fast Ethernet offers three media options: 100Base-T4 for half-duplex operation on four pairs of Category 3 or Category 5 UTP (unshielded twisted pair); 100Base-TX for half- or full-duplex operation on two pairs of data-grade Category 5 UTP or STP (shielded twisted pair); and 100Base-FX for half- or full-duplex transmission over fiber optic cable (the specification should be completed by year's end). As with other high-speed LAN technologies, Fast Ethernet operates most efficiently on higher-grade media, such as Category 5 cabling or fiber. For Category 3-based installations, the 100Base-T4 media option uses four pairs of Category 3 UTP cabling. Data is transmitted on three pairs of wires, utilizing standard 8B6T coding, which allows a lower signal frequency and decreases electromagnetic emissions. However, because the 100Base-T4 standard uses the three pairs of wires for both transmission and reception, a 100Base-T4 network cannot accommodate full-duplex operation, which requires simultaneous dedication of wire pairs to transmission and reception. Work is still in progress on 100Base-FX Fast Ethernet over fiber, but trials show it to be stable and capable of sustained 100-Mbit/s throughput at distances over 100 meters. Essentially, as a second means of transmitting data over fiber, 100Base-FX will be an alternative to FDDI. Moreover, because it will support full-duplex operation, 100Base-FX has the potential to become a significant backbone technology.
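The three media options above fit naturally into a small lookup table. The entries below simply restate the passage (cabling, wire pairs, and duplex support per option); nothing here goes beyond what the text says.

```python
# Fast Ethernet media options, restating the passage above.
# 100Base-T4 shares three wire pairs for TX and RX, so no full duplex.
FAST_ETHERNET_MEDIA = {
    "100Base-T4": {"cable": "Category 3 or 5 UTP",  "pairs": 4, "full_duplex": False},
    "100Base-TX": {"cable": "Category 5 UTP or STP", "pairs": 2, "full_duplex": True},
    "100Base-FX": {"cable": "fiber optic",           "pairs": 0, "full_duplex": True},
}

for name, spec in FAST_ETHERNET_MEDIA.items():
    duplex = "half- or full-duplex" if spec["full_duplex"] else "half-duplex only"
    print(f"{name}: {spec['cable']}, {duplex}")
```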
100BASE-T Fast Ethernet represents the best choice for customers interested in high-speed networking, for many reasons. There are 40 million 10 Mbps (megabit per second) Ethernet users in the world today, and 100BASE-T technology has evolved from this 10 Mbps world. By keeping the essential characteristics of the Ethernet technology (known as CSMA/CD) unchanged in the 100 Mbit world, customers and installers can benefit from the body of Ethernet expertise developed over the years. The Ethernet industry expects that 100BASE-T will offer ten times the performance for twice the price of 10BASE-T. This improvement is made possible by advances in integrated circuit chip technology: as chips get smaller, they run faster, use less energy, and are cheaper to produce. Early Ethernet controllers were made in 1.2 micron chip technology; state-of-the-art technology uses 0.45 micron chips. This represents an almost eight-fold reduction in chip size. 100BASE-T technology offers unparalleled ease of migration: you can decide how fast to upgrade, in what steps and stages, without massive "fork-lift" upgrades. Most 100BASE-T network cards will run as both 10BASE-T and 100BASE-T cards, so you can buy cards now and run them at 10BASE-T speeds; later, when you are ready to upgrade to 100BASE-T, you will not need to change your network cards. 100BASE-T is widely supported by many different companies, including networking, systems, semiconductor, computer, integrator, and research companies. Many of these companies have been supporting the industry effort through the Fast Ethernet Alliance. Wide support is essential for network users, ensuring a ready supply of interoperable products at competitive prices. The transmission systems of the 100BASE-T standard have high data integrity: it has been shown that if 100 million 100BASE-T networks were run at maximum speed, it would take over a billion times the age of the universe before there would be an undetected error.
These error rates are significantly better than for 10BASE-T, Token Ring, and FDDI. Recently, PC LAN adapter card manufacturers like 3Com and SMC have made very aggressive moves to further accelerate the adoption of 100baseT by pricing their 100baseTX products at only a slight premium compared to their standard Ethernet products. For example, a 3Com 100baseTX card is priced at $149, compared to $129 for their Ethernet card. Because virtually all 100baseTX cards also support 10baseT, this means that customers are being encouraged to buy the 100baseT capability even if they don't need it today. In other words, you can buy the 100baseTX card today and use it on your existing 10baseT network; when you upgrade your network to 100baseTX, you won't have to throw away your adapter cards. By all accounts, this strategy has been very successful.

Gigabit Ethernet is an extension of the highly successful 10 Mbps (10Base-T) Ethernet and 100 Mbps (100Base-T) Fast Ethernet standards for network connectivity. The IEEE has given approval to the Gigabit Ethernet project as the IEEE 802.3z Task Force. Gigabit Ethernet is fully compatible with the huge installed base of Ethernet and Fast Ethernet nodes. The original Ethernet specification was defined by the frame format and support for the CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol, full duplex, flow control, and management objects as defined by the IEEE 802.3 standard. Gigabit Ethernet will employ all of these specifications. In short, Gigabit Ethernet is the same Ethernet that managers already know and use, but 10 times faster than Fast Ethernet and 100 times faster than Ethernet. It also supports additional features that accommodate today's bandwidth-hungry applications and match the increasing power of the server and desktop. To support increasing bandwidth needs, Gigabit Ethernet incorporates enhancements that enable fast optical fiber connections at the physical layer of the network.
It provides a tenfold increase in MAC (Media Access Control) layer data rates to support video conferencing, complex imaging, and other data-intensive applications. Gigabit Ethernet's compatibility with Ethernet preserves investments in administrator expertise and support staff training, while taking advantage of user familiarity. There is no need to purchase additional protocol stacks or invest in new middleware. Just as 100 Mbps Fast Ethernet provided a low-cost, incremental migration from 10 Mbps Ethernet, Gigabit Ethernet will provide the next logical migration to 1000 Mbps bandwidth.

This section discusses the various topologies in which Gigabit Ethernet may be used. Gigabit Ethernet is essentially a "campus technology", that is, for use as a backbone in a campus-wide network. It will be used between routers, switches, and hubs. It can also be used to connect servers, server farms (a number of server machines bundled together), and powerful workstations. Essentially, four types of hardware are needed to upgrade an existing Ethernet/Fast Ethernet network to Gigabit Ethernet:

· Gigabit Ethernet Network Interface Cards (NICs)
· Aggregating switches that connect a number of Fast Ethernet segments to Gigabit Ethernet
· Gigabit Ethernet repeaters (or Buffered Distributors)

The five most likely upgrade scenarios are given below:

1. Upgrading server-switch connections. Most networks have centralized file servers and compute servers. A server gets requests from a large number of clients and therefore needs more bandwidth. Connecting servers to switches with Gigabit Ethernet will help achieve high-speed access to servers. This is perhaps the simplest way of taking advantage of Gigabit Ethernet.

2. Upgrading switch-switch connections. Another simple upgrade involves upgrading links between Fast Ethernet switches to Gigabit Ethernet links between 100/1000 Mbps switches.
3. Upgrading a Fast Ethernet backbone. A Fast Ethernet backbone switch aggregates multiple 10/100 Mbps switches. It can be upgraded to a Gigabit Ethernet switch which supports multiple 100/1000 Mbps switches as well as routers and hubs which have Gigabit Ethernet interfaces. Once the backbone has been upgraded, high-performance servers can be connected directly to the backbone. This will substantially increase throughput for applications which require high bandwidth. 4. Upgrading high-performance workstations. As workstations get more and more powerful, higher-bandwidth network connections are required for them. Current high-end PCs have buses which can pump out more than 1000 Mbps. Gigabit Ethernet can be used to connect such high-speed machines. Gigabit Ethernet will be an ideal solution for many of the networking challenges confronting MIS departments today. With businesses implementing more powerful technologies like super-fast servers and data-intensive applications such as video streaming, videoconferencing, or high-speed file backups, the new Gigabit Ethernet standard will go a long way toward adding significant bandwidth at reasonable costs. The following explains some of the key advantages Gigabit Ethernet will provide. Gigabit Ethernet will offer a dramatic increase (as much as a hundredfold) in pure bandwidth to help organizations meet the challenges of overburdened or growing network infrastructures. Gigabit throughput will greatly relieve pressures on LAN backbones while providing both the scalability and speed users need to run data-intensive applications productively. When gigabit data rates become available, firms will be able to greatly expedite large file transfers between servers and other devices. Mirroring the price and performance benefits that Fast Ethernet brought to Ethernet networking, Gigabit Ethernet will offer ten times greater performance than today's Fast Ethernet at two to three times the cost. 
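The aggregation upgrades described in the scenarios above imply oversubscription: many Fast Ethernet ports share one Gigabit uplink. A minimal sketch of that ratio, with port counts chosen purely for illustration (they are not figures from the article):

```python
def oversubscription_ratio(ports: int, port_mbps: int, uplink_mbps: int) -> float:
    """Ratio of total downstream port capacity to uplink capacity.

    A ratio above 1.0 means downstream demand can, in the worst case,
    exceed what the uplink can carry.
    """
    return (ports * port_mbps) / uplink_mbps

# Illustrative example: a 24-port 100 Mbps switch with one Gigabit uplink.
ratio = oversubscription_ratio(24, 100, 1000)
print(ratio)  # 2.4
```

In practice not all ports transmit at once, so a modest oversubscription ratio like this is usually acceptable for an aggregation switch.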
The working groups are selecting technologies, such as the Fibre Channel physical layer for fiber, with these specific cost targets in mind. Gigabit Ethernet will maintain the 802.3 and Ethernet standard frame format, as well as 802.3 managed object specifications. As a result, organizations can easily upgrade to gigabit speeds while preserving existing applications; operating systems; protocols such as IP, IPX, and AppleTalk; and network management platforms and tools. Managing Gigabit Ethernet networks upgraded from Fast Ethernet backbones will be simple and easy because the new technology requires no learning curve or training for MIS staffs. By offering backward compatibility with existing 10/100 Ethernet standards, Gigabit Ethernet will provide the same outstanding investment protection that Fast Ethernet offered. When upgrading to gigabit performance, companies will maintain existing wiring, operating systems, protocols, drivers, and desktop applications. No training is required for users or network managers, and network management tools and applications will remain intact. Administrators will be able to keep existing tried-and-tested hardware, software, and management practices while providing, with minimal risk and cost, the networking functionality and performance their organizations require. Gigabit Ethernet is the third-generation Ethernet technology offering a speed of 1000 Mbps. It is fully compatible with existing Ethernets, and promises to offer seamless migration to higher speeds. Existing networks will be able to upgrade their performance without having to change existing wiring, protocols or applications. Gigabit Ethernet is expected to give existing high-speed technologies such as ATM and FDDI a run for their money. The IEEE is working on a standard for Gigabit Ethernet, which is expected to be out by the beginning of 1998. A standard for using Gigabit Ethernet on twisted pair cable is expected by 1999.
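The tenfold speed steps from Ethernet to Fast Ethernet to Gigabit Ethernet translate directly into transfer times. A rough sketch, using an idealized raw link rate and an illustrative 1000 MB file size (overhead and contention are ignored):

```python
def transfer_seconds(file_mb: float, link_mbps: float) -> float:
    """Idealized transfer time: megabytes converted to megabits,
    divided by the raw link rate. Ignores protocol overhead."""
    return (file_mb * 8) / link_mbps

for name, mbps in [("Ethernet", 10), ("Fast Ethernet", 100), ("Gigabit Ethernet", 1000)]:
    print(f"{name}: {transfer_seconds(1000, mbps):.0f} s for a 1000 MB file")
# Ethernet: 800 s, Fast Ethernet: 80 s, Gigabit Ethernet: 8 s
```

Real throughput would be lower than these figures, but the ten-to-one ratios between the generations hold regardless of overhead.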

Tuesday, October 22, 2019

Theorist of Choice

Theorist of Choice In an effort to explain sociology based on how relevant it is in daily life, many sociologists today find the term sociological imagination, coined by C. Wright Mills, inevitable in their discussions. Through his work under a similar name, Wright Mills stands out as the most appealing sociological theorist to me. Therefore, as pointed out by Mills, sociological imagination gives me a deep insight into the nature of sociology. It goes further to shed light on how it directly connects with the lives of individuals in contemporary society. However, one might want to know what sociological imagination entails. Sociological imagination, in Mills' own words, is the capacity to shift from one perspective to another. In other words, it is the capacity to range from the most impersonal and remote transformations to the intimate features of the human self, and to see the relations between the two of them (Mills, 1959, p. 3). The ability to connect between the two, according to Mills, is the driving force behind sociological imagination. The key to this theory is the reality of the existence of public issues, as well as private troubles. Public issues originate from the society. They go down to individuals who take them as being a result of their personal failures rather than seeing them for what they ought to be. On the other hand, private troubles arise because of a personal character. For instance, in a society where jobs are hard to get, circumstances may force a person to accept that he is not working simply because he is lazy. This, however, may turn out to be a public issue when many people cannot find anywhere to work, and hence are forced to stay idle. 
Wright Mills, through the sociological imagination theory, emphasizes that sociology mostly focuses on the manner in which social institutions and forces shape the individual behaviors of people in the society. It shows how the affected people respond to the influence. By being able to see the bigger picture, and derive connections between public issues and private troubles, a person is more enlightened and aware of the happenings in his society. According to Brewer (2005, p. 134), people will always be interested if their personal problems can be addressed merely by solving other external factors. It is pertinent to note that, when addressing issues in the society, it is indispensable to do so with inspiration from Mills' sociological imagination to see the issue in totality. By so doing, one stands a better chance to address it comprehensively. It is only through the ability to see the larger picture, as put forth by Mills, that people can derive sociological solutions and explanations. Other theorists under the inspiration of Mills came up with explanations on how some things happen in the society (Vissing, 2011, para. 3). For instance, Mills' work formed a basis for theorists such as Emile Durkheim who came up with a theory to explain suicide in societies. The sociological imagination theory has, therefore, proven concrete since other scholars in the discipline view it as the only elaborate theory that brings forth reliable sociological explanations. In conclusion, a sociological theory should aim at giving details on sociological problems. By providing an avenue for one to relate issues in his/her private life with the happenings in the society, Mills' theory ensures that the real causes of problems in the society are analyzed. It provides accurate solutions to avoid future occurrence of such problems. Reference List Brewer, J. 
(2005). â€Å"The Public and the Private in C. Wright Mills Life and Work†.  Sociology, 39(4), 661-677. Mil ls, W. (1959). The Sociological Imagination. Oxford: Oxford University Press. Vissing, Y. (2011). Introduction to Sociology. San Diego, CA: Bridgepoint Education, Inc. Web.

Monday, October 21, 2019

Free Essays on Industrial Revolution's Effects

Social Studies 20 – Module 2 Section 1 Assignment: The Industrial Revolution and its Effects on the Modern World The world is better off today because of the Industrial Revolution. The Industrial Revolution not only changed manufacturing and technology, but also the way that people live their everyday lives. The Industrial Revolution started in England in the late 1700s. British people started to use machines to make textiles and steam engines to run machines. Sometime after that, locomotives were invented. By 1850 most Englishmen had left their farms and small towns and were laboring in industrial cities. Eventually the Industrial Revolution spread throughout Europe and to North America. The important changes of the Industrial Revolution were: 1. The invention of machines to do work previously performed with the use of hand tools. 2. The use of steam in place of muscles. 3. The invention of the factory system. These inventions helped people in many ways. The workweek prior to the Industrial Revolution was around 84 hours; today the average person works about forty hours a week. Also, the work is not as physically demanding today, at work or in the home. Today people purchase goods like clothing and food at stores, whereas prior to the revolution all these items had to be made in the home. Before the Industrial Revolution, people who worked outside or inside the home worked much harder and longer than we do today. For example, a mother who stayed at home during the day would have spent most of her time preparing meals and staple foods like bread and cheese, as well as making and cleaning clothing, and gathering wood for the fire. People's homes are more spacious today, with amenities like electricity and running hot and cold water. We also have entertainment and other luxury items, which make our everyday lives more enjoyable. Because of the Industrial Revolution, people today have more material items as well as more free tim... 

Sunday, October 20, 2019

The ADL Matrix, Gap Analysis, and the Directional Policy Matrix

The ADL Matrix, Gap Analysis, and the Directional Policy Matrix Continuation. Read the beginning of the article to see the full picture. Here are three lesser-known strategic planning tools that are primarily used for determining a large-scale competitive strategy for an organization or a strategic business unit. These particular tools are fairly simple environmental analysis methods, and like other better-known tools such as SWOT or PEST analysis, do not suggest actions the business should take to reach its objectives. They are best used as a first step in strategy planning, with other more complex tools such as Balanced Scorecards or Key Performance Indicators used to develop and carry out strategic objectives. The Arthur D. Little (ADL) Strategic Condition Matrix The Arthur D. Little Strategic Condition Matrix was developed by the well-known consulting firm of the same name in the 1970s and is a life cycle-based analysis similar to the Boston Matrix. Unlike the Boston Matrix, which considers a single dimension – product or SBU competitiveness – the ADL has two: competitive position and industry maturity. It was designed mainly for use in assessing SBUs in a large enterprise, but can be easily adapted for use as an analysis covering the entire company or smaller units. The ADL Matrix Competitive position is relatively easy to identify accurately if one thinks of it in terms of product and place: What does the company or SBU offer, and how extensive and diversified are the markets in which it can offer it? Product and place together define the business unit to be assessed. This does not, however, necessarily follow the organizational structure. 
For example, the sales division of an auto manufacturer provides a product in terms of the cars it sells, but also provides a product in terms of the marketing message supporting the sales effort, customer relations, and value-added components such as service warranties; thus, several organizational units, or parts of them, might make up an SBU for the purposes of strategic analysis with the ADL matrix. Industry maturity is fairly straightforward, and could describe not only an entire industry but a relevant segment of it; for example, our auto manufacturer might consider different vehicle classes such as sports cars, luxury sedans, and light trucks. Once the competitive position and industry maturity are determined, the SBU is assigned the appropriate place in the matrix, from where the company can begin to make strategic decisions. In some guides to the ADL, the 20 potential positions on the matrix are identified with specific generic strategies. In general, the positive strategies involving holding and growing SBUs increase as one moves from bottom to top and right to left across the matrix; the lower-right position representing a weak SBU in an aging market always suggests abandoning or otherwise divesting from the SBU. It is important, however, not to be too strictly bound by predetermined generic strategies. The actions and choices available to the organization depend on the organization’s circumstances and available resources, and may not match generic strategy prescriptions. The biggest weakness of the ADL is that it cannot account for uncertainty about the length of industry life cycles. In an organization’s current industry conditions, it can be difficult to foresee when those conditions might change, since the life cycle is not only affected by external forces but by the activities of competitors as well. 
Because effective planning requires a definite timeframe, a rapid change in the industry life cycle can make a chosen course of action obsolete and harm the company's competitive position. Gap Analysis Gap analysis is usually associated with marketing strategy planning, but it can be applied to other types of strategic planning. It is one of the simplest planning tools ever devised, which gives it some distinct advantages and disadvantages. The first step in a gap analysis is to select relevant, measurable indicators that will describe the "gap". The fewer the indicators chosen, the less complicated the subsequent analysis and plan development will be; examples of indicators might be gross revenues, profit margin, total sales, or production figures. The "gap" is the difference between the objectives and the current situation in terms of the selected indicators. Generally, the gap is visualized as a chart. The obvious question is, "Why would anyone want to conduct a gap analysis?" because the simplicity of the tool suggests it might not be of much use. As a practical tool, it really isn't. The steps the company needs to take are entirely dependent on the indicators it uses to measure the gap, and their underlying factors; at best, the gap analysis can only tell the company how far off the mark it is in reaching its objectives, not how to reach them. It does have some value, however, as a way to impose some structure on planning processes and give them a clear direction. For example, if the company decides net profit is the indicator that defines the gap, subsequent planning activity will be more effectively focused on factors that contribute to net profit. The Shell Directional Policy Matrix The Shell Directional Policy Matrix is a variation of the Boston Matrix, but is somewhat more detailed and provides clearer generic strategies for SBUs. 
It relies on two variables, the outlook for sector profitability and the company's or SBU's competitive capability, and is arranged in a three-by-three matrix.
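The gap computation described in the Gap Analysis section is simple enough to sketch directly. The indicator names and target values below are illustrative assumptions, not figures from the article:

```python
def gap(objective: float, current: float) -> float:
    """Gap = target value minus current value for a chosen indicator."""
    return objective - current

# Hypothetical indicators: (target, current) pairs for one planning period.
indicators = {
    "net_profit": (1_200_000, 950_000),
    "total_sales": (50_000, 41_500),
}

for name, (target, actual) in indicators.items():
    print(f"{name}: gap of {gap(target, actual)}")
```

As the article notes, this tells the company only how far off the mark it is on each indicator, not what actions would close the gap.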

Saturday, October 19, 2019

Throw away All Fears Except the Fear of God Personal Statement - 4

Throw away All Fears Except the Fear of God - Personal Statement Example I actually hated myself for this, but I just could not help being what I am, a silly fool perhaps to others; but for me, I am just doing my best to live up to God's expectations. In a span of twenty years, I helped my sister pay her debts; I rescued my brother, also from his debts; I made his children my scholars, one in high school and one in college taking up Nursing; I loaned two friends a total of $14,000.00 and have not been paid back to this day; I contributed to the weekly dialysis of my brother for almost two years; and many more dole-outs that, I should say, are countless. The bottom line is, my total debt had reached a staggering high of 50 thousand dollars, which I figured was already impossible to erase, considering I have no extra income and the value of the assets that I had acquired is not even close to 30 thousand dollars. My faith in the Good Lord Jesus Christ kept me going. Every night I still get a restful sleep, because I believe tomorrow is another day. The only process I have used to deal with the most difficult situation in my life was to throw away all my fears, except my fear of hurting my God. When we truly understand how much God loves us, what can we possibly be afraid of? For God has not given us a spirit of fear and timidity, but of power, love and self-discipline (2 Timothy 1:7, NLT). When I said I should not be afraid, I meant there must be solutions to all our problems. I had to throw away my fear of facing my problem. After having decided to throw away my fear of not being able to pay everyone, I resolved to change. I realized that I can always help people, if not financially, then in other ways, such as spending the time to listen to their worries and help them find solutions. I can still prove to my God that He can use me to bring about His Glory to everyone.  

Friday, October 18, 2019

Impressionism painting appearance Research Paper

Impressionism painting appearance - Research Paper Example Impressionism was a technique of emblematic art that was not essentially dependent on practical representations. At that time, scientific thinking was just starting to understand that what the eye perceived and what was understood by the brain did not match and that they were two diverse entities (Nineteenth Century French Art, 1819-1905: From Romanticism To Impressionism, Post-Impressionism, And Art Nouveau). The Impressionist artists wanted to capture visual light effects to communicate the passage of time, changes in weather and other changes in the environment. Impressionist artists relaxed their brushwork and blanched their palettes to incorporate pure primary colors. They deserted their old linear point of view and stayed away from the clearness of form that had in the past served to differentiate the most vital rudiments of a painting from the minor ones. It is mainly for this fact that numerous reviewers criticized Impressionists' works for their uncompleted look and, on the face of it, substandard quality. Impressionism takes note of the consequences of the immense mid-nineteenth century overhaul of the city of Paris spearheaded by the civic structural designer Georges-Eugène Haussmann that comprised Paris's freshly built railway lines and stations, wide streets that served in the place of the narrowly constructed pathways, and huge luxurious houses. Many times the Impressionist artists put more emphasis on public leisure features, particularly café sights and cabarets.

Assessment 2 (6 assignment into 1) Essay Example | Topics and Well Written Essays - 3000 words

Assessment 2 (6 assignment into 1) - Essay Example The formula for computing the surface area of pyramids shall be provided, with the teacher demonstrating how to substitute values in the formula. The students will be given exercises to practice computing the surface area of pyramids, which will be checked as a class at the end of the lesson. Students should already know the concept of solid shapes and their surface areas. They should be aware that there are formulae to be followed in computing the surface areas of various shapes and know how to substitute values and find missing values using the formula. The lateral area is surprisingly simple: just multiply the perimeter by the side length and divide by 2. This is because the sides are always triangles, and the triangle formula is base times height divided by 2. Using the formula: 1/2 × Perimeter × [Side Length] + [Base Area]. The teacher illustrates to the class how to use the formula and substitute the necessary values on the blackboard, then does a few exercises with some students she may call on. Later on, students are grouped according to their abilities and given various number problems to solve for the surface areas of various pyramids. Kozioff, et al (2000) contend that the teacher can more easily monitor the progress of the students when they work in smaller groups with more or less the same guidance needs. Killen (2003) emphasized the need to practice students' newfound skills, and in this particular case, it is the computation of the surface area of a pyramid following a prescribed formula. At the end of the session, everyone comes together to compare their answers. It will be necessary to call on students to demonstrate on the blackboard how they came up with their answers to check if they followed the correct procedure in using the formula. Should there be errors, the teacher throws the question to the class as to where it went wrong, but if students cannot figure it out, then she shows
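The surface-area formula in this lesson (1/2 × Perimeter × Side Length + Base Area, where "side length" is the slant length of the triangular faces) can be checked with a short script. The square-pyramid dimensions below are an illustrative example, not one taken from the lesson plan:

```python
def pyramid_surface_area(perimeter: float, slant_length: float, base_area: float) -> float:
    """Surface area of a pyramid: lateral area (1/2 * perimeter * slant
    length, since each face is a triangle) plus the base area."""
    return 0.5 * perimeter * slant_length + base_area

# Example: square pyramid with base side 6 (perimeter 24, base area 36)
# and slant length 5.
print(pyramid_surface_area(24, 5, 36))  # 96.0
```

Students could use a check like this to verify their blackboard answers: lateral area 1/2 × 24 × 5 = 60, plus base area 36, gives 96.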

WHAT ARE THE STRENGTHS AND WEAKNESS OF THE VIEW THAT WE ARE LIVING IN Essay

WHAT ARE THE STRENGTHS AND WEAKNESS OF THE VIEW THAT WE ARE LIVING IN A GLOBALISED WORLD - Essay Example Globalisation refers to the integration of national economies into an international economy through trade, capital flows, migration, spread of technology and many other factors contributing to it (Bhagwati, 2004). Globalisation is usually recognized as being caused by a combination of economic, socio-cultural, technological, political and biological factors. It can also refer to the dissemination of ideas, languages or popular culture between nations (Croucher, 2004). Living in a globalised world has its share of negative effects on the average citizen, and globalisation has been one of the most hotly-debated issues in international economics in the past years. One of the causes for this opposition to globalisation is the concern that globalisation has increased inequality and environmental degradation (Hopkins, 2004). Fears of inequality arise in situations in which companies take advantage of the cheap labour force in backward countries and use employees for their own needs without taking care of their working conditions. Also, as a result of the industrial nature of the factories that are responsible for manufacturing goods, the environment suffers damage to its land, its bodies of water (including rivers, lakes, oceans and seas) and its air as well (as poisonous materials are released into the air). Poorer countries suffer more disadvantages because of globalisation. As some countries try to protect their national markets, they sometimes subsidise their main export, which is agricultural goods. This lowers the poor farmers' crop prices in the poor countries compared to what they would have been if countries had not subsidised their goods (Hurst, n.d.). One other negative effect of globalisation in the economic field is the increase in child labour. 
The conditions in the poorer countries of the world along with the "enticements" offered by large corporations in them cause even children to go to work in order to help support their families. These children often work in sweatshops and in terrible conditions. The increases in

Thursday, October 17, 2019

Final Response Paper Essay Example | Topics and Well Written Essays - 750 words

Final Response Paper - Essay Example In college writing the readers target to encourage an interesting claim, evidence for the claim, as well as an analysis of the limits of and objections to the claim. The provision is different from those provided at the secondary level since the secondary level does not prompt utmost rationality and demands simplicity in conformation. The synthesis necessary in secondary school writing targets the dummies, a converse scenario to college writing (Sommers 382). Even though many discourses involve writing and reading, some do not involve the outlined practices (Gee 11). Academic writing entails the format, type, and the language. The writing requires the use of formal language throughout to abide by the conventional demands. Under such provisions, the writing discourages the use of short forms of words and abbreviations. The writing has to be in a specific format such as APA, MLA, Harvard or any other format (Hyland 96). Through the formation of the paper, it has to maintain a type such as an essay, coursework, or dissertation, amongst other types as well. The dominant discourse is however contrary to academic writing since it mostly promotes the passive voice and does not consider formats. Academic writing is different since it also involves instructions that guide all its aspects, including the themes to be written about (Sommers 376). The main controversy is the time of engagement in the discourses and the effectiveness of their impacts. There is a group of students considered "special" in this aspect who cannot be compared to those who get engaged in discourse at tender ages (Gee 14). Another trouble around teaching academic writing is the conflict between daily experiences in communication and the requirements of academic writing. The common language and communication tend to be informal and appear not to abide by the demands of academic writing.

Wednesday, October 16, 2019

Supplier Evaluation Processes Essay Example | Topics and Well Written Essays - 2250 words - 1

Supplier Evaluation Processes - Essay Example It is evidently clear from the discussion that, in order to improve the management of wider supply chains, academic writers have suggested a number of new and modified managerial practices and tools, which a multitude of practitioners are implementing. Further suggestions by numerous authors point out that previous performance measurements, which solely focused on internal factors, now need broad and drastic changes; otherwise they might limit the possibility of optimizing dyadic relations, or rather the supply chain of every organization. Ideally, it is imperative to investigate how performance information travels between the evaluating supplier and the evaluated buyer, and how information is shaped and reshaped in the evaluation process. Relying on longitudinal and multiple case researches as the methodology to obtain findings, this essay will bring out the practical implications, originality, and value of the evaluation of performance measurement in a supply chain. Several studies brought forward that systems aimed at addressing performance measurement outside legal company boundaries fall into three classifications: supply chain evaluation, buyer-supplier relationship evaluation, and supplier evaluation. Technical rationale, particularly as applied by econometrics, tends to dominate in cases where improved systems and measures align with the strategies set by an organization. Additionally, this dominance goes on to appear in other areas where the set systems align strategically with optimum performance measurement, as well as in areas where it results in improved performance, especially in the activities measured.


Tuesday, October 15, 2019

Causes of World War II Essay Example for Free

Causes of World War II Essay World War II was the biggest, deadliest, and most frightening war of all time, and it was obvious that it was coming. Hitler was taking over Germany. He was sentencing Jews to concentration camps. He was plotting to rid the world of Jews and eventually take over the world. War was coming and everyone knew it. Everyone wanted to do something to stop it, but it was no use. As Keith Eubank stated in The Origins of World War II, quoted in Document 9, "neither the people nor the government of Britain and France were conditioned to the idea of war." Britain knew war was coming, France knew war was coming, and even Germany knew war was coming. World War II was inevitable; many things built up to the eventual outcome: war. As stated in Document 5, British Prime Minister Neville Chamberlain explained why he favored peace even though he knew it would end in war. He said, "If we have to fight, it must be on larger issues than that. . . . I am a man of peace. . . . Yet if I were sure that any nation had made up its mind to dominate the world by fear of its force, I should feel that it must be resisted. . . . But war is a fearful thing." Hitler was trying to take over the world, and that is exactly what Chamberlain feared. It is also a reason why World War II began. In 1939, the world was plunged into World War II. Nobody wanted it to happen, but nobody could prevent it. Hitler was pursuing his dream of taking over the world while at the same time ridding it of Jews, conquering one country at a time with the help of Italy. As stated in Document 2, Haile Selassie, emperor of Ethiopia, asked the League of Nations for help during Italy's invasion of his country. The League of Nations' response was ineffective. Selassie then said these words: "God and history will remember your judgment. . . . It is us today. It will be you tomorrow." What he meant by that is that Hitler will not stop there.
He will keep pushing and pushing until he has what he wants; the League of Nations, too, will lose control to the Nazis. Different nations had different ways of handling the Nazis, though. There were two responses to the aggression: collective security and appeasement. Collective security is a system by which states attempt to prevent or stop wars; under such an arrangement, an aggressor against any one state is considered an aggressor against all other states, which act together to repel it. Appeasement was, in essence, giving Germany whatever it wanted in order to avoid war. Appeasement did not work: Hitler agreed to take the Sudetenland and promised to recognize Czechoslovakia's new boundary lines, but six months later he took over all of Czechoslovakia. As stated in Document 4, "There is to be no European war. . . . the price of the peace is. . . the ceding by Czechoslovakia of the Sudeten territory to Herr Hitler's Germany." Hitler was not a man of his word, because a very short time later he seized the rest of Czechoslovakia. World War II was undoubtedly the biggest and costliest war of all time. People were dying all the time. Hitler had enslaved the Jews in concentration camps. Everything was falling apart. If the U.S. hadn't stepped in to help, who knows where we would be right now: Europe could all be Germany, and there might be no Jewish people left. We are lucky, because this entire war could have gone a completely different way.