Saturday, August 31, 2019

Chess – the game for everyone

Chess is a board game for everyone. Chess is played on a square board of eight rows and eight columns. The colors of the sixty-four squares alternate and are referred to as light squares and dark squares. A light square is at the right-hand end of the rank nearest to each player, and the pieces are set out in the standard starting arrangement, with each queen on its own color. The pieces are divided, by convention, into white and black sets. The game starts with 2 kings, 2 queens, 4 rooks, 4 bishops, 4 knights and 16 pawns. Chess demands that you think about which way to go to win against the other side. In this essay I want to discuss the paths on the chessboard that players consider and calculate in order to win, and how, when we consider life in the universe, people do their best to obtain the same kinds of goals. Life is life. The game is special in that it is played by large numbers of ordinary and not-so-ordinary people. Chess may even provide unusually clear examples of these various aspects of life, because chess is an arena in which the tasks are entirely mental, where complete information is available to both players, and where their moves can be recorded accurately. In this sense, chess may even illuminate aspects of life. When you play chess, all moves are up to you, as in life, and you will find out your own limitations. A passionate claim without any evidence or argument can never support more than a statement of faith, but if it is too insistent it may betray a doubt about the real value of the game.

Chess is quite reflective of the dimensions. The board, a finite realm of two dimensions, is similar to a finite view of the universe. Chess has two basic types of movement on this field: the finite players and the infinite players. The finites are the king, the pawn and the knight, who move in single bounds of a pre-established length. The infinites are the queen, the bishop and the rook, who move in bounds of any length, theoretically able to escape the two-dimensional limits imposed by the board. In life, the queen is a good manager who always finds the best way to achieve her key targets, leading to checkmate. We'll now examine the directions in which they may move. A pawn is biased: it may only go straight ahead unless altered in course by capturing another piece at either of its forward diagonals. The pawn starts out with the option of a two-square move, as if running out into battle, but then continues at a single-square pace. In life, the pawn is the staff or employees who are loyal and dedicated to helping the manager achieve his or her goal. A rook moves infinitely either forwards or sideways; the bishop is similar in movement to the rook, but is offset by 45 degrees. The queen is a precise superimposition of the rook and the bishop. The king is a queen with a single-square limit, or, simply put, a ring around itself.

In fact, life is like a chessboard. People can justify their moves all they want, but they will be cornered and checkmated if they do not checkmate their opponent first. People live in a community and have to know how to exist in it. For example, to have a good life, an employee works hard and has a good relationship with colleagues. He or she has to follow the rules of a company as well as of a society, and he or she also desires to have a better position in the workplace. As a result of this, he or she is a good player. Chess also has rich symbolism which the imaginative may develop, and it has often been used by the authors of improving essays.
Chess moralities of this sort were abundant in the medieval era, but one feels that people are normally reading into chess the values they already possess. In life, there is no bad staff in the good manager's eyes; he or she understands and grasps each employee's abilities and assigns work accordingly. A slightly stronger claim is to say that chess is not just another part of life, but is a particularly worthy, rewarding or exemplary part. All chess players know its rewards, and its best players are the most enthusiastic, as in Tarrasch's famous quote: "Chess is a form of intellectual productiveness and intellectual productiveness is one of the greatest joys of human existence." Because chess presents complex but unambiguous problems, psychological researchers have been very interested in it. Chess is a key field for research in psychology, although chess players have not yet felt the benefit of many of its insights.

The game of chess is not merely an idle amusement. It teaches circumspection, which surveys the whole chessboard, or scene of action: the relations of the several pieces and their situations, the dangers they are respectively exposed to, and the several possibilities of their aiding each other. It also teaches caution, not to make our moves too hastily. This habit is best acquired by observing strictly the laws of the game. For example, if you touch a piece, you must move it somewhere; if you set it down, you must let it stand. It is therefore best that these rules should be observed throughout the game.

Look at chess, and you may find these truths about life: "The chess-board is the world, the pieces are the phenomena of the Universe, the rules of the game are what we call the laws of Nature, the player on the other side is hidden from us." Thomas Huxley (1825-1895). The statements of Thomas Huxley and the illustrations of life above are powerful weapons and techniques for conquering this game of life, so cunningly complex; yet my movement is centered, flowing and letting go. Without a doubt, it is my turn to move. I am the chess player, not the chess piece. I have myself as my sole opponent in this chess game of life. I am the sole barrier to my success if I do nothing.

Friday, August 30, 2019

Coso Risk Management Plan

COSO Risk Management Plan
LAW/531 Business Law
March 18, 2013
Nicole Harrison
COSO Learning Activity

Beasley, Hancock and Branson (2009) have mentioned that "Many senior executives and their organization's board of directors are working to strengthen risk oversight so that they are better informed about emerging risk exposures, particularly those impacting strategy" (p. 01). This statement clarifies that companies are looking for better ways to manage risk and are using techniques to help achieve this goal. The Committee of Sponsoring Organizations of the Treadway Commission (COSO) is an organization leading the way in providing frameworks and guidance on enterprise risk management, internal control and fraud deterrence, designed to improve organizational performance and governance and to reduce the extent of fraud (COSO, 2013). It is a joint initiative of five private sector organizations: the American Accounting Association, the American Institute of CPAs, Financial Executives International, the Association of Accountants and Financial Professionals in Business and the Institute of Internal Auditors. This paper has the objective of identifying recommendations about how it would be useful for an organization to adopt COSO as the structure for its own corporate compliance plan.

According to Steinberg (2011), "In recent years, to complement the use of key performance indicators, which focus primarily on past performance, more organizations have adopted forward-looking key risk indicators to further enhance risk management effectiveness" (p. 01). Corporations monitor their performance based on key performance indicators (KPIs) that provide a trend from a time in the past to date. This performance trend can be compared to others, such as competitors and general industry performance, to assess how the business is moving ahead. But that is not enough. Risk-management specialists and organizations like COSO suggest that corporations start looking at Key Risk Indicators (KRIs). Those indicators look to the future of the business and its industry and enable management to deal with risk events more quickly (Steinberg, 2011). The KRIs can be part of the strategic plan of a corporation and help to create a more precise SWOT analysis by using real ratios instead of mere market assumptions.

Beasley, Hancock and Branson (2009) say that "Risk management and strategy-setting activities are often viewed as separate and distinct, with risk management sometimes stigmatized as being a non-value adding, compliance, or regulatory function with no visible or clearly articulated connection to the organization's strategy" (p. 13). Corporations should review this outdated concept and start using the power of risk management as an essential element of their strategy. COSO presents its own definition of Enterprise Risk Management (ERM) and summarizes the elements important to a successful implementation. The organization defines ERM in Beasley, Hancock and Branson's article (2009) as "A process, effected by the entity's board of directors, management, and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within the risk appetite, to provide reasonable assurance regarding the achievement of objectives" (p. 4). COSO is a great source of knowledge and experience for companies of all sizes.
A financial crisis, a simple change in the market, the complexity of business transactions, advances in technology, globalization, and the speed of product cycles can be fatal for any business and, in order to avoid that, managers, executives, and boards should strengthen risk management in their organizations.

References

COSO, Committee of Sponsoring Organizations of the Treadway Commission (2013). About Us. Retrieved from http://www.coso.org/

Beasley, Mark S., Hancock, Bonnie V., and Branson, Bruce C. (2009). Strengthening Enterprise Risk Management for Strategic Advantage. Committee of Sponsoring Organizations of the Treadway Commission (COSO).

Steinberg, Richard M. (2011). Using the New COSO Risk-Management Guidance. ERM & Internal Controls. Haymarket Media, Inc.

Thursday, August 29, 2019

Lesson Learned Coursework Example | Topics and Well Written Essays - 1500 words

Lesson Learned - Coursework Example

The company also focused on expanding its market share and increasing net revenue, shareholders' earnings per share, return on equity and stock price. Furthermore, we put our effort forth in ensuring that our credit rating was maintained at "A," a rating above the expectations of investors, and that our product quality rating reached 3.5 stars. The global best strategy, also referred to as the "more value for money" approach, was used by Alpha DigiCam in its search for competitive advantage. This would see its products have attributes appealing to the customer while at the same time retaining affordable pricing.

PRODUCT DESIGN

The managers had an eight-year plan to achieve the 3.5-star rating on image quality. As such, the managers embarked on improving the quality and attributes of products for both the multi-featured and entry-level camera models every year. By the end of the eight years, our image rating had surpassed the expectations of the investors. In the ninth and tenth years, our image rating incrementally improved to reach the 3.5 rating. A table (not reproduced here) showed our rating with regard to overall investor expectations (I.E.), best-in-industry (B-I-I), and a combination of I.E. and B-I-I scores against those of our rival companies.

MARKETS AND DISTRIBUTIONS

Varied direct and indirect channels of distribution have been used by Alpha DigiCam, including local camera shops, online retailers and multi-store chains. The markets covered included Latin America, Asia-Pacific, Europe-Africa and North America. The simulation reveals that we achieved competitive advantage in North America over the eight years, specifically with regard to the entry-level cameras distributed through local camera shops, online retailers and multi-store chains, and additionally due to the multi-featured warranty period, the budgeting for advertising and the multi-featured P/Q rating. However, in the ninth year, we lost 2.4% of the market share in the region for entry-level camera models but managed to maintain an industry average with the multi-featured models. In the Europe-Africa market, we achieved an 18.1% market share within the eight years for entry-level camera models. However, this was not sustained through to the tenth year. On the other hand, the Latin America and Asia-Pacific regions frequently reported market share losses with regard to both the multi-featured and entry-level camera models. The managers observed autonomous action in each region, with each of them adjusting prices aggressively according to the specific region. Furthermore, managers increased warranty periods and promotions to enable them to gain market share in their respective regions.

COMPETITION

The camera products market in the regions where Alpha DigiCam operates is extremely competitive. Alpha DigiCam faces stiff competition from rivals in this market who have vast experience in the industry. Thus, the company resorted to competing on the pricing of products. The participant's guide clearly states that the competitiveness of the company largely depends on the prices at which it sells the cameras to its retail dealers.
Additionally, a myriad of other factors play a major role in determining the competitiveness of this company: the number and duration of quarterly promotions, advertising expenditure, the amount of price discounts given to retailers during promotions, the

Wednesday, August 28, 2019

Pick any of Emerging markets - India or China and write a research Paper

Pick any of Emerging Markets - India or China and write a Research Paper regarding them - Research Paper Example

Clothing and footwear also accounts for 10.5% of total retail sales. The "entertainment, books and sports goods equipment" segment registered a CAGR of 21.3% between the financial years 2007 and 2012. According to Research and Markets, the retail industry in India grew at a CAGR of 14.6% over the financial years 2007 to 2012. The growth is influenced by the growing economy and changes in the consumption pattern of the populace, which are driven by a higher standard of living, a greater proportion of women, growth in the middle-class population and the increased penetration of the organized retail segment. Despite the prevailing growth rate, the retail segment of India remains fragmented, with the organized sector still accounting for a minuscule percentage of the total market size of the Indian retail market. The organized retail segment, however, accounted for a CAGR of 26.4% of total retail. With the global and Indian economies reviving post-recession, the organized retail segment witnessed a gradual increase in footfall during FY2011 (Research and Markets, 2012). ...

The Indian retail segment is pegged at US$500 billion and is expected to attain US$1.3 trillion by 2020, and the organized retail sector is expected to reach about 25% by 2012. The Indian e-retailing market also has high potential for future growth, with estimates of US$1.26 billion by 2015; currently the e-retail segment accounts for US$361.66 million (IBEF, 2012). In 2011, the Central Government of India announced reforms with respect to retail for both single-brand and multi-brand stores. The market reforms have paved the way for competition and retail innovation by multi-brand retailers such as Carrefour, Tesco and Wal-Mart and also by single brands like Nike, Apple and IKEA. India approved reforms in January 2012 providing the opportunity to innovate in the retail market with 100% ownership, but for single brands it imposed a requirement that 30% of goods be sourced from India (Gupta, 2012). The Indian retailing structure can be divided into two groups, the organized and the unorganized retail sector. The unorganized retail sector comprises vendors, handcarts, kiranas and others. This sector contributes 98% of the total retail value. But with FDI in the picture, employment is expected to shrink in the unorganized retail sector and later expand in the organized sector. The organized sector is run by licensed retailers who are registered for income tax and sales tax. Another form of retailing is in-store retailing, also known as the brick-and-mortar format, designed in order to lure customers. It includes different kinds of stores such as branded stores which appear in the form of executive showrooms, multi-brand specialty stores, department stores, convenience stores, supermarkets and shopping malls (Economy Watch, 2010). Liberalization and

Tuesday, August 27, 2019

Discuss the benefits and environmental implications of applying Essay

Discuss the benefits and environmental implications of applying composts and other organic amendments to agricultural land - Essay Example

These include sewage sludges, municipal solid wastes, urban yard refuse, food industry residues, wood processing wastes, and agricultural crop residues; these are produced in considerable quantities by the human community, particularly in urban, highly populated areas, state Senesi et al. (1996). Besides their application to agricultural land after appropriate treatment, other alternatives for their disposal are incineration, landfilling, and discharge to water bodies. However, the most environmentally safe and economically satisfactory solution is the application of composts and other organic amendments to agricultural land. "This choice also provides advantages which may result in soil fertility and agricultural production benefits" (Senesi et al., 1996, p. 533). Organic wastes and residues of any nature require appropriate treatment before soil application. ... The economic benefits to agriculture, the measures to prevent adverse environmental outcomes, alternative options, and whether the benefits outweigh the negative effects will be examined.

BENEFITS OF APPLYING COMPOSTS AND OTHER ORGANIC AMENDMENTS TO AGRICULTURAL LANDS

The application of compost benefits the biological, chemical and physical properties of soil. Biologically, compost promotes the development of fauna and microflora, reduces plants' susceptibility to attack by parasites, and supports faster root development of plants. Chemically, compost has beneficial outcomes on soil in several ways. It "increases nutrient content, turns mineral substances in soil into forms available to plants, and regulates the addition of minerals to soil, particularly nitrogenous compounds" (EPA, 1994, p. 87). Additionally, compost serves as a buffer in making minerals available to plants, and provides a source of micronutrients. Moreover, compost improves numerous physical characteristics of the soil, including the soil's "texture, water retention capacity, infiltration, resistance to wind and water erosion, aeration capacity, and structural and temperature stability" (EPA, 1994, p. 87).

In the Tigray Region of Ethiopia, the Bureau of Agriculture and Rural Development has, since 1998, undertaken the production of compost as part of its extension package. By 2007, at least 25% of farmers were making and using compost. The success of this approach is emphasized by the doubling of grain yield between 2003 and 2006, from 714 to 1,354 thousand tonnes. At the same time, since 1998, there has also been a steady decrease in the use of chemical fertiliser, from 13.7 to 8.2 thousand tonnes (Asmelash, Araya, Egziabher et al., 2007, p. 19). Other regions of Ethiopia are also promoting

Monday, August 26, 2019

Thinkpiece Essay Example | Topics and Well Written Essays - 500 words

Thinkpiece - Essay Example

It does not matter exactly who plans the PR, as long as the person who will do the PR plan knows what he or she is doing and that the plan is responsive to the customer's needs. The person who plans the PR could be the PR manager or the marketing manager. He or she probably has the best qualifications for the job because PR, after all, is either a function of marketing or of corporate communication. Or, if such a department (a PR department) does not exist, the company can appoint someone who is going to do the job. It is important to note that just because a PR department does not exist, it does not follow that a PR person to address the customer can be dispensed with. It is important to respond to and address customers' concerns once they talk back. It is important because customers are the lifeblood of the company; without them, the company will not exist.

Of course one would become curious about the person who should be appointed if a PR department does not exist. It is important to underscore here that customers can talk, and when they talk back, it means they have something important to say. The PR function is not just a communication function, but also a customer function. It follows, then, that the person who would be appointed to plan PR when customers can talk back should have a customer service background in addition to being a great communicator in both written and oral language. If possible, the person should be a customer service manager or supervisor with years of experience, so that the appointed person has both the training and the experience to adequately plan the PR for the customer. We have to add a note of caution here: if an unqualified and untrained person is appointed to plan the PR when customers talk back, just for the sake of having a PR person, it would be disastrous for the company. It is disastrous for the company because instead of responding adequately to the customer so that they will be satisfied,

Sunday, August 25, 2019

V for vendetta Movie Review Example | Topics and Well Written Essays - 1000 words

V for vendetta - Movie Review Example

Throughout the movie, V hides behind a mask and carries out his terroristic activities through blowing up buildings, murder and subterfuge. He rescues Evey from the hands of corrupt policemen who try to rape her, and this is a sign of how rotten the government is (Melnick, 6). This paper analyzes the film, focusing on the stage of insurgency, the type of insurgency in the movie, the reason behind the employment of guerrilla warfare, the insurgent strategies and tactics used, and the counter-insurgency strategy the government or the occupiers used.

The State of Insurgency in the Film

The reason the people in the movie form guerrilla insurgent attacks is that they live under an oppressive government which has driven them to dire despair. The film shows several scenes of people in this state; for instance, some are in their front rooms or in the pub helplessly watching propaganda on television. The British society in the movie is controlled by the government to the extent that simple things such as butter and works of art such as paintings and music, which should bring pleasure to the people, have been outlawed. This is meant to create a safe and peaceful society with absolutely no chaos. The government has absolute control of the media and thus defines what news is released to the public (Melnick, 10). The people then launch several guerrilla attacks after being instigated by V, who holds a grudge against the government that tortured him with fire and who also wishes to bring the dictatorial power to an end. V, through his strong rhetoric, raises rebellion among the British citizens. He says that the government should fear the people and not the other way around. He urges the people to remove the tyrannical government in power, since they are responsible for that government being there in the first place. The attacks are therefore not directed at a presidential regime but at the whole system of governance.

Stages in the Insurgency

The insurgency in the film can be seen to be in two stages. Initially, the insurgency is in the mobile stage. This is where V carries out several independent attacks on the government without using the existing government structures. He uses this as a way of causing a revolution in the government. V walks like a shadow causing mayhem. He wears the mask of Guy Fawkes, a well-known conspirator in the 1605 Gunpowder Plot, which was intended to bring about a revolution in the government of that time. The attack happened on November the fifth, which is remembered by many and symbolizes the 9/11 attack. He attacks the Old Bailey on the same date as the government commemorates the 1605 attack (Melnick, 7). Later, he mobilizes the other citizens against the government by showing them that it is their right to define the kind of government that rules them. The public is again involved in a series of guerrilla attacks using the Guy Fawkes masks, and this makes it hard for the police to track V. This stage is called the guerrilla attacks stage. It can thus be concluded that the insurgency in the movie moved from the mobile war stage to the guerrilla war stage.

Type of Insurgency

The type of insurgency in this movie is liberation insurgency. This is because the people seek to be liberated from a suppressive form of government, not necessarily the

Literature Survey for - What are the benefits and costs of worker Essay

Literature Survey for - What are the benefits and costs of worker training, and who should pay for training - Essay Example

Moreover, Acemoglu and Pischke argue that worker training is important especially with the ever-changing technology in organizations (1999, p. 2). In a bid to increase productivity and retain relevance in competitive industries, organizations have to constantly change and adapt to new technology. As a result, workers need to attend training on how to use new technology effectively. Studies show that highly skilled workers easily and effectively adapt to new tasks and technology compared to low-skilled workers. Furthermore, highly skilled workers were found to be more innovative, hence yielding better performance (Blundell, Dearden, Meghir, and Sianesi, 1999, p. 14). This implies that worker training ensures that the entire workforce is always conversant with the organization's equipment and technology, thereby maintaining high productivity.

In addition, the basic education attained in institutions of higher learning and from other basic education providers is arguably not enough to produce optimum results. As a result, there is need for continued worker training to impart the essential knowledge required for maximum productivity. According to Pfeffer and Fong, a consultancy firm can deliver a two-year college learning experience in three weeks (qtd. in Xie and Steiner 2013, p. 3). This implies that worker training can be considered more effective than the basic education learnt in schools, since people already have first-hand experience in the course of work. However, this is not to say that basic education is irrelevant, but to lay emphasis on the need to promote and incorporate worker training in organizations. This is further emphasized by Acemoglu and Pischke, who argue that most lines of business require a set of skills that cannot be imparted by basic education (1999, p. 2). This implies that for maximum efficiency, worker education should be incorporated

Saturday, August 24, 2019

The Joe Salatino President of Great Northern American Case Study Research Paper

The Joe Salatino President of Great Northern American Case Study - Research Paper Example

Perception and attribution are of great significance in comprehending and administering organizational behavior, since all decisions and behavior at Great Northern American tend to be influenced by the way in which its members interpret and make meaning of them. When an individual observes people, he or she tries to identify the reasons behind those persons behaving in certain ways. The basis of attribution theory is that an individual wants to understand the reasons behind the actions that he or she and others take. An individual further wants to attribute reasons to the behaviors he or she observes instead of assuming that these behaviors are random. It is quite significant to mention that attributions are crucial for Great Northern American, since the perceived reasons for behavior may impact managers' and employees' judgments and actions. Joe Salatino perceives that, in order to keep the sales force motivated and dedicated, it is quite significant to spend money on commissions and bonuses.

It is worth mentioning that there are two forms of attribution: internal attribution and external attribution. An internal attribution is likely to occur when an individual believes that a particular behavior was chosen freely, was deliberate in nature and was quite low in terms of social desirability. When salespersons at Great Northern American are not capable of handling the self-starting selling intensity, they may leave the company because of their dissatisfaction; such behavior is generally given an internal attribution. On the other hand, an external attribution occurs when it is believed that the behavior was not freely selected and was unintended. For instance, if, despite Joe Salatino offering his employees bonuses and commissions for a job well done, the employees demonstrate a tendency to leave the organization, then this can be identified as an external attribution (Martinko, 2004).

It is found that Joe Salatino felt that his employees were not very effective. Some employees were functioning very effectively and were earning more than the salary of the top producers because they were capable of generating huge sales for the company. On the other hand, most of the employees were leaving the company since they were not capable of handling the 'self-starting selling intensity' and disorder. In addition to this, Joe Salatino noted that it took an employee a year to develop a good account base. When Joe Salatino was confronted with these issues, he tried to understand what caused those events. When Joe Salatino experienced unpleasant outcomes, attributions could assist him in identifying and avoiding the behaviors as well as other factors that led them to happen. On the contrary, when Joe Salatino experienced pleasant outcomes, he would prefer to understand the behavior that led to such actions of the customers as well as the employees. It can be noted that such attribution and perception theory can assist Joe Salatino in identifying what actions the company must take in order to bring significant improvements in the behaviors of the employees whose performances have been below a satisfactory level in comparison to those employees whose performances are satisfactory. Therefore, it is the proper understanding of the behavior of

Friday, August 23, 2019

Ukraines Transition from Socialism to Capitalism Essay

Ukraines Transition from Socialism to Capitalism - Essay Example

Following the revolt, the rulers of Ukraine turned to Russia for protection and hence laid down the path towards Russian imperialism. This colonialism created a new distinction within the workforce in Ukraine. A large-scale labor migration from Russia occurred, with the migrants acquiring the high-skill and better-paid job opportunities while domestic workers suffered from low wages and bad working conditions. The protests against such an unjust and exploitative attitude led to the upheavals of the 1917-1920 and 1942-1947 revolutions. The struggle weakened because of the withdrawal of the Bolshevik members of Ukraine.

In October 1917, the revolutions of Russia and Ukraine fused, but the leaders in the parliament, the Rada, who were against the notion of a Russian workers' republic, decelerated the progress towards a Ukrainian socialist uprising. The Rada had diverged so much from the objectives of the Ukrainian masses that by the time of its deposition in 1918 (by the Red Army) it had already lost its ground of support. In this so-called defense of sovereignty, what took place was that these Rada leaders gave Ukraine away to German, Austrian, and Polish occupations. The year 1920 saw another upsurge against Russian colonialism by the Ukrainian Communist protesters. However, with the strengthening of the powers of Stalin and Russia, the dynamics of centralism shattered the remaining hopes of national equality. In the 1930s, a mixture of rapid industrialization and enforced collectivization sowed the seeds of mass aggression. Millions of people died in the man-made famine of 1932-33 and a considerable number were deported to Siberia. Those who sought to commemorate, analyze, or dissent from these tragic events were either imprisoned or tortured.

Thursday, August 22, 2019

Cognitive Effects of Early Bilingualism Essay Example for Free

Cognitive Effects of Early Bilingualism Essay

The American educational system has fallen behind those of other leading nations in the world in many respects, one of which is bilingual instruction. This has traditionally been overlooked in the United States until the high school level. Children in today's society should be made more prepared for the growing globalism and technological advances throughout the world instead of losing educational opportunities due to economic downturn and lack of resources. This includes second language acquisition introduced earlier in the program. On top of political reasons, the positive effects on the cognitive development of the brain when it is introduced to a second language are many. The age of acquisition is crucial due to the plasticity of the brain, which, according to the critical period hypothesis, begins to plateau after five years of age. The current policy in early education greatly limits the amount of extracurricular lessons provided, in accordance with government policies such as No Child Left Behind, which restricts school funding based on standardized testing only in certain subject areas. School programs, realistically beginning in elementary education, should include foreign language study due to the strong evidence that bilingualism in children can develop higher cognitive abilities, which can be enhanced with proficiency and positively influence skills in other areas.

Old arguments suggest that "children who are instructed bilingually from an early age will suffer cognitive or intellectual retardation in comparison with their monolingually instructed counterparts" (Diaz 24). Much of the research from the past supporting this argument focused on older bilinguals, mostly adults who may have shown competent abilities in a second language but who had much later ages of acquisition and who usually acquired the second language outside of the home. Many early studies in this field worked with children of immigrants who showed lower abilities in cognitive tasks, most likely because of a lack of proficiency in the second language (L2) and a lack of proper schooling in relation to this deficiency (Kovács 307). In correlation with poorly chosen test subjects, the studies were typically done with orthographic representations of words that would have been more difficult for younger test subjects to work with. For example, a study done by Ton Dijkstra, Professor of Psycholinguistics and Multilingualism at the Donders Institute, focused only on adult English/Dutch bilinguals, the youngest being fifteen years old, all of whom studied their L2 at the middle or high school level. This study included only written examples of words and had the subjects determine whether each word was English or Dutch. The results were able to somewhat prove Dijkstra's theory of Bilingual Interactive Activation (BIA), which underlines the effects orthography has on L1 and L2 word retrieval, "assuming, of course, that the same orthography is used in the input" (Dijkstra 217). If this study were done on younger children, it is certain they would not have performed as well, since children are typically less familiar with the written language than with the spoken. Older language learners make more use of a written approach to learning, such as a textbook, while younger learners typically learn more from a speech-based approach, such as conversation in the home.
The textbook approach involves symbolic processing, which differs from the more embedded cognitive retrieval of the speech-based learning approach utilized by younger children to understand the two languages. There have been many studies over the past few years that have proven the opposite of these older arguments. Many of the studies have tested the cognitive abilities of young children, usually aged six and under in accordance with the critical period hypothesis, with both monolingual and bilingual proficiency. These experiments are concerned with cognitive tasks, including false-belief tasks and grammar testing to determine the ability to hold abstract thought in the L2, as well as phonemic testing in order to find out whether there is an ability to distinguish between the phonemes of the different languages. The majority of these studies have tested subjects using visual representations and vocal experiments with proctors who have experience working with children and are trained in both languages being tested. The more useful subjects are usually taught implicitly, or passively in the home, although some studies make use of explicitly taught subjects, meaning they learned actively in a class setting.

It has been proven that an infant of four months has the incredible linguistic discrimination ability to distinguish languages with different prosody and phonemes (Kovács 303). An infant is then better equipped to attain more native-like proficiency later in life when exposed this early to the sounds and rhythm of the L2. Doctor in Communication Sciences Karsten Steinhauer explains that "late L2 learners stabilize at some point short of native-like attainment [which] most recently has been discussed in terms of phonological/prosodic interference from L1" (Steinhauer 15). When a young child is introduced to two separate languages, the mechanisms of attention, selection, and inhibition become more fine-tuned due to the experience of attending to one language and ignoring the other (Kovács 303, 308). The training in encoding and in the association of two corresponding words with a common concept underlines the superior representational abilities a bilingual retains, especially when the L2 is entrenched in the brain the way early acquisition allows. Linguist Ágnes Melinda Kovács presents research proving that monolinguals typically attain these abilities at the age of four years while young bilinguals gain these skills much earlier (Kovács 316). The brain's plasticity allows the young child to hold and use the two languages without interference, and with continued usage the child will be more likely to attain full native-like proficiency in both languages. Kovács also explains that since the brain remains active during demanding tasks, it may take on the extra load of two languages as a constructive challenge. The young, malleable brain may possibly "greatly adapt to [the challenge], for example, by changing its morphology" (Kovács 308). A type of adaptation has been proven in studies done by neuroscientist Andrea Mechelli, which were concerned with the grey matter surrounding the left inferior parietal cortex, the general area associated with language use containing Broca's area. These studies confirmed that the grey matter in this area is denser in early-acquired bilinguals. The density increases in correlation with proficiency in the L2, with monolinguals having the least dense matter (Mechelli 757).
This may be the case because a later-acquired L2 is held at a more surface level of the brain and requires the use of declarative memory instead of procedural memory. Many tests have been done to determine the amount of brain activity associated with language in the left inferior parietal cortex through the use of event-related brain potentials, or ERPs. Dr. Steinhauer describes ERPs as "reflecting the real-time electrophysiological brain dynamics of cognitive processes with an excellent time resolution in the range of milliseconds," and notes that ERPs "have been hypothesized to be linked to rule-based automatic parsing" (Steinhauer 16). Measurements of ERPs are taken while subjects perform syntactically poignant tasks. Since it is thought that syntactic processes are generally automatic or a part of "implicit grammar processing" (Steinhauer 17), the ERP components would be more difficult to elicit in later-acquired bilinguals. Steinhauer et al. performed several studies in this area, working with several real languages and one artificial language labeled BROCANTO 2. In each case, the subjects were given grammaticality judgment tasks in the given language, involving, for example, subject-verb agreement violations and lexical anomalies. For each group, the early-acquired or implicitly taught subjects produced the same type of ERP responses as native speakers. Late-acquired or explicitly instructed subjects showed more shallow responses, if any at all, in this area. These findings show that "syntactic processes appear to be sensitive to delays in L2 acquisition" (Steinhauer 19).

One of the most prominent issues in L2 proficiency is attaining the phonemic boundary between the two languages. Monolinguals are usually unable to distinguish the sounds of a language other than their own. The more proficient a bilingual is in their L2, the more able they are to perceive the two sets of phonemes and to determine which is correct in a given phonological circumstance. The phonemic boundary is the least likely area to be fossilized in a late-acquired bilingual. There have been several studies which have proven this, including a 2008 study done by Adrian Garcia-Sierra, professor of Communications at the University of Texas. In this study, the voice onset time, or VOT, of thirty college students was tested. Half of the students were English monolinguals while the other half were English/Spanish bilinguals who described themselves as fluent speakers of both languages and who learned their L2 at home. This study was done in Austin, Texas, where some Spanish is integrated into the daily culture. The results showed that the more fluent bilinguals were more apt to show "a perceptual shift ... associated with high level of confidence in English and Spanish ... [and] that highly confident L2 bilinguals are more likely to possess a double phonemic boundary" (Garcia-Sierra 378). This shows that more proficient bilinguals have a stronger ability to discriminate different phonemes, which also underlines the effects bilingualism has on advanced discrimination and attention skills.

Another recent study performed on early bilinguals was done by a group of psychologists headed by Michael Siegal. The experiments tested the pragmatic skills of 41 children in northeastern Italy. All were between the ages of three and six years old, with 19 Italian monolinguals and 22 Italian/Slovenian bilinguals who attended the same preschool taught only in Italian. The children were tested on the Gricean maxims of conversational understanding.
These are four basic rules which provide a foundation for pragmatic competence, including quality, quantity, relevance, and politeness. The groups of children were shown cartoons with characters having conversations that contained one response created in order to break one of the maxims. The children were then asked which of the characters said something strange or rude and to provide a more appropriate response when the statement was positively identified. The main thesis of this study was that bilingualism requires "the capacity for flexibility in the representation of language and objects [which] suggests that early bilingualism should be accompanied by advanced meta-pragmatic skills" (Siegal 115). This theory was upheld by the results of these tests, in which the bilingual children outperformed the monolinguals by much more than a chance margin, especially on the maxims of politeness and quality, even though many bilinguals had a delayed vocabulary in their L2. The psychologists behind this study suggest that bilingualism can be "accompanied by an enhanced ability to appreciate effective communicative responses" (Siegal 115). The results of this research seem to highlight the idea that the acquisition of a second language allows a child to remove themselves from the comfortable context of their native language and to realize that it is necessary to provide useful information and use polite tones for a more successful exchange in both languages.

Recently, studies have been performed concerning the effects and importance of early-acquired bilingualism in patients with neuropsychiatric disorders such as Parkinson's and Alzheimer's diseases. Research in this area shows that it is less likely for a bilingual individual to be affected by these types of diseases. The majority of the hypotheses behind this statistic pertain to the activity in the brain that is needed to think and speak bilingually. This constant activity exercises the brain in a way that counteracts the deterioration involved in these disorders (Paradis 216). The research on Parkinson's disease explains that procedural memory is affected greatly, sometimes causing a loss of the L1. This is partnered with a tendency to "produce a smaller portion of grammatical sentences ... and exhibit deficits in comprehension of complex syntactic forms" (Paradis 217). This is likely linked to the deterioration of the left inferior parietal cortex, the same area in the brain discussed earlier, which is associated with syntactic processes and with holding the L1. On the other hand, bilingual patients with Alzheimer's show a loss in their L2 as well as in semantic abilities, and a gradual loss of pragmatic, phonological, and syntactic structures. More common in this type of dementia is a puzzlingly inappropriate mixture of the two languages (Paradis 222). This is due to the breakdown of the declarative memory caused by the dementia. The declarative memory is involved with metacognition, which is why its breakdown affects such things as the less familiar language, pragmatic skills, and the selective attention abilities of bilinguals. The major finding in these studies is that "the differences observed in psychotic conditions as well as in dementias are caused by the increased reliance on declarative-memory-based (and hence consciously controlled) explicit metalinguistic knowledge" (Paradis 222). The advances made in early bilingual research have been great over the past few decades.
Through these studies and many more, it has been made clear that bilinguals with early ages of acquisition not only achieve more native-like proficiency but also tend to have more advanced cognitive abilities than their monolingual peers. These include, but are not limited to, increased analytical, representational, selective, and control abilities. Bilingualism also implies more developed metalinguistic awareness and mental flexibility. Early bilinguals have also shown greater abilities in pragmatics and phonemic discrimination. In opposition to old arguments, Kovács writes, "The bilingual condition could be stimulating for the highly plastic developing mind of the child, and induces specific changes in the brain and cognitive systems" (Kovács 317). The higher development has been seen in ERP testing and in the density of grey matter involved in the linguistically apt area of the brain. Educators and policy makers should consider this information when planning early education programs. Those enriched with the benefits of a bilingual education are not only better off cognitively but, in the modern world, would be more prepared for the global society and workplace.

Works Cited

Diaz, R. "Thought and Two Languages: The Impact of Bilingualism on Cognitive Development." Review of Research in Education 10 (1983): 23-54.

Dijkstra, Ton. "Task and Context Effects in Bilingual Lexical Processing." Cognitive Aspects of Bilingualism (2007): 213-235.

Garcia-Sierra, Adrian, Randy L. Diehl, and Craig Champlin. "Testing the Double Phonemic Boundary in Bilinguals." Speech Communication 51 (2009): 369-378.

Kovacs, Agnes Melinda. "Beyond Language: Childhood Bilingualism Enhances High-Level Cognitive Functions." Cognitive Aspects of Bilingualism (2007): 301-323.

Mechelli, A., J. T. Crinion, U. Noppeney, J. O'Doherty, J. Ashburner, R. S. Frackowiak, and C. J. Price. "Structural Plasticity in the Bilingual Brain." Nature 431 (2004): 754.

Siegal, Michael, Laura Iozzi, and Luca Surian. "Bilingualism and Conversational Understanding in Young Children." Cognition 110 (2009): 115-122.

Wednesday, August 21, 2019

The Role Of Proprioceptive Neuromuscular Facilitation Stroke

INTRODUCTION

Stroke is characterized by rapidly developing clinical signs of focal disturbance of cerebral function, lasting more than 24 hours or leading to death, with no apparent cause other than that of vascular origin (Aho and Harmsen 1980). Stroke is a disease of developed nations and is the third leading cause of death and long-term disability all over the world, with an incidence of 10 million per year (Sudlow and Warlow 1996). Stroke occurs at any age but is more common in the elderly, between 55 and 85 years of age (Boudewijn Kollen and Gert Kwakkel 2006). Stroke is classified into two types based on pathology and cause. Ischemic stroke occurs when the blood supply to part of the brain is decreased, leading to dysfunction of the brain tissue in that area; the ischemia results from thrombosis, embolism, systemic hypoperfusion or venous thrombosis. Hemorrhagic stroke occurs when there is accumulation of blood anywhere within the skull vault; these hemorrhages result from microaneurysm, arteriovenous malformation or inflammatory vasculitis (Capildeo and Habermann 1977).

Normal cerebral blood flow is approximately 50 to 60 ml/100 g/min and varies in different parts of the brain. When there is ischemia, the cerebral auto-regulatory mechanism compensates for the reduction in cerebral blood flow by local vasodilatation and by increasing the extraction of oxygen and glucose from the blood. When the cerebral blood flow is reduced to below 20 ml/100 g/min, electrical silence occurs and synaptic activity is greatly diminished in an attempt to preserve energy stores. Cerebral blood flow of less than 10 ml/100 g/min results in irreversible neuronal injury. These neuronal injuries occur when microscopic thrombi form; these thrombi are triggered by ischemia-induced activation of destructive vasoactive enzymes released by the endothelium, platelets and neuronal cells. This results in the development of hypoxic-ischemic neuronal injury, which is primarily induced by overreaction of some neurotransmitters such as glutamate and aspartate. Within an hour of the hypoxic-ischemic insult there is an ischemic penumbra, where auto-regulation is ineffective. This stage of ischemia is called the window of opportunity, where the neurological deficit created by ischemia can be partly or completely reversed. After this stage is a stage of neuronal death, in which the deficit is irreversible (Heros 1994).

Functional restrictions resulting from stroke include paralysis of upper and lower limb function, cognitive deficits, visual disturbances, disturbance of gait and mobility, muscle spasticity, loss of co-ordination and speech problems. The loss of upper extremity control is common after stroke, with 88% of survivors having some level of upper extremity dysfunction. Basic Activities of Daily Living (ADL) skills are compromised in acute stroke, with 67% to 88% of patients demonstrating partial or complete dependence (Amit Kumar Mandall 2009). Muscle weakness, or the inability to generate normal levels of force, has clinically been recognized as one of the limiting factors in the motor rehabilitation of patients with stroke. Following stroke, some patients lose independent control over select muscle groups, resulting in coupled joint movements that are often inappropriate for the desired task. These coupled movements are known as synergies. For the upper limb, the flexor synergy consists of shoulder flexion, adduction and internal rotation with elbow, wrist and finger flexion, and the extensor synergy consists of shoulder, elbow, wrist and finger extension.

The rehabilitation of the upper extremity is quite challenging. Many therapeutic approaches are currently available for the rehabilitation of upper extremity function. The most commonly used treatment approaches are Rood's approach, the sensorimotor approach, PNF, Brunnstrom's movement therapy, Bobath's technique and neurodevelopmental therapy. Among these, Proprioceptive Neuromuscular Facilitation (PNF) is widely used in the rehabilitation of upper extremity function in stroke patients (Amit Kumar Mandall 2009).

PNF is a therapeutic intervention used in rehabilitation which was originally developed to facilitate performance in patients with movement deficits. PNF exercises are based on the stretch reflex, which is caused by stimulation of the Golgi tendon organs and muscle spindles. This stimulation results in impulses being sent to the brain, which leads to the contraction and relaxation of muscles. When a body part is injured, there is a delay in the stimulation of the muscle spindles and Golgi tendon organs, resulting in weakness of the muscle. PNF exercises help to re-educate the motor units which are lost due to the injury. A variety of methods fall under the rubric of PNF, including the exploitation of postural reflexes, the use of gravity to facilitate movement in weak muscles, the use of eccentric contractions to facilitate agonist muscle activity, hold-relax, contract-relax, rhythmic stabilization, rhythmic initiation and the use of diagonal movement patterns to facilitate the activation of bi-articular muscles (Etnyre and Abraham, 1987; Hardy and Jones, 1986; Osternig, Robertson, Troxel and Hansen, 1987).

Tomasz Wolny, Edward Saulicz and Rafał Gnat in 2009 conducted a randomized controlled study on the efficacy of proprioceptive neuromuscular facilitation in rehabilitation for activities of daily living in late post-stroke patients. In this study, sixty-four stroke patients were recruited from a neurological rehabilitation centre based on defined inclusion criteria: patients with loss of sphincter control, mobility, locomotion and communication, and patients graded 5 or 6 on the Repty Functional Index scale, were included. After recruitment, all 64 patients were randomly divided into two groups, group A (control group) and group B (experimental group). Group A received conventional treatment such as strengthening and gait training, while group B received PNF-based exercise. Pre- and post-treatment assessments of the functional status of the stroke patients were done using the Repty Functional Index scale. The treatment was continued for 21 days for both groups in the neurological rehabilitation centre. The data were analyzed using the chi-square test, which was used to study associations between the treatments and changes in the criterion measurements, and ANOVA, which was used to compare the average changes between the two groups. The results of this study showed that PNF-based rehabilitation of late post-stroke patients significantly improved ADL performance and locomotion when compared with the control group treated with conventional therapy.
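The group comparisons in this and the later trials in this review rest on a chi-square test for categorical outcomes and an ANOVA (or its rank-based Kruskal-Wallis form) for average change scores. As a minimal, hypothetical sketch of how such an analysis could be run, the Python example below uses SciPy; the group labels, counts and change scores are invented for illustration and are not data from the studies.

```python
# Minimal sketch (hypothetical data) of the group comparisons reported in the
# PNF trials: chi-square for categorical outcomes, ANOVA / Kruskal-Wallis for
# average change scores.  All numbers below are invented for illustration only.
import numpy as np
from scipy import stats

# Chi-square: association between treatment group and "improved / not improved".
#                        improved  not improved
contingency = np.array([[22,       10],    # PNF group (hypothetical counts)
                        [14,       18]])   # conventional-therapy group
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi:.3f}")

# Change scores on a functional index (hypothetical) for three treatment arms.
conventional = np.array([4, 6, 5, 7, 3, 5, 6, 4])
pnf          = np.array([7, 9, 6, 8, 10, 7, 8, 9])
bobath       = np.array([6, 8, 7, 9, 7, 6, 8, 7])

# Parametric one-way ANOVA on the average changes.
f_stat, p_anova = stats.f_oneway(conventional, pnf, bobath)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Non-parametric Kruskal-Wallis test, the form used by Dickstein et al. for
# ordinal outcome scales.
h_stat, p_kw = stats.kruskal(conventional, pnf, bobath)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```

For ordinal rating scales such as the muscle tone and ambulation grades reported later in this review, the rank-based Kruskal-Wallis test is the more defensible choice, which is presumably why Dickstein and colleagues preferred it over a parametric ANOVA.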
These coupled movements are known as synergies and, for the upper limb flexor synergy: shoulde r flexion, adduction, internal rotation, elbow flexion, wrist flexion and finger flexion. Upper limb extensor synergy: shoulder, elbow, wrist and finger extension. The rehabilitation of upper extremity is quite challenging. Many therapeutic approaches are currently available in the rehabilitation of upper extremity function. Most commonly used treatment approaches are ROODs approach, Sensory motor approach, PNF, Brunnstroms movement therapy, Bobaths technique and neuro developmental therapy. In this Proprioceptive Neuromuscular Facilitation (PNF) is widely used in the rehabilitation of upper extremity function in stroke patients. (Amit Kumar Mandall 2009). PNF is a therapeutic intervention used in rehabilitation which was originally developed to facilitate performance in patients with movement deficits. PNF exercises are based on the stretch reflex which is caused by stimulation of the Golgi tendon and muscle spindles. This stimulation results in impulses being sent to the brain, which leads to the contraction and relaxation of muscles. When a body part is injured, there is a delay in the stimulation of the muscle spindles and Golgi tendons resulting in weakness of the muscle. PNF exercises help to re-educate the motor units which are lost due to the injury. A variety of methods fall under the rubric of PNF, including the exploitation of postural reflexes, the use of gravity to facilitate movement in weak muscles, the use of eccentric contractions to facilitate agonist muscle activity, hold relax, contract relax, rhythmic stabilization, rhythmic initiation and the use of diagonal movement patterns to facilitate the activation of bi-art icular muscles (Etnyre Abraham L D, 1987; Hardy Jones, 1986 Osternig, Robertson, Troxel, Hansen, 1987). Tomasz  Wolny, Edward  Saulicz and RafaÅ‚Â  Gnat in 2009 conducted a randomized control study on the efficacy of proprioceptive neuro-muscular facilitation in rehabilitation for activities of daily living in late post-stroke patients. In this study sixty four stroke patients were recruited from the neurological rehabilitation centre Subjects for this study were recruited based on some inclusion criteria. The patients with loss of sphincter control, loss of mobility, locomotion and communication were included in this study and patients with grade 5 or 6 Repty Functional lndex scale were included in this study. After the recruitment of patients, all the 64 patients were randomly divided into two groups, group A (control group) and group B (experimental group). Group A will receive conventional treatment like strengthening, gait training etc. Group B will receive PNF based exercise. A pre and post assessment of the functional status of the stroke patients was done using R epty Functional lndex scale. The treatment will be continued for 21 days for both the groups in the neurological rehabilitation centre. . The data were analyzed using chi-square test. Chi-square was used to study associations between the treatments and changes in the criterion measurements. ANOVA was used to compare the average changes among the two groups. The result of this study showed that PNF-based rehabilitation exercise of late post-stroke patients significantly improved in their ADL functional performance and in locomotion when compared to the control group treated with conventional therapy. Kuniyoshi Shimura.A, Tatsuya Kasai. 
B in 2002 conducted a study on Effects of proprioceptive neuromuscular facilitation on the initiation of voluntary movement and motor evoked potentials in upper limb muscles activity. In this study author investigated the effect of PNF limb positions and neutral limb positions on the initiation of voluntary limb movement and motor evoked potentials in upper limb muscles. In this experimental study the patients were divided into two groups, in experimental group 1 they investigated the effectiveness of PNF by considering the effects of limb position changes on the initiation of voluntary movement in terms of electromyographic reaction times. In experimental group 2 they investigated the effectiveness of no (neutral limb position) movement by considering the effect of limb position changes on the initiation of voluntary movement with electromyographic reaction times. After signing the consent the experiment was conducted on the patients. Two upper ar m positions used in this study, a neutral position (N) and a position facilitating activity of the upper extensor muscles (PNF). The effects of these positions are observed in the EMG. The subject could passively adopt the two upper arm positions using his right (affected) arm by means of especially made arm holders. For each arm position, six blocks of 10 trials were performed. All trials of the first block and the first trial of each of the following blocks were excluded from the analysis to eliminate start-up effects. In addition, a few trials were discarded because of obvious mistakes in the recording. EMGs were recorded simultaneously from three muscles (Brachioradialis, triceps brachii and deltoid) using 3 cm diameter, bipolar, silver surface electrodes connected to an EMG-unit. The result of this study showed that the EMG discharge order differed between the two positions. PNF position improves movement efficiency of the joint by inducing changes in the sequence in which the muscles are activated. Hence PNF has an effective role in the initiation of voluntary movement and motor evoked potential in upper limb muscle activity. Pamela Duncan and Lorie Richards et al., in 1998 conducted a study on the effect of Home-Based Exercise Program for Individuals with Mild and Moderate Stroke. In this randomized controlled pilot study, 20 individuals with mild to moderate stroke who had completed acute rehabilitation program and those who were 30 to 90 days after onset of stroke were randomized to a 12-week (first 8-week will be therapist-supervised program and the next 4-week will be independent program) rehabilitation program. After signing the consent form, patients were selected based on some inclusion criteria like (1) 30 to 90 days after stroke; (2) minimal or moderately impaired sensorimotor function (3) ambulatory with supervision and/or assistive device; (4) living at home; and (5) living within 50 miles of the University. The exclusion criteria for this study are (1) a medical condition that interfered with outcome assessments or limited participation in sub maximal exercise program, (2) a Mini-Mental State score The participants for this study were selected and evaluated by a therapist based on the inclusion and exclusion criteria. If the subjects agreed to participate in this study, then the basic assessment is done after getting the informed consent. The severity of the stroke were assessed using Orpington Prognostic Scale (Sue-Min Lai and Pamela W. 
Pamela Duncan, Lorie Richards et al. in 1998 conducted a study on the effect of a home-based exercise program for individuals with mild and moderate stroke. In this randomized controlled pilot study, 20 individuals with mild to moderate stroke who had completed an acute rehabilitation program and who were 30 to 90 days after stroke onset were randomized to a 12-week rehabilitation program (the first 8 weeks therapist-supervised, the last 4 weeks independent). After signing the consent form, patients were selected on the basis of inclusion criteria: (1) 30 to 90 days after stroke; (2) minimal or moderately impaired sensorimotor function; (3) ambulatory with supervision and/or an assistive device; (4) living at home; and (5) living within 50 miles of the university. The exclusion criteria were (1) a medical condition that interfered with outcome assessments or limited participation in a submaximal exercise program, and (2) a Mini-Mental State score below the study's cutoff. Participants were evaluated by a therapist against these criteria and, if they agreed to participate, the baseline assessment was done after informed consent was obtained. The severity of the stroke was assessed using the Orpington Prognostic Scale (Sue-Min Lai and Pamela W. Duncan, 1998) and the Fugl-Meyer Motor Score (Pamela W. Duncan, 1982), which covers motor function of the arm, upper extremity proprioception, coordination, balance, and 10 cognitive questions. Functional assessments were performed using the Barthel Index of Activities of Daily Living (Fricke and Unsworth, 1997), the Lawton Instrumental Activities of Daily Living scale and the Medical Outcomes Study 36-item health status measure (Colleen and John, 1992). Balance and gait were assessed using the 10-Meter Walk, the 6-Minute Walk (Kosak and Smith, 2005) and the Berg Balance Scale (Berg, Wood-Dauphinee and Williams, 1995). Upper extremity hand function was evaluated with the Jebsen Test of Hand Function, a standardized assessment that measures the time taken to perform hand activities: writing a short sentence, turning over 3 x 5 inch cards, picking up small objects, stacking checkers, simulated eating, moving empty large cans, and moving weighted cans (Jebsen, Taylor and Trieschmann, 1969). After the baseline assessment the subjects were randomly assigned to an experimental group and a control group. In the experimental group the PNF exercises were taught to the patients on day one as a home exercise, and they were asked to continue the same exercises as a home program for eight weeks with three visits to the physical therapy department every week. The exercises included assistive and resistive exercises using Proprioceptive Neuromuscular Facilitation patterns and Theraband exercises for the major muscle groups of the upper and lower extremities. Subjects in the control group received usual care as prescribed by their physicians and were assessed by the research assistant. The demographic data of the two groups were statistically compared using Wilcoxon rank sum tests. The results showed no difference between pre- and post-treatment measures: there was no change in upper extremity function or in functional health status in either the experimental or the control group after the intervention.
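As an illustration of the group comparison just described, here is a minimal sketch of a Wilcoxon rank sum test such as the one Duncan and colleagues used for the demographic data. The ages below are invented for illustration, and scipy stands in for whatever package the authors actually used.

```python
# Hedged sketch: Wilcoxon rank sum comparison of a baseline characteristic
# between the two groups, as in the Duncan et al. study described above.
# The ages are hypothetical placeholders, not the study's data.
from scipy.stats import ranksums

exercise_group_ages = [61, 58, 72, 66, 70, 55, 63, 68, 74, 59]  # hypothetical
usual_care_ages     = [64, 60, 69, 71, 57, 62, 75, 65, 58, 67]  # hypothetical

stat, p = ranksums(exercise_group_ages, usual_care_ages)
print(f"rank sum statistic = {stat:.2f}, p = {p:.3f}")
```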
Ruth Dickstein, Shraga Hochman, Thomas Pillar and Rachel Shaham in 1992 conducted a study on stroke rehabilitation with three exercise therapy approaches. One hundred and ninety-six hemiplegic patients referred to the physical therapy department of a geriatric rehabilitation hospital over a period of 18 months were admitted to the study. All patients had a recent cerebrovascular accident and entered the rehabilitation program after an average stay of 16 days in a general hospital; the sex distribution was equal and the mean age was 70.5 years. Thirteen physiotherapists were enrolled to administer the exercise, and subjects were assigned randomly to each therapist. Data were collected on a form with two parts: the first recorded basic information such as age, gender, side affected and location of the damaged artery, and the second recorded the outcome variables. Each therapist treated their first five patients with the conventional method, the next five with the PNF method and the last five with the Bobath method. All patients were treated five days a week for six weeks, and each treatment session lasted 30 to 45 minutes. Outcomes were measured before treatment and every week thereafter. Functional independence was measured with the Barthel Index. Muscle tone of the involved extremities was checked by passive movement with the patient supine and graded on a five-point ordinal scale: a) flaccid, b) low, c) normal, d) high, and e) spastic. Ambulatory status was assessed and classified on a nominal four-category scale: a) patient does not walk, b) patient walks with an assistive device and a person's help, c) patient walks with an assistive device, and d) patient walks independently. Treatment continued for six weeks in all three groups. The data were analyzed using the chi-square test, which was used to study associations between the treatments and changes in the criterion measurements, and the Kruskal-Wallis one-way analysis of variance (ANOVA) was used to compare the average changes among the three groups. The results showed no significant difference in the improvement of activities of daily living or walking ability, but there was a significant difference in the improvement of muscle tone in the PNF and Bobath groups compared with the conventional treatment group.
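Below is a minimal sketch of the Kruskal-Wallis comparison used in the Dickstein et al. analysis above. The Barthel Index change scores are invented placeholders, and scipy is used only as a convenient stand-in for the original statistical software.

```python
# Hedged sketch: Kruskal-Wallis comparison of average change across the three
# treatment groups (conventional, PNF, Bobath), as in the Dickstein et al.
# analysis above. The change scores are hypothetical placeholders.
from scipy.stats import kruskal

conventional = [10, 15, 20, 5, 25, 10, 15]   # hypothetical Barthel Index changes
pnf          = [15, 20, 10, 25, 20, 15, 30]  # hypothetical
bobath       = [20, 10, 15, 25, 15, 20, 10]  # hypothetical

h_statistic, p = kruskal(conventional, pnf, bobath)
print(f"H = {h_statistic:.2f}, p = {p:.3f}")
```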
CONCLUSION: The poor quality of the trials reviewed severely limits the conclusions that can be drawn. However, it seems that there is currently no evidence that interventions based on Proprioceptive Neuromuscular Facilitation (PNF) are more effective than other approaches. One study by Ruth Dickstein comparing PNF with Bobath concluded that PNF exercise given in conjunction with the Bobath technique is more effective in improving wrist strength and upper limb function than PNF alone, but the outcomes used in these studies are ordinal rating scales, which may not be sensitive enough to differentiate the effects of the two techniques. The number of subjects recruited in these studies is very small, and with so few studies we cannot reach a firm conclusion on the effect of PNF on upper limb function. Stroke patients vary widely in physical, speech, sensory and cognitive impairments, in the severity of those impairments, and in personality and learning style, so we cannot assume that PNF is superior to all other techniques, or that it can be used with every individual with stroke at every stage of recovery. For example, one approach may be effective in the initial stage of stroke but not for chronic stroke patients. Factors such as depression, spatial awareness, cognition, comprehension and sensory loss could also influence the response to a technique. In most of the studies there is no precise clinical description of the problem or of the size and site of the lesion, although lesion characteristics may explain the variability in responsiveness to the intervention. Nor is there an established ideal timing for the intervention, that is, whether the technique should be given in the early or late stage of stroke. In this review of the effect of PNF on upper limb function in stroke, evidence on current practice is lacking, which makes it very difficult to draw a conclusion; the evidence supporting the treatments used in these articles does not meet the standard needed for today's health care practice. Further studies comparing the effect of PNF with other approaches, using sensitive and reliable outcome measures and homogeneous samples, are suggested. It is therefore important that future studies clarify the analyses and interventions used within the PNF technique to enable accurate evaluation of each study. No study in this review assessed efficacy and effectiveness adequately, so further studies should be done to identify an effective and optimal approach to the rehabilitation of upper limb function in stroke patients.

Tuesday, August 20, 2019

Oxygen Production and Recycling in an Artificial Ecosystem

Discovering Terrestrial and Aquatic Life: The Ecosystem Simulation

Purpose/Hypothesis: The purpose of this experiment was to create an artificial ecosystem in order to observe the natural changes in life. The column was put together with a terrestrial and an aquatic section to see how the two interact as one. Plants, insects, and fish were added to the column in order to observe how oxygen is produced, used and recycled. The eco-column experiment was also done to familiarize us with testing water for pH, temperature, and dissolved oxygen levels. Along with familiarizing the participants with the process and meaning of certain environmental tests, the eco-column simulation helped to show how life and nature work and gave insight into how one element affects another in nature. The eco-column simplified the vast workings of nature.

Methods: Two-liter bottles were brought in; the bottoms were cut out of all but one and the tops out of all. After being cut, the bottles were assembled and taped together. The eco-column was composed of three sections: aquatic, decomposition and terrestrial. A filter was inserted between the decomposition and terrestrial chambers to catch soil that would otherwise make its way down to the aquatic chamber. The eco-column was first assembled September 24th. For the aquatic chamber, nearly a gallon of water was brought in from local lakes, ponds, and creeks; for the decomposition and terrestrial chambers the soil was taken from a local forest. After assembling the column and adding the water and soil, we were instructed to insert rocks, sticks, and insects. Once assembly was complete, tests were done. The aquatic chamber went through various tests including turbidity, dissolved oxygen, pH and temperature, along with subjective tests such as odor and color. Observations were completed, as well as soil tests. At first the columns were tested every week, but after 3 weeks the teacher instructed the class to complete tests and observations every two weeks instead of every week. The teacher handed out aquatic plants to help with dissolved oxygen levels, and once the dissolved oxygen levels and temperature became constant and safe, fish were placed into the aquatic chamber of each eco-column. The eco-column experiment lasted around three months, from September to December, and the tests were completed five times. Dissolved oxygen and temperature were both measured with a probe placed in the water; pH was tested with pH test strips; and turbidity was measured by taking water samples from the aquatic chamber and putting them in a machine that read the level. The soil tests were completed by taking a cup of soil out of the eco-column the week before and testing it for elements such as pH, nitrogen, potassium, and phosphorus by putting samples in the directed containers, to which a powder was added to test for the specific element. The eco-column was taken down on December 3rd; the water and soil were dumped outside the school and the bottles were given to our teacher to be used again.

Results: The table below shows how the dissolved oxygen, temperature, and pH levels changed throughout the experiment. The pH levels and temperature remained fairly constant over time, with the temperature around 21 degrees Celsius and the pH neutral at about 7. The dissolved oxygen levels, however, were constantly changing.
On the first day of our experiment, September 24th, the dissolved oxygen level was 1.0. At that level the water was unsafe for marine life such as fish; barely any oxygen was circulating through the chamber. A week later the level was up to 7.6. Our teacher stated that a reading in the range of 7 is a safe number (a quick check of this rule is sketched after the discussion section below) and assured the class that they would receive plants and fish when the levels were suitable. About the second week in, she added a plant to the aquatic chamber, which really helped with the dissolved oxygen levels. Once the levels were suitable (about the third week) the fish were added, and one can see from the table that the tested fields remained fairly constant.

Water Quality (Figure 1)

The table below shows the observations of the aquatic, decomposition, and terrestrial chambers over time. When the eco-column was first assembled, the water was not in very good condition: it reeked of sewage, was yellow, and, as the chart above shows, the dissolved oxygen levels were as low as they could be. The decomposition and terrestrial habitats were no better; they smelled foul, were full of mold, and life did not survive. From the chart one can easily see that over time the conditions greatly improved, and by the end it was an ecosystem able to sustain life. By October 22nd the eco-column had greatly improved, with signs of growth, clear water, and the mold nearly gone. By the last day of the experiment there was no smell, no algae and no sign of mold. From the observations and data it is clear that the presence of plants and animals helped to improve water and soil quality; they helped to minimize bacteria and fungus while improving the air and oxygen levels.

Observations of Biomes (Table 1)

Discussion

Identify two food chains or food webs in each of your habitats (chambers). Use arrows to illustrate these food chains and food webs; complete sentences are not required. Use extra paper if needed. Aquatic chamber, decomposition chamber (top soil chamber), terrestrial chamber: on separate sheet.

Identify and briefly discuss the biogeochemical cycles which are taking place/which are present in your eco-columns. Do not merely state that "they are all present"; instead, provide more specific information. The sunlight brings in warmth, energy, and oxygen. While the animals (fish and insects) breathe in oxygen, CO2 is produced. The CO2 is then taken in by the plants and oxygen is released, and the cycle repeats.

Is your ecosystem column a closed or open system, or is it something in between? Explain how this influences the ecosystem column overall. The eco-column is in between an open and a closed system. It is closed in the sense that it is isolated from the rest of nature, and open in that it has all the regular cycles and interactions of an ecosystem, just in a smaller, confined space. Although it is technically a closed system, it behaves as an open one because it has natural cycles.

What kinds of niches are available/present for the various organisms in the column? Be specific, descriptive, and use terminology that is pertinent to the topic. The fish's niche is to clean up the algae present in the aquatic habitat, while the aquatic plant's niche is to take in the CO2 produced by the fish and produce oxygen in order to keep the fish alive and the dissolved oxygen levels high.

Discuss evidence of ecological succession taking place in your column (or in the column of another lab group if you have not observed any signs of succession in your column).
Our eco-column started out lifeless. The water was dark, the smell was unbearable, the chemical levels were high, and the dissolved oxygen levels were low. Over time the water began to clear, the smell went away, the chemical levels evened out and the dissolved oxygen levels rose. The presence of plants cleared up the water and made it livable; after the first plant, other plants were able to grow and the ecosystem was able to support life (fish).

Discuss the stability and sustainability of the ecosystem columns in the lab, including your own. After the first week my group's eco-column became stable; the levels remained constant from that point forward, and ours was also capable of sustaining life. However, not everyone's eco-columns were as stable. Several groups struggled with clearing up their water and raising their dissolved oxygen levels, and because of this they were unable to have fish. One group's water turned black due to a fungus, and eutrophication occurred.

Discuss three trends or patterns which stand out as you think back on the data which you have been recording for 6 weeks. These trends or patterns should apply to the water quality tests or other observations which you have made over this multi-week time period. Briefly discuss these three trends or patterns, providing possible explanations based on environmental science principles. My group's pH, dissolved oxygen, and temperature all follow the same pattern: they started out very low, rose quickly, dropped, and then leveled back out. Many of our terrestrial insects died, which could have affected the levels, as could a lack of sunlight.

Explain what eutrophication refers to and how this occurs. Apply this explanation to your ecosystem column. How might eutrophication take place in your column? Explain fully. Eutrophication refers to an increase in nutrients in water, such as nitrates and phosphates; it depletes the oxygen and changes the water's color. Eutrophication happened in one group's column but not ours. It could happen if nutrients from the soil in the terrestrial chamber dropped down to the aquatic chamber and polluted the water; once the water is polluted, the oxygen depletes and the water changes color and becomes unsafe.

Pick another group in your class. How do your data compare to theirs? Brainstorm some causes/reasons for any differences. Since we worked at lab stations, other groups were always around. I observed that most groups had results similar to ours: good temperatures, steady pH and dissolved oxygen levels, and rather clear water. Some groups, however, were not similar; some had bad levels, could never get their oxygen to a healthy state, and had vast amounts of mold and algae. Some eco-columns were lifeless because insects and plants were unable to survive.

Finally, address any sources of error in this lab. This should be narrated in a "cause and effect" manner and talk about specific problems. A good example would be "water did not drain from the terrestrial chamber so ..." while a bad example would be "we messed up the measuring one day." The only error my group could find in the lab was the soil test. We could never get enough soil to do the test, so our data are very scarce, and not one week could we actually complete the task. The only time we had enough soil was the last time, and the results did not seem very accurate. I believe something could be done to improve the soil test and raise its accuracy.
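As promised above, here is a minimal sketch of the dissolved oxygen check implied by the teacher's guideline that a reading around 7 is safe for fish. The threshold constant, function name and units are my own labels, not part of the original lab, and only the two readings actually reported in this write-up are used.

```python
# Hedged sketch of the dissolved-oxygen check described in the results above,
# assuming the teacher's guideline that a reading around 7 mg/L is safe for fish.
# The threshold and function name are assumptions, not part of the original lab.
SAFE_DO_MG_PER_L = 7.0  # assumed "range of 7" safety threshold

def safe_for_fish(dissolved_oxygen_mg_per_l: float) -> bool:
    """Return True if the reading meets the assumed safe level for adding fish."""
    return dissolved_oxygen_mg_per_l >= SAFE_DO_MG_PER_L

readings = {"Sept 24": 1.0, "one week later": 7.6}  # values reported above
for label, do_level in readings.items():
    status = "safe" if safe_for_fish(do_level) else "unsafe"
    print(f"{label}: DO = {do_level} mg/L -> {status} for fish")
```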
Conclusion: Before this experiment I was clueless about the various water and soil tests, as well as how to conduct them. I now feel confident that I could complete each test on my own, and I am aware of the temperature, pH level, and dissolved oxygen reading needed to sustain life. This experiment was very helpful in demonstrating how an ecosystem works and how everything depends on and plays off everything else. The eco-column gave us the opportunity to experience biogeochemical and life cycles. We learned what is necessary to sustain life, and I feel that was the most important thing learned from the eco-column experiment.

References: Botkin, D. B., & Keller, E. A. (2011). Environmental Science (8th ed.). Hoboken, NJ: John Wiley & Sons. The EcoColumn. (2013). Retrieved December 12, 2013, from the Annenberg Learner website: http://www.learner.org/courses/essential/life/bottlebio/ecocol/ EcoColumn Lab. (2013, February 7). Retrieved December 14, 2013, from the Teaching Real Science website: http://teachingrealscience.com/2013/02/07/eco-column-lab/

Monday, August 19, 2019

Character Development In Sense And Sensibility

Book Review 1: Development of Major Characters in Sense and Sensibility

The first of Jane Austen's published novels, Sense and Sensibility, portrays the life and loves of two very different sisters, Elinor and Marianne Dashwood. The contrast between the sisters' characters results in their attraction to vastly different men, sparking family and societal dramas that are played out around their contrasting romances. The younger sister, Marianne Dashwood, emerges as one of the novel's major characters through her treatment and characterization of people, her embodiment of emotion, her relationship with her mother and sisters, her openness, and her enthusiasm.

Marianne is in the jejune business of classifying people, especially men, as romantic or unromantic (Intro II). Marianne's checklist mentality is observed by Elinor: "Well, Marianne... for one morning I think you have done pretty well.... You know what he thinks of Cowper and Scott; you are certain of his estimating their beauties as he ought, and you have every assurance of his admiring Pope no more than proper." (Chapter 10) To cite a specific instance, Marianne describes her opinion of Edward Ferrars, her sister's interest, as being very amiable, yet he is not the kind of man she expects to seriously attach himself to her sister. She goes on to find what are, in her opinion, flaws: that Edward Ferrars reads with little feeling or emotion, does not regard music highly, and that he enjoys Elinor's drawing yet cannot appreciate it, for he is not an artist (15). In a man, Marianne seeks a lover and a connoisseur whose tastes coincide with her own. He must be open with his feelings, read the same books, and be charmed by the same music (15). Marianne seeks a man with all of Edward's virtues, whose person and manner must ornament his goodness with every possible charm (16). Marianne's mother addresses Marianne's impatience for a maturity beyond her years by reminding her, "Remember, my love, that you are not seventeen. It is yet too early in life to despair of such an happiness" (16). Marianne's brand of free expression sometimes has little else to recommend it (Intro, I). What is true of Marianne's classification system is true of her manners in general: in her refusal to place social decorum and propriety above her own impulses and desires, she is absolutely unbending (Intro, II). Marianne is also characterized as being very charming. For example, she believes her poetic effusions to be striking in themselves as well as accurate expressions of her inner life (Intro, VII).

Sunday, August 18, 2019

Greed

Greed is a selfish desire for more than one needs or deserves. Greed can make honest men murderers. It has made countries with rich, valuable resources into the poorest countries in the world. We are taught that it is bad and not to practice it. But consider a world without greed, where everyone is as sharing as Mother Teresa was. The progress of humankind would be at a standstill. Greed has given our society faster travel, better service, more convenience, and most importantly, progress. Greed has created thousands of billionaires and millions of millionaires. But why is greed associated with evil? In their day, most capitalists like Cornelius Vanderbilt and John D. Rockefeller were depicted as pure evil. Vanderbilt stole from the poor. Rockefeller was a snake. But the name-calling did not come from the consumers; it was the competing businesses that complained. The newspapers expanded on these comments, calling them "robber barons." These are inaccurate terms for these businessmen. They were not barons because they all started penniless, and they were not robbers because they did not take anything from anyone else. Vanderbilt got rich by making travel and shipping faster, cheaper, and more luxurious. He built bigger, faster, and more efficient ships. He served food on his ships, which the customers liked, and he lowered his costs. He lowered the New York to Hartford fare from $8 to $1. Rockefeller made his fortune selling oil. He also lowered his costs, making fuel affordable for working-class people. The working-class people, who used to go to bed after sunset, could now afford fuel for their lanterns. The people, who worked an average of 10-12 hours a day, could now have a private and social life. The consumers were happy, the workers were happy, and the businessmen were happy. Bill Gates, CEO of Microsoft Corporation, is another example of a greedy person. He is the richest man in the world with about $40 billion, and he continues to pursue more wealth. Just because he has $40 billion does not mean the rest of the world lost $40 billion; he created more wealth for the rest of the world. His software created new ways of saving time and money and created thousands of new jobs. Bill Gates got rich by persuading people to buy his product. His motive may have been greed, but to achieve that, he had to give us what we wanted.

Tree Imagery in Hurston’s Novels, Their Eyes Were Watching God and Seraph on the Suwanee

Hurston uses the fruit tree as an important image in both of the texts: the blossoming pear tree for Janie and the budding mulberry tree for Arvay. Each holds a unique meaning for its counterpart. In looking at Janie's interaction with her tree, I chose to focus on the passage on page 11, beginning with "She was stretched on her back beneath the pear tree...". For Arvay, I chose the passage on page 37, beginning with "They entered the place under the tree...". The two tree passages have many similarities and differences. The most obvious difference is that Hurston first introduces us to the pear tree with Janie alone, whereas we have our first experience of the mulberry tree with both Arvay and Jim. This in itself is symbolic of important aspects of both of the characters. For Janie, it points to her independence and strength. For Arvay, it seems to show her dependence and frailty. Another difference lies in the position and shape of the tree itself. In Their Eyes, "the gold of the sun", "t...

Saturday, August 17, 2019

Genicon: a Surgical Strike Into Emerging Markets. Essay

Genicon is a company with 10 years of domestic experience and some real international success. Genicon was successful in the USA, but it quickly realized that sustainable growth would be difficult there, because health care providers purchase medical equipment through group purchasing organizations (GPOs), and as a small company it was very hard to win a contract from GPOs, whose financial structure encourages them to purchase equipment from giant companies. So Genicon decided to go international and capture the increasing demand there. With the assistance of BSI it became the smallest company to sell its product into European markets. Genicon was already in over 30 international markets and was looking in particular at the rapidly emerging markets of Brazil, Russia, India and China as potential new opportunities for growth. So the question facing Genicon was: where should it go next?

I. Case key players/contributors:
a. Gary Haberland, president and founder of GENICON.
b. The small development team at Genicon.
c. MEDICA in Germany, a large trade show for medical devices.
d. An employee of the British Standards Institution (BSI).
e. Genicon shareholders.

II. Problem identification
a. Domestic business
i. Lack of a favorable channel in the US.
ii. High bargaining power of buyers through group purchasing (GPOs).
iii. High regulatory costs.
b. International business
i. Sales of medical devices tied to tenders, which are governed by different regulations than in the US and are only short term.
ii. Due to GENICON's limited resources it was hard to decide which country to invest in, depending on:
1. Regulation/compliance.
2. Bargaining power of buyers.

III. Suggestions
a. Uncertainty avoidance
i. Haberland and his company appear to show strong uncertainty avoidance: very structured, making conservative investments, and hesitant toward new products in medical devices.
ii. They should move toward weak uncertainty avoidance, make some riskier investments, and be more flexible and adaptable to any chaos.
b. Short- and long-term orientation
i. Almost all the contracts Haberland had with international partners are traditional and short term.
ii. Try to find international markets with a long-term orientation and sacrifice the present for the future.

IV. Recommendation
a. Should go with Brazil: internationally accepted, fast process, zero tariff, and easy regulation/compliance.

Friday, August 16, 2019

U.S. Dollar Exchange Rate And Oil Price

Both the U.S. dollar exchange rate and the oil price are key variables that shape the progress of the world economy. Fluctuations in these variables deeply affect international trade and economic activity in all countries. Determining the link between these two variables, whether they are correlated or not, is therefore a critical issue. Is there any empirical evidence on the link between them? In this paper, I begin by surveying the theoretical arguments that could explain the relationship between the U.S. dollar exchange rate and oil prices. To start with, since the oil price and oil trade are denominated in U.S. dollars, movements in the effective exchange rate of the dollar affect the price of oil as perceived by all countries outside the United States. Changes in the dollar exchange rate can therefore provoke changes in the demand for and supply of oil, which in turn change the oil price. Second, the opposite direction can also operate, i.e., oil price fluctuations trigger changes in the effective exchange rate. The reason can be found in the literature on effective exchange rates. In the model proposed by Faruqee (1995), if a country accumulates foreign assets, its effective exchange rate appreciates without harming its current account balance, because capital income absorbs the loss in trade revenues induced by the deteriorated competitiveness. A change in the oil price affects world imbalances, and the induced change in international asset positions may have an impact on the effective exchange rates of different countries. Last but not least, I draw on a collection of portfolio models, most importantly those by Golub (1983) and Krugman (1983a), which were developed to account for trade and financial interactions, such as aid and grants, between the United States, oil-producing countries and the rest of the world, especially Europe. This survey of the theoretical and empirical interactions between the two key variables leaves open every possible link between them: negative, positive, and in both directions of causality. If there are theoretical arguments for every possible link, then one must be stronger than the others, and the question is to discriminate among the alternative models by confronting them with the data. I therefore conduct an empirical study of the relationship between the dollar real effective exchange rate and oil prices over the period from 2007 to date. The prime focus is on the long-run relationship between these two vital variables. Among the possible explanations reviewed, the one involving the equilibrium exchange rate is the only one that fits the relationship found. The possible existence of a long-run relationship between the dollar effective exchange rate and the oil price presumes causality between these variables. Earlier studies point to a causal direction from oil prices to the U.S. dollar (Amano and van Norden, 1995, among others). However, there are also arguments for the opposite direction of causality, i.e., from the U.S. dollar to the oil price. In this paper, I study both types of causality and try to assess which relationship determines the observed movements.
The effective dollar exchange rate has a substantial impact on the demand for and supply of oil, since it influences the price of oil. A depreciation of the dollar reduces the price of oil in the local markets of countries whose currencies float against the dollar, such as Japan or the euro zone, while countries that have pegged their currencies to the dollar, such as China, see a neutral effect. Generally, a decrease in the dollar exchange rate reduces the oil price in the local markets of consumer countries, and this lower price eventually increases the demand for oil. In other words, dollar depreciation has a positive impact on oil demand, and this increased demand contributes to a rise in the price of oil. Oil companies use the local currencies of producer countries to pay their financial liabilities and current obligations such as wages, taxes and other operating costs. These currencies are often linked or pegged to the dollar, because most producer countries have adopted fixed exchange rate regimes (Frankel, 2003). Changes in the price of oil caused by changes in the dollar exchange rate are therefore felt less by producer countries than by consumer countries. Drilling activity is linked directly to the oil price: when the oil price increases, producer countries also increase oil production to earn additional profits. This has been confirmed by empirical studies in North America, Latin America and the Middle East, but not for African and European countries. It is worth noting that the relationship between drilling activity and the dollar oil price changed considerably after 1999, although it is hard to tell whether this change was due to the introduction of the euro in 1999 or to the fall in the oil price in 1998. A depreciation of the dollar generates inflation and reduces the income of oil-producing countries whose currencies are pegged to the dollar. Not all countries are affected in the same way: countries that import mainly from the USA, like many OPEC members, are less affected than countries that import from Europe or Asia. Overall, a depreciation of the dollar may reduce the supply of oil. In the short run, supply is only weakly elastic to price in both directions: the weak upward flexibility is due to production constraints, and the weak downward flexibility is due to very small marginal costs. Demand is also inelastic in the short run because few substitutes are available (Carnot and Hagege, 2004). In short, the demand for and supply of oil are almost inelastic in the short run. Noticeable changes in supply and demand are mainly observable over the long term: supply becomes more elastic because new investment is possible, and demand becomes more elastic because close substitutes become available. In general, a depreciation of the dollar effective exchange rate increases the demand for and supply of oil significantly only in the long run, which tends to increase the oil price. The early 2000s are an excellent illustration of this mechanism.
Hagege and Carnot (2004) underlined that the increase in oil prices stemmed from two simultaneous factors: on the one hand, an underestimation of the strong demand for oil from the United States and China; on the other hand, declining investment in the oil sector, which caused stagnation in the expansion of oil supply capacity. Yet even if this demand-and-supply mechanism correctly explains the situation of the 2000s, it is unable to account for the relationship found in several empirical studies. There are several reasons to believe that the oil price could affect the dollar effective exchange rate. The most frequent explanation is that oil-producing countries prefer financial investments in dollars (Amano & van Norden, 1993, 1995). In this framework, a surge in the oil price boosts the wealth of the oil-producing countries, which in turn increases the demand for dollars. Another explanation can be found in models such as Faruqee (1995) and the BEER model proposed by Clark and MacDonald (1998). In this approach, two explanatory variables are frequently used for the exchange rate: net foreign assets and the terms of trade. A quick first reasoning suggests a negative relation between the oil price and the dollar exchange rate: an oil price increase should deteriorate the United States' terms of trade, leading to a depreciation of the dollar. A more comprehensive explanation, which takes into account the relative effect on the United States compared with its trading partners, would allow for the positive relationship usually found in the literature. If the United States were the dominant oil importer, an oil price increase would worsen its situation; however, if the US imports less than other economies such as Japan or the euro zone, its position may well improve relative to those countries. In that case, an increase in the oil price would lead to an appreciation of the dollar against the yen and the euro, and ultimately to an appreciation of the dollar in effective terms. In the approach proposed by Krugman (1983a), a dynamic framework is used to model how producer countries spend the revenue from their oil exports, which is denominated in dollars; the resulting changes in the demand for dollars then affect the dollar exchange rate. The proposed model can be expressed as:

X = cY

where X is the oil price denominated in dollars, Y is the effective exchange rate of the dollar, and c is the correlation coefficient. This model helps determine whether the correlation between the oil price and the effective dollar exchange rate is positive, negative or neutral, and it also captures the short-term and long-term impact of the oil price on the effective exchange rate of the dollar and vice versa. This empirical study uses monthly data on the oil price denominated in U.S. dollars. Oil prices are expressed in real terms, and the dollar exchange rate is the effective exchange rate. The study tests its hypotheses at the 5% level of significance. The hypotheses to be tested are as follows:

H0: There is no correlation between the oil price and the effective exchange rate of the dollar.
H1: There is a correlation between the two variables.
H0: There is a negative correlation between the two variables.
H1: There is a positive correlation between the oil price and the effective exchange rate.

The above hypotheses are tested with the Spearman rank correlation using SPSS, a well-known statistical package. Data for these variables were collected from several sources, such as the Central Bank of Germany, Datastream and Economagic, which maintain monthly averages of the oil price, the effective exchange rate and international gold prices. The sample consists of 42 observations for each series. Oil prices and gold prices are denominated in U.S. dollars. A simple inspection of the raw data suggests a positive relation between the oil price and the effective dollar exchange rate.

Testing. The hypotheses were tested in SPSS v.16 using the Spearman rank correlation, a non-parametric technique. The results show a moderate positive correlation between the oil price and the effective exchange rate of the dollar: the correlation coefficient is 0.316, meaning that a 1 per cent increase in the oil price is associated with a 0.316 per cent increase in the effective dollar exchange rate. The oil price shows more variability than the exchange rate.

(Graph: Oil Price and Exchange Rate.) The graph of the original data shows a general positive trend between the two variables over the period from January 2007 to October 2010, and it also reveals greater variability in the oil price than in the exchange rate. The variables are labelled OP for the oil price and ER for the effective exchange rate of the U.S. dollar.

The tabulated results also show a slight negative correlation between the oil price and the gold price: if the oil price increases by 1 per cent, the gold price decreases by 0.05 per cent. In addition, there is a slight positive correlation between the gold price and the exchange rate: a 1 per cent increase in the exchange rate is associated with a 0.085 per cent increase in the gold price.

Conclusion. In this paper I have tried to find the link between the U.S. dollar effective exchange rate and real oil prices. The study focuses mainly on these two variables, although one other factor, the gold price, was later included to help identify the corresponding relations between the variables. The study shows that there is a significant relation between real oil prices and the effective exchange rate. In the short run the results may point the other way, but in the long run they support earlier studies, which concluded that there is a positive relationship between the oil price and the effective dollar exchange rate. The fluctuation in the oil price is far more intense than the fluctuation in the exchange rate, a phenomenon apparent both in the test results and in the graphs.
The speed of adjustment of the effective exchange rate is lower than that of the oil price. The results also suggest that an increase in the oil price increases the net foreign assets of the United States. Countries whose currencies are pegged to the U.S. dollar suffer less from an increase in the oil price, while countries under floating exchange rate regimes are affected more. The results also point to an important fact: the United States has enjoyed the benefits of cheap, oil-based energy for more than half a century, because oil is priced and traded worldwide in U.S. dollars. An increase in the oil price increases the demand for the dollars needed to buy the same quantity of oil; this increased demand affects the exchange rates of other countries against the dollar and raises the import bills of consumer countries, while producer countries enjoy the benefits of greater wealth.
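As a closing illustration, here is a minimal sketch of the Spearman rank correlation used in the testing section above. The study ran the test in SPSS; scipy provides the same statistic. The monthly series below are invented placeholders rather than the 42 observations actually used, so they will not reproduce the reported coefficient of 0.316.

```python
# Hedged sketch: Spearman rank correlation between the monthly oil price (OP)
# and the dollar effective exchange rate (ER), as computed in SPSS above.
# The series are hypothetical placeholders, not the study's 42 observations.
from scipy.stats import spearmanr

oil_price     = [55, 58, 61, 57, 64, 67, 70, 68, 74, 79, 77, 83]   # hypothetical OP
exchange_rate = [78.1, 78.5, 78.9, 78.3, 79.2, 79.6,
                 80.1, 79.7, 80.6, 81.2, 80.9, 81.5]                # hypothetical ER

rho, p_value = spearmanr(oil_price, exchange_rate)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```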