
Education and Economics Essay

I. Introduction

The conventional theory of human capital developed by Becker (1962) and Mincer (1974) views education and training as the major sources of human capital accumulation which, in turn, have a direct and positive effect on individuals' lifetime earnings. In the Mincerian earning function, the coefficient on school years indicates the return to education, i.e., how much earnings rise with an additional year of schooling. A wide range of literature estimates the rates of return to education for different countries [Psacharopoulos (1980; 1985; 1994); Psacharopoulos and Chu Ng (1992)].[1]

In Pakistan, most nationally representative household surveys do not contain information on variables such as completed years of schooling, age at starting school, literacy and numeracy skills, quality of schooling, and technical training. Without completed school years, one can neither compute potential experience nor observe the effect of an additional year of schooling on individual earnings. The available literature in Pakistan therefore lacks estimates of the returns to education based on the Mincerian earning function.[2] In recent years, the government of Pakistan has started a nation-wide survey, the Pakistan Integrated Household Survey (PIHS), to address the imbalances in the social sector. This survey provides rich information on the above-mentioned variables that were missing in earlier household surveys. This study uses the PIHS data to examine the returns to education using the Mincerian earning function and thus aims to fill the vacuum that, due to the lack of appropriate data, exists in the literature on returns to education in Pakistan.

In this paper we first estimate the earning function with continuous school years under the assumption of a uniform rate of return for all school years. Because different school years impart different skills, we then extend the analysis to examine the addition to earnings associated with extra years of schooling at different levels of education, i.e., how much earnings increase with an extra year of schooling at the primary, middle, matric, intermediate, bachelor's and master's levels. By doing so we overcome a limitation of the available literature in Pakistan; to our knowledge, no study has yet adopted this method to examine the returns to education in Pakistan.[3] The impact of technical training and school quality on the earnings of fixed salaried and wage earners is also examined in this study.

[*] The authors are Senior Research Economist and Research Economist at the Pakistan Institute of Development Economics (PIDE), Islamabad.
[1] Psacharopoulos (1994) provides a comprehensive update of the estimated rates of return to education at a global scale. He observed high social and private profitability of primary education (18% and 9% respectively) in all regions of the world. The private rate of return at this level was found to be highest in Asia (39%) compared to other regions. He also noted a considerable increase in total earnings from an additional year of education in all regions of the world: 13% in Sub-Saharan Africa; 10% in Asia; 12% in Europe/Middle East/North Africa; and 12% in Latin America/Caribbean.
[2] At the national level, only two studies in Pakistan have used the Mincerian earning function approach to examine the returns to education [see Shabbir and Khan (1991) and Shabbir (1994)]. However, both studies are based on a twenty-year-old data set.
Based on the available data in Pakistan, most studies, for example Haque (1977), Hamdani (1977), Guisinger et al. (1984), Khan and Irfan (1985), Ahmad et al. (1991), and Ashraf and Ashraf (1993a, 1993b, 1996), estimated earning functions by defining dummy variables for different levels of education.[4] These studies observe low rates of return at different levels of education compared to other developing countries. However, a positive association between levels of education and earnings, and an inverse relationship between the degree of income inequality and educational attainment, have been noted. In order to examine inter-provincial differentials in returns to education, Shabbir and Khan (1991) estimated the Mincerian earning function using a nationally representative sample of literate male wage earners and salaried workers drawn from the Population, Labour Force and Migration Survey (1979). Later, Shabbir (1994) estimated the earning function on an extended sample of the same data set. These studies found a 7 to 8 percent increase in earnings with an additional year of schooling. Although the results are consistent with those of comparable LDCs, they may not reflect recent developments in Pakistan's economy, as these studies are based on a data set that is now 20 years old. Since 1979, the economy of Pakistan has passed through various changes, especially after the inception of the Structural Adjustment Programme in the late 1980s.

[3] Most studies on returns to education in Pakistan used dummy variables for different levels of education, where the rates of return at each level are computed from the estimated coefficients.
[4] In Pakistan, the data on education in most nationally representative household surveys have been reported in discrete form that denotes the completion of different levels of education, such as 'primary but incomplete middle', 'middle and incomplete matric', and so on.
For example, the literacy rate has increased from 26 percent to 45 percent and enrolment at the primary level has increased by 67 percent. Public and household expenditures on education have also increased [Economic Survey (1998-99)]. Moreover, due to fiscal constraints, employment opportunities in the public sector have started shrinking, and in recent years the economy has been moving towards more openness with a stronger role for the private sector. In this scenario it becomes imperative to re-test the role of human capital, as both the private and public sectors are moving towards greater efficiency and productivity.

This study is important from three standpoints. First, in order to estimate the effect of education on earnings, it uses the most recent nationally representative household survey data, which provide detailed information on the variables that were missing in previous surveys. Second, it uses splines of education in the earning function to examine the additional earnings associated with extra school years at different levels. Third, it investigates the role of some important factors, such as technical training, school quality, and literacy and numeracy skills, on earnings for the first time.

The rest of the paper is organised as follows: Section 2 presents an overview of the education sector. Section 3 outlines the model for empirical estimation and describes the data. Section 4 reports the results. Conclusions and policy implications are presented in the last section.

II. The Education Sector in Pakistan: An Overview

Education plays an important role in human capital formation. It raises the productivity and efficiency of individuals and thus produces skilled manpower capable of leading the economy towards the path of sustainable economic development. Like many other developing countries, the situation of the education sector in Pakistan is not very encouraging.
Low enrolment rates at the primary level, wide disparities between regions and genders, a lack of trained teachers, a deficiency of proper teaching materials, and the poor physical infrastructure of schools indicate the poor performance of this sector. The overall literacy rate for 1997-98 was estimated at 40 percent: 51 percent for males and 28 percent for females; 60 percent in urban areas and 30 percent in rural areas. These rates are still among the lowest in the world. Due to various measures taken in recent years, enrolment rates have increased considerably. However, the high drop-out rate at the primary level could not be controlled. Moreover, under-utilisation of the existing educational infrastructure can be seen in the low student-institution ratio (almost 18 students per class per institution), the low teacher-institution ratio (2 teachers per institution), and the high student-teacher ratio (46 students per teacher).

The extremely low level of public investment is the major cause of the poor performance of Pakistan's education sector. Public expenditure on education remained less than 2 percent of GNP before 1984-85. In recent years it has increased to 2.2 percent. In addition, the allocation of government funds is skewed towards higher education, so that the benefits of the public subsidy on education are largely reaped by the upper income class. Many of the highly educated go abroad either for higher education or in search of better job opportunities. Most of them do not return, causing a large public loss. Since the mid-1980s, each government has announced special programs for the improvement of the education sector. However, due to political instability, none of these programs could achieve its targets. The Social Action Program (SAP) was launched in the early 1990s to address the imbalances in the social sector.
This program aims to enhance education; to improve the school environment by providing trained teachers, teaching aids and quality textbooks; and to reduce gender and regional disparities. Phase I of SAP (1993-96) has been completed and Phase II is in progress. The gains from Phase I are still debatable, because the rise in the enrolment ratio has not been confirmed by independent sources. Irrespective of this outcome, the government has started work on Phase II of SAP. In this phase, in addition to promoting primary and secondary education, the government is paying special attention to promoting technical and vocational education, expanding higher education in the public as well as the private sector, enhancing computer literacy, promoting scientific education, and improving the curriculum for schools and teacher training institutions.

Due to low levels of educational attainment and the lack of technical and vocational education, Pakistan's labour market is dominated by less educated and unskilled manpower. The considerable rise in the number of educational institutions and in enrolment after the 1980s is not yet reflected in Pakistan's labour market. This might be due to the fact that most bachelor's and master's degree programmes emphasise academic education without developing specific skills. The sluggish demand for the graduates of these programs in the job market leads to unemployment among the educated, and the job market remains dominated by the less educated. In this scenario, it becomes important to explore the role of education in the economic benefit of individuals.

III. Theoretical Model and Estimation Methodology

We start with the human capital model developed by Becker (1964) and Mincer (1974), where the natural logarithm of monthly earnings is a linear function of completed school years, experience, and experience squared. In mathematical form the equation can be written as:

    ln Wi = β0 + β1 EDUi + β2 EXPi + β3 (EXPi)^2 + Ui    (1)

where ln Wi stands for the natural logarithm of monthly earnings, EDUi represents completed years of schooling, and EXPi is the labour market experience of the ith individual. β1 gives the marginal rate of return to schooling. A positive value of β2 and a negative value of β3 reflect the concavity of the earning function with respect to experience. Ui is the error term, assumed to be normally and identically distributed.

It has been argued in the literature that different school years impart different skills and hence affect earnings differently. Therefore, it is misleading to assume a uniform rate of return for all educational levels. Most previous studies used dummy variables to capture the effect of different levels of education. In order to examine the effect of school years at different levels of education, van der Gaag and Vijverberg (1989) divided the years of schooling according to the school system of Cote d'Ivoire. Similarly, Khandker (1990) used years of primary, secondary and post-secondary schooling in a wage function for Peru. Both studies found significant differences in the returns to education at different levels.

Following van der Gaag and Vijverberg (1989), we divide the school years into six categories according to the education system of Pakistan. In Pakistan, primary education consists of 5 years of schooling; middle requires 3 more years; and by completing 2 more years of schooling after middle, an individual obtains a secondary school certificate, i.e., Matric. After matric, i.e., 10 years of schooling, students have a choice between technical and formal education. Technical education can be obtained from technical institutions, which award a diploma after 3 years of education, while the certificate of intermediate can be obtained after two years of formal education.
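To make the baseline specification in equation (1) concrete, the sketch below fits it by ordinary least squares on a small synthetic sample. This is illustrative only, not the authors' code: the data are generated with an assumed 7 percent return per school year, whereas the actual estimates in this paper come from the PIHS micro data.

```python
# Illustrative OLS estimation of ln W = b0 + b1*EDU + b2*EXP + b3*EXP^2 + u
# on synthetic data (assumed b1 = 0.07), using only the standard library.
import random

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, k))) / xtx[r][r]
    return b

# Synthetic workers: log wage built from education and a concave experience profile.
random.seed(1)
rows, lnw = [], []
for _ in range(2000):
    edu = random.randint(0, 16)
    exp = random.randint(0, 40)
    rows.append([1.0, edu, exp, exp * exp])
    lnw.append(6.0 + 0.07 * edu + 0.06 * exp - 0.001 * exp * exp
               + random.gauss(0, 0.05))

b0, b1, b2, b3 = ols(rows, lnw)
print(round(b1, 2))  # recovers a value close to the assumed 0.07
```

In practice one would use a regression library rather than hand-rolled normal equations; the point here is only the shape of the specification.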
After completing the intermediate certificate, students can enter either a professional college for four years or a non-professional bachelor's degree program of two years in a college. Those who choose a non-professional degree can pursue a master's in a university for two more years. At this stage, the graduates of professional and non-professional colleges have completed 16 years of education. They can then proceed to M.Phil. or Ph.D. degrees. In order to examine the returns to education at different splines of education, we estimate the following extended earning function:

    ln Wi = β0 + β1 YrsPrimi + β2 YrsMidi + β3 YrsMati + β4 YrsInteri + β5 YrsBAi
            + β6 YrsProfi + β7 EXPi + β8 (EXPi)^2 + Ui    (2)

where YrsPrim, YrsMid, YrsMat, YrsInter, YrsBA and YrsProf are defined as:

    YrsPrim  = D5·EDUi     where D5  = 1 if 0 < EDU <= 5
    YrsMid   = D8·EDUi     where D8  = 1 if 5 < EDU <= 8
    YrsMat   = D10·EDUi    where D10 = 1 if 8 < EDU <= 10
    YrsInter = D12·EDUi    where D12 = 1 if 10 < EDU <= 12
    YrsBA    = D14·EDUi    where D14 = 1 if 12 < EDU <= 14
    YrsProf  = D16·EDUi    where D16 = 1 if EDU > 14

The coefficients on YrsPrim, YrsMid, YrsMat, YrsInter, YrsBA and YrsProf in equation (2) give the increase in income from a one-year increase in education at the respective level. For example, the return to five completed years of education at the primary level will be 5·β1. Similarly, the returns to six, seven and eight years of education will be 5·β1+β2, 5·β1+2·β2, and 5·β1+3·β2 respectively. Along the same lines, we can compute the returns to education at each level as:

    Returns to Primary      = 5·β1
    Returns to Middle       = 5·β1 + 3·β2
    Returns to Matric       = 5·β1 + 3·β2 + 2·β3
    Returns to Intermediate = 5·β1 + 3·β2 + 2·β3 + 2·β4
    Returns to Bachelor's   = 5·β1 + 3·β2 + 2·β3 + 2·β4 + 2·β5
    Returns to MA/Prof      = 5·β1 + 3·β2 + 2·β3 + 2·β4 + 2·β5 + 2·β6

The data are drawn from the nationally representative Pakistan Integrated Household Survey 1995-96.
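The spline construction and the cumulative level-wise returns above can be sketched as follows. The coefficients used at the end are round illustrative values (close to, but not, the paper's estimates).

```python
# Sketch of the spline variables in equation (2) and the cumulative
# returns formulas (Returns to Middle = 5*b1 + 3*b2, etc.).
def splines(edu):
    """Map completed school years onto the six levels: the variable for the
    bracket containing EDU equals EDU itself; all others are zero."""
    cuts = [(0, 5), (5, 8), (8, 10), (10, 12), (12, 14), (14, 99)]
    return [edu if lo < edu <= hi else 0 for lo, hi in cuts]

def level_returns(b):
    """Cumulative returns by level, given spline coefficients b1..b6."""
    years = [5, 3, 2, 2, 2, 2]   # years spent at each successive level
    out, total = [], 0.0
    for yrs, coef in zip(years, b):
        total += yrs * coef
        out.append(total)
    return out

# Hypothetical coefficients for illustration only:
b = [0.03, 0.04, 0.05, 0.06, 0.07, 0.08]
print(splines(6))          # a worker with 6 years of schooling: middle spline is active
print(level_returns(b))    # e.g. returns to middle = 5*0.03 + 3*0.04 = 0.27
```

With these round numbers the return to completed middle schooling comes out at 27 percent, which is the kind of cumulative figure reported in the results section.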
In order to assess the performance of the Social Action Programme (SAP), the government of Pakistan launched the series of Pakistan Integrated Household Surveys (PIHS), a collaborative nation-wide data collection effort undertaken by the Federal Bureau of Statistics (FBS). So far two rounds have been completed. The first round of the PIHS differs from the other round on two counts. First, information on employment and wages is available only in this round. Second, only 33 percent of the sample used in the first round is being repeated in subsequent rounds. This implies that the rounds are independent cross-section data sets and cannot properly be linked with each other for use as panel data. Therefore, the appropriate sample can only be drawn from the first round of the PIHS. This round was conducted in 1995-96 and covers 12,622 households and more than 84,000 individuals.

The 1995-96 PIHS provides detailed information on completed school years.[5] In addition, this survey contains information on the age at which an individual started school. This information is particularly important for our study in calculating the potential experience of a worker. The indicator of experience used by Mincer (1974) is a good proxy for U.S. workers, as they start school at the uniform age of six years.[6] However, this assumption does not hold in Pakistan, where there is no uniform school starting age. In urban areas, children as young as three years start going to school, whereas in rural areas the school starting age is higher.[7] This information enables us to construct potential experience as (age - school years - age starting school).

[5] This is the only nation-wide data set that provides this particular information. Similarly, no other survey contains information on public and private school attendance and the year of starting school.
[6] Mincer defined experience as (age - education - 6).
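The difference between the experience measure used here and Mincer's proxy is simple arithmetic; a minimal sketch (with made-up example values) is:

```python
# Potential experience as defined in this study versus Mincer's proxy,
# which assumes everyone starts school at age six.
def pihs_experience(age, school_years, age_started_school):
    """Experience = age - school years - age starting school."""
    return age - school_years - age_started_school

def mincer_experience(age, school_years):
    """Mincer's proxy: experience = age - education - 6."""
    return age - school_years - 6

# A hypothetical rural worker who started school at age 8 with 10 years of schooling:
print(pihs_experience(34, 10, 8))   # 16
print(mincer_experience(34, 10))    # 18 -- overstated when school entry is late
```

The gap between the two measures grows with how far actual school entry deviates from age six, which is why the PIHS starting-age variable matters in Pakistan.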
Although experience is still a proxy for actual experience, it is a relatively better measure than age or the Mincer-type potential experience.

In addition to education and experience, various other factors, such as quality of schooling and technical training, have a significant impact on earnings.[8] It has been argued that, because of the market-oriented approach adopted by private schools, the graduates of these schools earn more than the graduates of public schools.[9] According to Sabot (1992), Behrman, Ross, Sabot and Tropp (1994), Alderman, Behrman, Ross and Sabot (1996a, 1996b), and Behrman, Khan, Ross and Sabot (1997), the quality of education has a positive, significant and substantial impact on cognitive achievement and hence on post-school productivity, as measured by earnings. These studies observed higher earnings for the graduates of high-quality schools than for those who attended a low-quality school. A recent study by Nasir (1999) found considerably higher earnings for private school graduates. These schools, however, charge higher fees. "Estimates of average annual expenditure per pupil in both government and private schools indicate that the total cost of primary level in rural areas is Rs. 437 (Rs. 355 for government schools and Rs. 1252 for private schools), compared with Rs. 2038 in urban areas (Rs. 1315 for government and Rs. 3478 for private schools). This means that the cost of primary schooling is almost three times that of public schools in urban areas and nearly four times in rural areas. The differences in cost of schooling also reflect the degree of quality differentials between public and private schools, and between urban and rural schools. A relatively better provision of school facilities and quality of education in private schools is causing a continuous rise in school enrolment in urban areas" [Mehmood (1999), page 20].

The PIHS provides information on the type of school attended.[10] On the basis of this information we can identify workers according to the school they attended and therefore examine the effect of type of school on individual earnings. In order to capture the quality of education an individual received, a dummy variable is included in the model that takes the value 1 if the individual is a graduate of a private school and 0 otherwise.

The effect of post-school training on earnings has been found to be positive and substantial in many developing countries [see Jimenez and Kugler (1987); van der Gaag and Vijverberg (1989); Khandker (1990); and Nasir (1999)]. The PIHS contains information on years of technical training, which allows us to examine the effect of technical training on individual earnings. We use completed years of technical training as an independent variable in the earning function.

[7] The issue of age starting school was highlighted by Ashraf and Ashraf (1993); because of the non-availability of this information, they used age as a proxy for experience.
[8] See Summers and Wolf (1977); Rizzuto and Wachtel (1980); Behrman and Birdsall (1983); Boissiere, Knight and Sabot (1985); Knight and Sabot (1990); Behrman, Ross, Sabot and Tropp (1994); Behrman, Khan, Ross and Sabot (1997).
[9] Various studies found private schools effective in imparting cognitive skills [Coleman, Hoffer and Kilgore (1982); and Jimenez, Lockheed, Luna and Paqueo (1989)]. For Pakistan, Sabot (1992), Behrman, Ross, Sabot and Tropp (1994), Alderman, Behrman, Ross and Sabot (1996a, 1996b), and Behrman, Khan, Ross and Sabot (1997) found significant variation in cognitive skills among children with the same number of school years. These studies conclude that some of the differences are due to family characteristics while some are due to the quality of schooling.
The existence of a vast gender gap in human capital accumulation is evidenced by various studies in Pakistan.[11] The PIHS reports vast gender disparities in literacy and enrolment rates. The literacy rate among females is half that of males for Pakistan as a whole, and the gap widens to three-fold in rural areas. The gender difference is smaller for the gross enrolment rate at the primary level but increases at higher levels of education. Similarly, a vast gender gap has been observed in the returns to education, where male workers earn more than female workers [Ashraf and Ashraf (1993a, 1993b, 1996) and Nasir (1999)]. In order to capture the effect of gender, a dummy variable is introduced in the model that takes the value 1 for males and 0 otherwise.

Regional imbalances in the provision of the limited available social services are pronounced in Pakistan. Rural areas are not only underdeveloped in terms of physical infrastructure but also neglected in the provision of basic amenities. Haq (1997) calculated a disaggregated human development index for Pakistan and its provinces. He noted that nearly 56 percent of the population of Pakistan is deprived of the basic amenities of life: 58 percent in rural areas and 48 percent in urban areas. According to the 1995-96 PIHS, the literacy rate is 57 percent in urban areas and 31 percent in rural areas. The gross enrolment rate was 92 percent in urban areas and 68 percent in rural areas.

[10] The coefficient of private school may also capture the effect of the socio-economic background of workers. The data, however, do not contain such information; therefore we are unable to separate the effect of parental characteristics from the effect of private schools on workers' earnings.
[11] Sabot (1992); Alderman, Behrman, Ross and Sabot (1996b); Sawada (1997); Shabbir (1993); and Ashraf and Ashraf (1993a, 1993b, and 1996).
Because of these differences, lower returns to education are observed in rural areas [Shabbir (1993, 1994) and Nasir (1999)]. To capture the effect of regional differences, a dummy variable is used that takes the value 1 if the individual lives in an urban area and 0 otherwise.

The four provinces of Pakistan exhibit different characteristics in terms of economic as well as social and cultural values. Significant provincial differentials in the rates of return to education have been noted, reflecting not only differences in market opportunities but also the uneven expansion of social services across provinces [Khan and Irfan (1985); Shabbir and Khan (1991); Shabbir (1993); Shabbir (1994); and Haq (1997)]. The effects of these differences are captured through dummy variables for each province in the earning function, Sindh being the excluded category.

For the purpose of analysis we restrict our sample to wage earners and salaried persons. Our sample contains 4828 individuals: 4375 males and 453 females. Table 1 presents descriptive statistics for the important variables. According to table 1, the average age of the individuals in the sample is 34 years, with 18 years of experience. A typical worker in the sample has completed approximately 10 years of education. A majority graduated from public schools, and most of the workers live in urban areas. On average, an individual earns Rs. 3163 per month. Only 22 percent of the individuals in our sample received technical training, and the average years spent in training are less than one. A majority of wage earners belong to Punjab, followed by Sindh and Balochistan.

Table 1: Mean, Standard Deviation and Brief Definitions of Important Variables

    Variable     Mean     SD       Definition
    W            3163.34  3397.39  Individual's monthly earnings in rupees (wages and salaries).
    Age          34.07    12.36    Age of the individual in years.
    EDU          9.53     4.36     Completed years of schooling.
    EXP          18.14    11.80    Total years of labour market experience, calculated as (age - school years - age starting school).
    RWA          2.37     1.07     Categorical variable with 4 categories of literacy and numeracy.
    MALE         0.91     0.29     Dichotomous variable equal to 1 if the individual is male.
    Urban        0.60     0.49     Dichotomous variable equal to 1 if the individual belongs to an urban area.
    Private      0.04     0.19     Dichotomous variable equal to 1 if the individual is a graduate of a private school.
    Training     0.35     0.87     Completed years of technical training.
    Punjab       0.38     0.49     Dichotomous variable equal to 1 if the individual belongs to Punjab.
    Sindh        0.31     0.46     Dichotomous variable equal to 1 if the individual belongs to Sindh.
    NWFP         0.15     0.36     Dichotomous variable equal to 1 if the individual belongs to NWFP.
    Balochistan  0.16     0.36     Dichotomous variable equal to 1 if the individual belongs to Balochistan.

IV. Empirical Results

The estimated results of equation (1) and equation (2) are reported in table 2. The highly significant coefficients on school years and experience indicate the applicability of the human capital model to Pakistan. An additional year of schooling raises an individual's monthly income by 7.3 percent, which is very close to prior studies.[12][13] The coefficient on experience shows a substantial increase in wages with each additional year, and the concavity of the age-earnings profile is evident from the negative and significant coefficient on experience squared. The results reveal that an individual with five years of experience earns 31 percent higher wages than a worker with no experience. The highest level of earnings is reached at approximately 30 years of experience. These estimates are relatively low compared to prior studies.[14]

The positive and significant coefficients on the gender (0.401) and regional (0.178) dummies support the a priori expectation that males earn more than females and that earnings are higher in urban areas than in rural areas. These estimates are consistent with earlier studies [see Ashraf and Ashraf (1993); Khan and Irfan (1985)]. Furthermore, significant inter-provincial differences in individual earnings can be observed in the estimated model.

Many studies indicate substantial differences in earnings across school levels. For example, van der Gaag and Vijverberg (1989) noted that an increase of one year in elementary, high and university education raises earnings by 12 percent, 20 percent and 22 percent respectively. In order to examine the returns to education across different school years, we include the information on schooling according to the education system of Pakistan (equation 2).

[12] The estimated coefficients of school years in Shabbir and Khan (1991), Shabbir (1991), Shabbir (1993) and Shabbir (1994) lie in the range of 6 percent to 9.7 percent.
[13] The returns to education are calculated by taking the anti-log of the estimated coefficient of completed school years and subtracting 1; multiply by 100 to convert to a percentage. For details see Gujarati (1988), page 149.
[14] The difference in the returns to experience could be due to the approach adopted by these studies. Most studies used age as a proxy for experience [see for example Khan and Irfan (1985); Ashraf and Ashraf (1993); and Nasir (1999)]. Shabbir (1991) used the Mincerian approach to calculate experience. The present study uses the actual age of starting school and actual years of education, which enables us to calculate total years of labour market experience. This approach is also not a perfect alternative for actual experience, as we do not have information about the start of the first job, but compared with other approaches it measures experience more precisely.
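Two pieces of arithmetic in this section can be checked directly: the conversion of a log-wage coefficient into a percentage return (footnote 13), and the experience level at which the concave profile peaks. These are my own calculations applied to the reported coefficients, not part of the original estimation.

```python
# Converting a log-wage coefficient to a percentage return, and finding
# the turning point of the quadratic experience profile.
import math

def pct_return(coef):
    """Percentage return implied by a log-wage coefficient: (e^b - 1) * 100."""
    return (math.exp(coef) - 1) * 100

def peak_experience(b_exp, b_exp2):
    """Turning point of b_exp*EXP + b_exp2*EXP^2, i.e. -b_exp / (2*b_exp2)."""
    return -b_exp / (2 * b_exp2)

print(round(pct_return(0.072), 1))           # 7.5 -- near the 7.3 percent in the text
print(round(peak_experience(0.058, -0.001))) # 29 -- near the ~30 years reported
```

With the table's coefficients (0.058 and -0.001), earnings peak at about 29 years of experience, consistent with the "approximately 30 years" statement.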
The results reported in column (2) of table 2 show a positive and significant impact of school years at each educational level on earnings. For example, an increase of one year in education at the primary level increases earnings by 3 percent. Similarly, at the middle level, one year of schooling brings about an increase of 4 percent in earnings, and the total returns to schooling at the middle level are 27 percent.

Table 2: Earning Function with and without Levels of Education

    Variable      (1) Coef.  t-ratio   (2) Coef.  t-ratio   (3) Coef.  t-ratio
    Constant      6.122      148.91    6.380      92.03     6.342      89.25
    EDU           0.072*     46.71     -          -         -          -
    EXP           0.058*     26.49     0.058*     23.85     0.058*     23.84
    EXP2          -0.001*    -19.20    -0.001*    -16.84    -0.001*    -16.88
    Urban         0.178*     10.31     0.150*     7.87      0.152*     7.98
    Male          0.401*     13.98     0.264*     8.15      0.262*     8.09
    Balochistan   0.127*     4.94      0.098*     3.40      0.096*     3.32
    NWFP          -0.113*    -4.34     -0.112*    -4.06     -0.108*    -3.91
    Punjab        -0.203*    -10.21    -0.166*    -7.75     -0.164*    -7.63
    RWA           -          -         -          -         0.052*     2.41
    Yrs-Prim      -          -         0.027**    2.03      0.007      0.45
    Yrs-Mid       -          -         0.040*     5.07      0.025*     2.45
    Yrs-Mat       -          -         0.050*     8.69      0.038*     5.02
    Yrs-Inter     -          -         0.057*     11.41     0.047*     7.28
    Yrs-BA        -          -         0.071*     16.85     0.063*     11.47
    Yrs-Prof      -          -         0.082*     21.98     0.075*     15.57
    Adj R2        0.412      -         0.429      -         0.429      -

    * significant at the 99 percent level. ** significant at the 95 percent level.

Higher returns to an additional year of schooling at higher educational levels can be noted from this table. For example, the returns to master's and professional education (Yrs-Prof) are more than five times those of primary school years (Yrs-Prim). The results exhibit a difference of 15 percent between primary graduates and illiterates, the excluded category. This category includes illiterates as well as all those who have not obtained any formal schooling but have literacy and numeracy skills.[15]
To further explore the earning differential between primary school graduates and those who never attended school but have literacy and numeracy skills, we have constructed an index RWA that separates illiterates from those who have literacy and numeracy skills. This index takes the value ‘zero’ if individual does not have any skill; ‘1’ if individual has only one skill; ‘2’ if individual has two skills; and ‘3’ if individual has all three skills. We re-estimated equation 2 with this new variable and the results are reported in column 5 of table 2. According to our expectations, the coefficient of RWA is found not only large (0. 05) in magnitude but also statistically significant at 99 percent level. This indicates that the individuals with all three skills earn 15 percent more than those who have no skill. On the other hand, the coefficient of Yrs-Prim dropped to 0. 007 and became insignificant16. The differential in the earnings of illiterates and those having five years of primary education was 15 percent (0. 03*5=0. 15). This differential however, reduced to approximately 9 percent (0. 007*5+0. 053=8. 8) when we include those who have no formal education but have literacy and numeracy skills. These high returns to cognitive skills indicates the willingness of employer to pay higher wages to the able workers as compared to those who have five or less years of schooling but do not have these skills. Now we examine the effect of technical training and quality of schooling on earnings, first in separate equations and then in a single equation. The impact of technical training on earnings is examined by including years of apprenticeship as continuous variable in our model. The results are reported in column 1 of table 3. The results show a positive and significant impact of technical 15 There are 48 wage earners in our sample who have education less than primary but do not have any of these skill. 
Whereas we found 76 wage earners who do not have any formal education but have at least one of these skills. 16 This result is consistent with van der Gaag and Vijverberg (1989).

Table 3: Earning Functions: Impact of Technical Training and School Quality (Separate Functions)

Variables: Constant, EDU, EXP, EXP2, Urban, Male, Balochistan, NWFP, Punjab, Train.
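The coefficients discussed above come from ordinary least squares estimation of the Mincerian earning function. The sketch below is an illustration only: it uses synthetic data, not the PIHS sample, and the parameter values are borrowed from the column-1 estimates of Table 2 purely so the recovered coefficient is recognizable. It shows how the return to a school year is read off the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data -- NOT the PIHS sample used in the paper
school = rng.integers(0, 17, n)   # completed years of schooling
exper = rng.integers(0, 41, n)    # potential labour-market experience

# Mincerian earning function: ln(wage) = b0 + b1*S + b2*X + b3*X^2 + e,
# where b1 is the return to an additional school year. The values below
# mimic the column-1 estimates for illustration only.
b0, b1, b2, b3 = 6.12, 0.072, 0.058, -0.001
lnw = b0 + b1 * school + b2 * exper + b3 * exper**2 + rng.normal(0, 0.5, n)

# OLS: the coefficient on schooling approximates the proportional
# earnings gain from one extra school year
X = np.column_stack([np.ones(n), school, exper, exper**2])
coef, *_ = np.linalg.lstsq(X, lnw, rcond=None)
print(f"estimated return to a school year: {coef[1]:.3f}")
```

With enough observations the estimate lands close to the true 0.072, i.e. roughly a 7 percent earnings gain per school year in this simulated setting.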

Job Analysis vs. Job Evaluation Essay

Describe the differences between job analysis and job evaluation and how these practices help establish internally consistent job structures.

Job analysis is the organized gathering, documenting, and analyzing of information to describe a job. A job analysis describes the job duties, worker requirements, working conditions, etc. Job evaluation is the recognition of differences within a set of jobs, establishing pay rates according to the job. A job analysis provides information about what duties the job consists of and what is required to perform the job, which in turn allows the manager to know what types of people to hire for the positions. The job analysis results aid in establishing compensation for the various positions based on the differences in job content and worker requirements. Job content refers to the actual job duties as well as the tasks that employees must perform on the job. Worker requirements are the minimum qualifications and skills that people must have in order to perform the job in question. Companies use this to develop pay grades and salary ranges to determine how much each position is worth.

Describe the challenges in developing compensation that is both internally consistent and market competitive.

Internally consistent compensation systems help companies develop relative pay scales. A relative pay scale means that jobs within the company pay different rates in comparison to other jobs within the same company. These internally consistent compensation systems are developed based on simple principles and fundamentals. Jobs that require a person to have a higher level of education, experience, or a specific skill will be assigned higher pay than a job requiring less. Other factors that affect the relative pay of a job within the company include the complexity of the job as well as the level of responsibility that comes with it. 
This is very useful for a company, but it may become necessary for employees to take on the duties of other positions, or even the duties of a newly created position, in order for the company to remain competitive within the market. This could be caused by several different things. The company may downsize in the future, making employees take on more tasks. Or responsibilities can be added before the company becomes fully staffed or adds staff. This would increase the employee's responsibilities or skills without increasing pay. One way to plan for this would be for the company to have the ability to give additional pay for additional responsibilities, as set out in a defined policy, allowing the company to grow with market changes while still being able to fairly pay employees for the work they do. This would give the company a market competitive compensation policy, which means that the pay scale for jobs will attract and retain the most skilled and knowledgeable workers. A drawback to a market competitive compensation policy is that it does not help keep costs low. An example would be the company paying too much for a specific job relative to what it can afford, which can limit the company from doing other important things like training and development.

Discuss whether it is fair to give one employee a smaller percentage merit increase because his pay falls within the 3rd quartile but give a larger percentage merit increase to the other because his pay falls within the 1st quartile, and explain why.

I do not believe it is fair to give one employee a smaller percentage merit increase because his pay falls within the 3rd quartile while giving a larger percentage merit increase to the other because his pay falls within the 1st quartile. I think both should be evaluated on the work they are doing and their contributions to their team, no matter what quartile they are in. 
Employees are rated by their management on job-specific objectives as well as performance ratings over a period of time in order to determine whether an employee is due to receive a merit increase and the amount of the increase. This typically happens after management does a performance appraisal of their employees' work. If it is found that both employees do the exact same work, have the same skill sets, and have the same statistics on job performance, then both should be given the same percentage merit increase.

Discuss the basic concept of insurance and how this concept applies to health care.

The basic concept of insurance is that it covers the costs of a defined group of services, providing employees with coverage for the care they need. This is to provide employees with the ability to take care of their physical and mental health. This includes, but is not limited to, covering physical examinations, diagnostic testing, surgery, hospitalization, dental care, vision coverage, as well as prescription drug coverage. Health insurance can be purchased by an individual directly through an insurance carrier, or it can be purchased through payroll deduction with an employer. The costs can be much more expensive if purchased directly from the carrier, deductibles may be higher, and the benefits may not cover as much as group health coverage through an employer. Group health coverage through the employer is negotiated for a larger group of people. The company pays a portion of the benefits, allowing its employees to pay a lesser cost. In a fee-for-service plan there are deductibles, which means that over a period of time an employee will have to pay for services needed before insurance benefits start to pay for services received.

Describe the changes in the business environment and society that might affect the relevance or perhaps the viability of any of these benefits. 
Companies faced with the rising costs of benefits and health care may cut employment in order to reduce benefits costs. This will make unemployment rise. Unemployment insurance payments are there to provide temporary financial assistance to unemployed workers who meet their specific state requirements. Eligibility for unemployment insurance, benefit amounts, and the length of time benefits are paid are determined by the state law under which unemployment insurance claims are awarded. The problem with unemployment benefits is that, due to a decline in revenue, there are budget deficits. Other factors affecting business and/or society that might affect the relevance or viability of benefits are things like companies closing, offshoring work, as well as layoffs. Anything that causes people to lose their jobs affects their ability to pay for coverage and out-of-pocket expenses, or causes them to lose the coverage altogether. Without employer group coverage for health insurance, employees may not be able to afford to pay for medical services.

References

Dessler, G. (2011). Human resource management: 2010 custom edition (12th ed.). Upper Saddle River, NJ: Prentice Hall.
Martocchio, J. J. (2011). Strategic compensation: A human resource management approach: 2011 custom edition (6th ed.). Upper Saddle River, NJ: Prentice Hall.
What is health insurance? Retrieved May 22, 2012, from http://www.investorwords.com/2289/health_insurance.html

Tuesday, July 30, 2019

Analysis of the Images of Mind in Society Essay

In our society, there are different images, icons and symbols of the mind, and one set of these are those portrayed by nerds and geeks. These types of symbols are popularly seen in movies and television shows. By definition, a nerd is "a person who is single-minded or accomplished in scientific or technical pursuits but is felt to be socially inept" (Nerd 2009). According to this article, the term bears a derogatory connotation or stereotype. In television shows and movies, the nerds are the ones who are often ostracized by the more popular crowd. In this image (http://www.dougweb.org/images/blog/Nerd_of_the_Year_2001.jpg), nerds are shown as having big eyeglasses and being socially awkward. It can be seen in the picture that they seem to be a laughing stock since they "do not get laid." I believe that people of intelligence are portrayed like this because only a small percentage of the population is highly intelligent, or only a small percentage of people are willing to sacrifice their social image to pursue an endeavor (especially an academic one). Because they are small in number, what they are doing is not popular with others. By doing these things, intelligent people do not get asked to proms and other social activities. This can also be seen in the show The Big Bang Theory, where the main characters are intellectuals (theoretical physicists). It is shown in this series that they are socially awkward and do not know how to deal with girls (Picture: http://editorial.sidereel.com/Images/Pages/big_bang_theory.jpg). Other images/symbols of the mind portrayed in society are the Ivy League schools and the professors there. This shows that highly intelligent people need an exceptional environment where they can hone their skills. Also, these kinds of schools have high standards because not all students have the intellectual capacity to persevere in them. 
These schools are needed to produce exceptional work and research, and brilliant minds that could be helpful in improving society.

List of References

"Nerd" TheFreeDictionary.com. Available from [22 July 2009]

Monday, July 29, 2019

Management in context Essay Example | Topics and Well Written Essays - 1250 words - 1

These damaging theories have made students believe that managers cannot be trusted. I also noticed that the theories suggest that strict supervision and control of employees is the optimal manner of operating a business. The article showed that academic research associated with business and management conduct influences management negatively in that students relinquish their moral responsibility by learning its theories (Ghoshal, 2005).

Surprises in JA2

This article demonstrated that the theories taught in universities and business schools are to blame for managers' poor performance. This is because the managers underutilise the available resources when they follow incorrect channels that lead to bad decisions as a result of having inadequate knowledge. Possession of relevant knowledge was emphasized in this article, where Donaldson implies that managers do not make bad decisions intentionally; it is because of inadequate knowledge that these errors arise. "These errors are not intended by the managers, and are due to deficiencies in their knowledge" (Donaldson, 2002:97). A new thought was introduced when Donaldson wrote that social theories taught in business schools have contradicted the assumptions made in management education. "There is contradiction between the views expressed by some major contemporary social science theories taught in management schools and the assumptions on which management education is founded" (Donaldson, 2002:97). The way Donaldson proved the incompatibility of the economics and finance, strategy theory, agency theory, institutional theory, and judgmental bias theory was so convincing that I came to completely agree with the article.

The evidence

In the article, Ghoshal argued that the negative management and conduct of business have been influenced by academic research learnt in business school by students who later become managers. 
I observed that the argument on assumptions and ideas that Ghoshal was making was indeed true. "Our theories and ideas have done much to strengthen the management practices that we are all now so loudly condemning" through the "adoption of a particular theory and more at the incorporation" of "ideologically inspired amoral theories" that are taught in business school (Ghoshal, 2005:76-76). The ideology of pessimism, also known as liberalism, brought a gloomy vision into management whereby the owners of a business do not trust the managers, as is evident in many companies across the globe. Looking into Donaldson's article and how he had argued, the five theories he had highlighted really contradicted optimal management of business and what students learnt in business schools. In economics and finance, when information is made public it cannot help one firm, as all the other firms will have it and use it to their advantage. "Thus research-based knowledge, once public, confers no economic advantage in (even semi strong) efficient markets. Only knowledge that is kept private can confer an advantage to the investor" (Donaldson, 2002:96). This shows that the research done in business school, once made public, cannot give students an upper hand. In the theory of strategy, when a firm has unique resources, it cannot disclose them to the managers, as they can reveal them to rival firms. This in turn results in resources being underutilised; therefore, failing to realize the full potential of the

Sunday, July 28, 2019

Health Care Law Changes Reimbursement Systems Research Paper

This study evaluates the benefits and disadvantages of these proposed reforms. On the one hand, the reforms could improve quality of service by providing incentives for hospitals and increasing competition among them, but on the other hand, ordinary citizens could also be affected because many expenses that were earlier applied against FSA and HSA accounts may no longer be possible.

Medical reimbursement in the United States

Introduction: The costs of health care in the United States are prohibitive, and only a few people in the country can afford to avail of health care without some form of insurance. Private health insurance plans are available in the country, and most employees have access to some form of health insurance through group insurance plans offered by their employers. Many people in the United States, however, fall under the category of Medicare or Medicaid insurance plans to cover their health care costs. Medicaid is available to individuals who are from poorer socioeconomic backgrounds and have no insurance at all. Medicare is the public health insurance program formulated to provide for the health care of the elderly and the disabled. It covers individuals who are aged 65 or over, or under 65 but with certain disabilities, and those of any age with permanent kidney failure (www.medicare.gov). In the year 2003, Medicare expenses cost the U.S. Government a sum of $271 billion, representing 13% of the federal budget (Frankes and Evans, 2006). The program comprises two parts: Part A, which covers hospitalization and nursing facilities, and Part B, which covers physician and outpatient services, laboratory charges and medical equipment. 
Since costs for the Medicare program were turning out to be prohibitive, changes were introduced to the reimbursement policies in 2008 in an effort to reduce some of the expenditures and thereby trim the federal government budget on health care. The sweeping changes proposed reduced payments for complex medical treatment procedures by 20 to 30%. Some of the major changes which were introduced and came into legal existence in 2008 were as follows (www.seniorjournal.com): (a) reducing reimbursement for procedures such as angioplasties and the implanting of drug-coated stents by 33%; (b) reducing reimbursement for implanting defibrillators by 23%; (c) reducing reimbursements for hip and knee replacements by 10%. Reimbursement for other diseases was also cut down; hospitals and health care professionals are fully reimbursed only if their patients are suffering from one of 13 diseases which have been listed. The Medicare reimbursement policies for Inpatient Rehabilitation Facilities were revised further in 2009, validated legally from 2010. Patients are classified into different categories based upon their clinical symptoms, and payments for clinical conditions that are secondary to the major one are no longer reimbursed (Ingenix, 2009). Cost outlier payments have also been readjusted to 3% of total estimated payments for inpatient rehabilitation facilities. Coverage criteria were further revised for inpatient rehabilitation facilities, with several pre-conditions being imposed, such as mandating therapy treatments to begin within 36 hours of the midnight of the day the patient was admitted (Ingenix, 2009). It may be noted that the changes which had

Saturday, July 27, 2019

Climate Change Essay Example | Topics and Well Written Essays - 250 words - 1

On the other hand, the impact of climate change is not always uniform globally due to differences in exposure and adaptive capacities. The effects of climate change can become worse if other issues such as poverty, ageing populations and pollution are combined. The effect of climate change on developing and poor countries is huge. This could also extend to advanced economies like the U.S. because they have connections with the developing countries. Developed countries have economic connections such as trade, investments, migration, travel, and tourism with the developing world. The effects of climate change on New York City could be felt soon if measures are not taken to curb the changing climate. According to Lallanilla (2013), the city could soon witness huge rainstorms, floods and heat waves. This could have huge impacts on New York's population, and more on vulnerable persons such as children, the elderly and disabled people. The results of climate change have previously been felt in New York. Hurricane Sandy caused serious destruction in October 2012; the transport system was halted because of the storm. The recent march in New York shows that the population in New York and around the world is feeling the effects of climate change. This is evidenced by the huge number of demonstrators who turned up in New York to urge world leaders to find measures for curbing climate change. A solution to climate change can only be reached by identifying the cause. For example, research reveals that the emission of greenhouse gases is the cause of climate change. Emissions result from the burning of fossil fuels and coal. The solution is to adopt measures such as the use of renewable energy like wind power and solar

Friday, July 26, 2019

Comparison between Boeing 737-800 and Embraer ERJ-170LR Research Paper

Presently, just the -700, -800, as well as -900ER are assembled, as neither the -600 nor the -900 was well-liked. Its main competition is the Airbus A320 family. The Embraer E-Jet family, on the other hand, is a series of narrow-range and medium-range twin-engine jet airliners manufactured by the Brazilian airline corporation Embraer (Endres, 2009). Initially introduced at the Paris Air Exhibition in 1999, and going into production in 2002, the airplane series has been a business success (Norris & Wagner, 2011). The aircraft is utilized both by regional and mainline airlines all over the globe. As of December 31st, 2012, there was an accumulation of 185 firm orders for the E-Jets, with 908 units delivered and 580 options. On September 13th, 2013, a celebration was held at the Embraer plant in São José dos Campos to celebrate the release of the 1,000th E-Jet family airplane. The E-175 was released in American Eagle Airlines colors with a unique "1,000th E-Jet" label over the cabin windows (Endres, 2009). This paper will compare the Boeing 737-800 and Embraer ERJ-170LR of Boeing Commercial Airplanes and Embraer, respectively. The 737-800 is an expanded edition of the 737-700, and substitutes for the 737-400. It also sealed the gap left by the choice to cease the MD-80 and MD-90 (McDonnell Douglas) after Boeing's unification with MD. The 737-800 was first introduced by Hapag-Lloyd Flug (at the moment TUI fly). The model also serves the same market segment.

Thursday, July 25, 2019

Edgewires Automated Phone System Case Study Example | Topics and Well Written Essays - 2500 words

They serve a niche market of vacationers who desire adventure in their holidays. Thus, call centre operators became immersed in and learned how to provide adventure seekers with satisfying holiday experiences. Not only are such holidays off the beaten path, but they typically require special equipment. Edgewire's operators coordinate with suppliers to fulfil all their customers' needs. Satisfaction of thrill-seeking customers is obviously healthy, or else Edgewire couldn't have grown into a robust business if its operators delivered inferior adventures. Today's situation is that goals #1 and #2 have evolved into apparent conflict. In the interest of long-term stability, Edgewire's management firmly believe they must install an automated phone answering system. Doing so means replacing call centre operators. Management has calculated the savings from cutting back on labour costs. However, this goal of presumed long-term stability comes at the cost of jobs; yet providing jobs was a goal of the EU regeneration grant. Quite naturally, the call centre operators are distressed and disagree. This Social Impact Statement (SIS) examines the issue in depth. The goal is to find options that perhaps can be smartly used to satisfy all parties that are affected by Edgewire's conversion to an automated phone answering system. ... This SIS comes somewhat late. Edgewire management's decision is almost a fait accompli. They are reluctant to discuss the inherent conflict of interest or negotiate the core issues. They stalled before acquiescing to the independent evaluation requested by the call centre operators' Trade Union.

1.1 The New System and Its High-Level Benefit - Reduce Costs 2,310,000 Yearly

In addition to cost savings, two features of the proposed automated phone answering system are quite attractive. It will be a "decision support system" (DSS) as well as a "knowledge based system" (KBS). 
DSS tells call centre personnel the specific decisions each customer has already made about an adventure holiday and left on the automated phone system. KBS is a standardisation of holiday packages for customers who don't need special customising. Management believes these new ways to do business streamline operations and save labour costs, thus better ensuring Edgewire's long-term financial wellbeing and economic stability. Here are supporting data.

Staffing                 Current Staff  Current Cost  Proposed New Staffing Level  Cost with New System  Savings
Call Centre operators         200        4 million              50                    1.25 million       2.75 million
System Update Officers        -0-          -0-                  20                    0.5 million       -0.5 million
Line Managers                  10        300,000                10                    300,000            -0-
Drop-in centre                  3         60,000               -0-                     -0-               60,000
Totals                        213        4,360,000              80                    2,050,000          2,310,000

1.2 Stakeholders

Stakeholders have not been apprised or consulted. Until now only Edgewire's management (a primary stakeholder) has had input on decision making about the automated phone answering system. Nobody knows with certainty how Edgewire's customer base (another primary stakeholder) will react. Management lacks 100% unity. Dissension exists. Anxieties are growing
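The staffing figures above reconcile exactly with the headline saving of 2,310,000 yearly. A quick arithmetic check (figures taken from the table, in the same currency units):

```python
# Cost figures from the staffing table (currency units as in the SIS)
current = {"call_centre_operators": 4_000_000, "system_update_officers": 0,
           "line_managers": 300_000, "drop_in_centre": 60_000}
proposed = {"call_centre_operators": 1_250_000, "system_update_officers": 500_000,
            "line_managers": 300_000, "drop_in_centre": 0}

total_current = sum(current.values())    # 4,360,000
total_proposed = sum(proposed.values())  # 2,050,000
saving = total_current - total_proposed

print(saving)  # 2310000 -- the "2,310,000 yearly" headline saving
```

Note that the new System Update Officer posts offset part of the operator savings, which is why the net figure is smaller than the gross cut to the call centre budget.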

An overview of Under Armor and how they are challenging nike in sports Essay

The company's wide range of products is used by many reputed consumers and other people around the world. Apart from all this, there also exist numerous competitors of Under Armour in the world. Among them, Nike is one of the major competitors of the company, as both of them operate their business in the world of sports brands (Thomas Reuters, 2014). To analyse the competition between these two huge brands, this study will consist of an overview of Under Armour and the ways it competes with its competitors. A complete study of the background of the company, along with its vision, business culture and expansion strategies, will be taken into due consideration. This information will help in understanding the competitive advantage of the company in the local as well as the global market. Since 1996, Under Armour has been running its business successfully in the field of sports shoes and other required sports accessories. In the present scenario, the company operates its business in major parts of the world such as North America, the Middle East, Latin America, Europe and Asia. The sports brand business came from a simple idea developed by the 23-year-old former special teams captain at the University of Maryland, Kevin Plank. During his playing days, Kevin Plank hated wearing a sweat-soaked cotton T-shirt again and again, especially on hot summer days. In order to get rid of this problem, he set out with the idea of manufacturing a T-shirt that would be suitable for players to wear even on hot days. Plank named the sports brand company Under Armour. For the first time in the world of sports, a new design of T-shirt, named #0037, came into existence. 
This T-shirt was designed in such a manner that it comprised moisture-soaking fibres, which help players keep cool and dry even in hot conditions. At the initial stages of the development of such

Wednesday, July 24, 2019

Health Care Providers in Different Religions Essay

In many cases, however, one does encounter a variety of different faiths when seeking out health care. In Christianity there are seven aspects of providing health care to patients. All of these are tied to religion and the Holy Bible and its sayings. The first is a moral code of conduct and justification that guides health care providers to tend to patients in a manner that is in alignment with religion, such as the fact that they cannot present life-threatening drugs or force people to take a drug that may cause death. At the same time, women cannot be advised to have abortions, etc. Secondly, the doctor-patient relationship needs to be built on the element of trust; the patient is entitled to receive all the information regarding treatment, health care and any aspect of a procedure that they have to undergo. Even though the provider is the true healer and is believed to be so by the patient as well, presenting all sides of the story is a duty of the doctor to the patient. Thirdly, patients are autonomous; that is, they are allowed to make any decision they want to after being presented all the facts of the situation. At the same time, health care providers have to act like good citizens, taking it on their conscience to care for the patient and heal them, and not just go through this procedure in a mechanical fashion, but to be caring, loving and compassionate towards the patient. God will only show mercy to him who showed mercy on his creation, and so providers need to offer not just treatment but compassion and mercy to patients, praying for their well-being and taking it as a duty as God's servants to take care of his creation. 
At the same time there is the concept of non-maleficence in Christianity, which essentially translates into "never harm anyone." According to this principle, health care providers need to give the right kind of prescription to the right patient, and they need to assess whether a particular treatment would harm or benefit the patient in question. Christianity also believes in the principle of justice. Therefore patients are all to be treated equally and fairly, and treated to the best of the provider's ability, as that is their right not just as patients of the doctor but also as human beings. The provider is also to give patients access to health care if he himself cannot suffice. Lastly, it is believed that one, no matter what profession he follows, must have a character of integrity and virtue, so that no matter who he is dealing with, he will uphold his virtuous character and refer to the Bible, never wandering from what it deems to be right and wrong (Benedict M. Ashley, 2006). It is the general view that all the principles aforementioned should be followed, whether the person seeking treatment is a Christian or otherwise. Doctors need to be compassionate and caring, and to try their best to bring proper and complete treatment to their patients, while at the same time upholding the ethics of their profession; that is, anything that can cause harm to their patients is supposed to be out of the question. However, some cultures and religions have different aspects or additions to the aforementioned principles. Buddhism originated through the concept of suffering, the state of the soul being in trouble rather than the body being in any agony. Buddhists believe in both technique and discipline, and those principles, along with the eightfold path, determine all other aspects of life, even health care. The eightfold path includes right speech, right view, right

Tuesday, July 23, 2019

Strategic Human Resource Mangement Essay Example | Topics and Well Written Essays - 1000 words

s as well as in the implementation of those strategies through HR activities such as recruiting, selecting, training and rewarding personnel" (Lii, 2003). SHRM models work to promote learning and competitiveness of the workforce as the basic prerequisite for improved competitiveness and better efficiency in organizations. Cadbury and Craft are two examples of how SHRM works in practical workplace environments. Needless to say, SHRM in these organizations is heavily influenced by national and corporate cultures, and is closely aligned with the social responsibility and ethical dimensions of workplace performance. The history of SHRM at Cadbury dates back to times when there were no unions; yet Cadbury's owners clearly realized the value of HR to their competitiveness and performance. Cadbury considered people inherently valuable to the firm and thus a resource that had to be used effectively (Price 2007). Those were also the views promoted by Craft in its approaches to HR. Obviously, those were the roots of SHRM, which positions effective utilization of human resources as the source of strategic competitive advantage (Bratton & Gold 2001). For both Cadbury and Craft, SHRM stands out as the cyclic combination of several different activities: setting the organization's direction, environmental analysis, strategy formulation, implementation, and evaluation; these altogether exemplify Bratton's model of SHRM, which successfully works in dozens of modern organizations. Moreover, Craft and Cadbury realize that HR are valuable, inimitable, and rare, a view that goes in line with the resource-based view of the firm (Hall 1993). Finally, the success of SHRM in Cadbury and Craft lies in the fact that both organizations were successful in linking their HRM practice to behavioral, performance, and financial outcomes the way they are discussed by Guest (HRM Guide 2005). 
As a result, HR stands out as the core of sustained competitiveness in organizations in the long run.

Monday, July 22, 2019

HR data collection Essay Example for Free

HR data collection enables a company to measure itself, supporting workforce planning, monitoring progress and development, and developing initiatives for generic cases. It identifies and analyses information to aid the organization in making ultimate decisions beneficial both to the organization and its employees. Two reasons are considered closely. Through HR data collection an organisation can: 1) comply with legislative and regulatory requirements regarding equal opportunities, equal pay audits, recruitment, assessing skills balance, and absence recording; 2) monitor training and performance for employees, assessing each individual employee for productivity and identifying training needs. That means assessing the productivity within the business. Being well informed about the workforce is the key to achieving the ultimate goal of the organization. Data collection enables the management team to make informed decisions about future activity. Two types of data collected and the support they provide: 1) Attendance data is useful for monitoring and gauging daily working hours and monitoring absences. That enables HR to manage regular absentees successfully and deal with any issues the employee might have. 2) Organisational records, which include: staff turnover, absenteeism, recruitment documentation, learning and development. The HR department can monitor staff levels, making decisions about further recruitment. It is also essential to collect and update employee records such as home addresses and people to contact in times of emergency. This information is helpful where an employee fails to come to work without notice. Records can be stored: 1) Electronically, through a computerised system. In this way an organization can keep information up to date easily, and any information can be sent and received rapidly. It also reduces company costs; a large amount of data can be stored without taking up much office space, and records can be sorted, found, moved and protected easily. 
2) Manually, in paper format. In this case the risk of corrupted data is lower, and information remains accessible at any time, even during power cuts or electronic system crashes. Moreover, problems with duplicates of the same record are usually avoided. Two items of UK legislation relate to recording and storing HR data: 1) The Data Protection Act 1998. It concerns all personal records, whether held in paper or electronic format. The act contains eight data protection principles specifying that personal data must be: processed fairly and lawfully; obtained for specified and lawful purposes; adequate, relevant and not excessive; accurate and up to date; not kept any longer than necessary; processed in accordance with the "data subject's" (the individual's) rights; securely kept; and not transferred to any country outside the EU without adequate protection in place. 2) The Freedom of Information Act 2000. It allows people to ask any public body for information both on any subject the organisation holds and on themselves. The act thus encourages organisations to be transparent; unless there is a valid reason not to, the organisation must provide the requested information within 20 working days. Through this act, people can access the information they need and ensure it is not exploited or used inappropriately.

Sunday, July 21, 2019

Systems Development Life Cycle

Introduction. The systems development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. An array of SDLC models has been created: fountain, spiral, rapid prototyping, synchronize-and-stabilize, and incremental. Although in the academic sense SDLC can refer to various processes followed during the development of software, the term is typically used for the oldest of the traditional models: the waterfall methodology. Software Engineering Process. The SDLC comprises a list of phases that are essential for developers, such as planning, analysis, design and implementation, explained in more detail later in this report. Traditionally the waterfall model was regarded as the original: it adhered to a sequence of stages in which the output of each stage became the input for the next. No definitive model exists, but the steps can be described and divided as follows. Project planning, feasibility study, initiation: A feasibility study is a quick examination of the problems, goals and expected cost of the system. Projects are usually evaluated in three areas of feasibility: economic, operational and technical. The study is also used as a guide to keep the project on track and to evaluate its progress (Post & Anderson, 2006). The goal of the feasibility study is thus to evaluate alternative system solutions and to propose the most feasible and desirable business application for development. O'Brien & Marakas (2006) state that the feasibility of a proposed business system can be evaluated in four major categories. Organizational feasibility: an illustration of how the proposed system supports the strategic business priorities of the organization.
Economic feasibility: identifies whether expected cost savings, increased revenue, increased profits and reductions in required investment will exceed the cost of developing and operating the proposed system. Technical feasibility: can be demonstrated if reliable hardware and software capable of meeting the needs of the proposed system can be acquired or developed by the business in the required time. Operational feasibility: can be measured by the ability and willingness of management, employees, customers, suppliers and others to operate, use and support the proposed system. For example, if Tesco were to change the software platform at its tills to something entirely different, employees might begin to make too many errors and find ways around using it, or simply quit altogether; the change would thus fail to show operational feasibility. Requirements gathering and systems analysis (Hawryszkiewycz, 2004): this step defines the proposed business solutions and any new or changed business processes. The goal at this stage is to find any problems and attempt to fix the system or improve its productivity and efficiency. The technique is to break the system into smaller pieces, as these are easier to explain to others and can be split up among different development teams. A drawback is that it takes time and effort to reintegrate all of the pieces (Post & Anderson, 2006). Systems design: functions and operations are described in detail during the design stage, including screen layouts, business rules, process diagrams and other documentation. The output of this stage describes the new system as a collection of modules or subsystems.
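The four feasibility categories above are often combined into a single weighted score when comparing alternative proposals. The sketch below assumes invented weights and 0-10 ratings purely for illustration; it is not a method given by O'Brien & Marakas.

```python
# Hypothetical weighted scoring of one proposed system across the four
# feasibility categories. Weights and ratings are invented examples.
weights = {"organizational": 0.20, "economic": 0.35,
           "technical": 0.25, "operational": 0.20}
ratings = {"organizational": 7, "economic": 8,
           "technical": 6, "operational": 5}

# Weighted sum gives an overall feasibility score out of 10.
score = sum(weights[c] * ratings[c] for c in weights)
print(round(score, 2))  # 6.7
```

Competing proposals scored the same way can then be ranked, making the "most feasible and desirable" choice explicit rather than a matter of impression.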
Hawryszkiewycz (2004) states that system design is a two-step process. Broad design identifies the main architecture of the proposed system, which may include the languages used to develop the databases, network configurations, software requirements, and whether programs are to be developed by internal programmers or external contractors. Detailed design: only after the broad design is completed can the detailed design phase begin; during this phase the database and program modules are designed, and detailed user and system interaction procedures and protocols are documented. Build: software developers may install (or modify and then install) purchased software, or they may write new or custom-designed programs (Senn, 1989). Like the design phase, this phase is broken into two sub-phases, development and implementation. During the implementation phase the components built during development are put into operational use. Usually this means the new and old systems run in parallel until users are trained in system operations and existing processes are converted to the new system (Hawryszkiewycz, 2004). Testing: during the integration and test stage, the software artefacts, online help and test data are migrated from the development environment to a separate test environment. At this point, all test cases are run to verify the correctness and completeness of the software. Successful execution of the test suite confirms a robust and complete migration capability. In addition, reference data is finalized for production use, and production users are identified and linked to their appropriate roles. The final reference data (or links to reference data source files) and the production user list are compiled into the Production Initiation Plan, and the system is used experimentally to ensure the software does not fail; the code is also tested iteratively at each level (Senn, 1989).
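The "run all test cases" step of the testing stage can be illustrated with a minimal sketch; `discount_price` and its test cases are hypothetical stand-ins for a business rule migrated into the test environment, not anything from the cited sources.

```python
def discount_price(price: float, rate: float) -> float:
    """Hypothetical business rule under test: apply a discount rate."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# A small test suite: each case pairs the inputs with the expected output.
test_cases = [
    ((100.0, 0.25), 75.0),
    ((80.0, 0.00), 80.0),
    ((50.0, 0.10), 45.0),
]

# Successful execution of every case is what "confirms correctness"
# in the sense the essay describes.
for args, expected in test_cases:
    assert discount_price(*args) == expected, f"failed for {args}"
print("all test cases passed")
```

Running the suite after each migration or code change gives the repeatable, iterative verification at each level that the testing stage calls for.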
Installation, implementation and deployment: implementation is a vital step in the deployment of information technology to support employees, customers and other business stakeholders. The system implementation stage involves hardware and software acquisition, software development, testing of programs and procedures, and conversion of data resources, and additionally involves educating and training the end users and specialists who will operate the new system. Altogether, this is the final stage, after which the system is used by the business (O'Brien & Marakas, 2006). Maintenance: once a system is fully implemented and being used in business operations, the maintenance function begins; it spans the life of the system and may include changes and enhancements before its decommissioning. O'Brien & Marakas (2006) state that the maintenance activity includes a post-implementation review process to ensure that newly implemented systems meet the business objectives established for them. Hawryszkiewycz (2004) supports the argument that maintenance is required to eliminate errors in the system during its working life and to improve the system in the light of change, by monitoring, evaluating and modifying operational business systems to make desirable or necessary improvements. Evaluation and Reason for Adopting the SDLC for a Small PC Application. Adopting the SDLC for the development of a small application on a PC would not be appropriate, because the SDLC is just what it says it is: the life cycle of the system software. The SDLC is a process used to manage time and resources on a project, from the identification of a need for the system (initiation), to rolling it out to the user (implementation), to de-supporting or no longer needing it (disposition). Each phase of the SDLC requires documentation, reporting and approval.
This ensures that a project cannot get out of hand, either by changing direction or by becoming a financial black hole, and that the project sponsors are aware at every step of exactly what is going on, because it is documented. It is therefore reasonable to assume that the development of a small application on a PC does not require the adoption of the SDLC model, whereas a large system, for which teams of architects, analysts, programmers, testers and users must work together to create the millions of lines of custom-written code that drive enterprises today, will without doubt need an SDLC approach to manage the resources of such a project. Evaluation of the Traditional SDLC: Strengths and Limitations. The Waterfall Model. The waterfall model is the classic sequential life cycle; each phase must be completed in its entirety before the next phase can begin. Post & Anderson (2006) state that one advantage of the SDLC is its formality, which makes it easier to train employees and to evaluate the progress of development, as well as ensuring that steps such as user approval, documentation and testing are not skipped. In addition, with eighty percent of MIS resources spent on maintenance, adhering to standards while building the system makes it easier to modify and maintain in the future, because of the documentation generated and the sustained consistency. However, the formality of the SDLC approach can be problematic, as it increases the cost of development and lengthens the development time (Post & Anderson, 2006). The formality of the SDLC method also causes problems with projects that are hard to define, unlike newer methods such as Agile, which helps software development teams respond to the unpredictability of building software through incremental, iterative work cadences known as sprints (Cohn, 2006). Agile methods aim to allow organizations to deliver quickly, change quickly and change often.
While agile techniques vary in practice and emphasis, they share common characteristics, including iterative development and a focus on interaction and communication. Maintaining regularity allows development teams to adapt rapidly to changing requirements, and working in close proximity, with a focus on communication, means teams can make decisions and act on them immediately rather than wait on correspondence. It is also important to reduce non-value-adding intermediate artefacts so that more resources can be devoted to product development and early completion. The SDLC, however, works best if the entire system can be accurately specified at the beginning; that is, users should know what the system should do long before it is created. Post & Anderson (2006) further explain that the rigidity of the SDLC makes the development of more modern applications difficult; hence combinations of existing SDLC models, and alternative models and methodologies, have been adopted, as outlined later in this paper. Advantages: easy to use; easy to manage because of its rigidity; phases are completed at specific intervals; requirements are very well understood. Disadvantages: scope adjustment during the life cycle can kill a project; working software is not produced until the life cycle is complete; not suited to long, ongoing projects; inappropriate where requirements are at moderate to high risk of changing. Alternative Development Methodologies. One management advantage of the traditional SDLC method is its sequential series of tasks; on the other hand, the traditional SDLC has many drawbacks. For example, when a traditional SDLC methodology is adopted, the rigid chain of phases may make it impossible for developers to identify improved ways of providing functional requirements while the project is being built, which results in the designers redoing their work.
Instead, programmers should be involved in the planning and design phases so that they can identify improvements much earlier in the process, thus enhancing the effectiveness of project activities (FFIEC IT Handbook, 2009). Development approaches such as iterative development and rapid prototyping address many of the shortcomings of the traditional SDLC, and brief descriptions of two of the newer methodologies are outlined below, along with some advantages and disadvantages for comparison. Agile Development Model. Agile software development is a conceptual framework for undertaking software engineering projects. Agile methods attempt to minimize risk and maximize productivity by developing software in short iterations and de-emphasizing work on secondary or interim artefacts. The key differences between agile and traditional methodologies are as follows: development is incremental rather than sequential; people and interactions are emphasized; working software is the priority rather than detailed documentation; customer collaboration is used rather than contract negotiation; and responding to change is emphasized rather than extensive planning. Rapid Prototyping Model. Rapid prototyping is a process for creating a realistic model of a product's user interface (Najjar, 1990). Using rapid prototyping, you model the look and feel of the user interface without investing the time and labour required to write actual code (Najjar, 1990). Advantages: saves time and money; promotes consistency in user interface design; allows early customer involvement; reduces the time required to create a product functional specification. Disadvantages: usually does not produce reusable code; lacks an obvious stopping point. Conclusion. It can be seen from the above comparison that differing philosophies can produce radically different views of a system.
Nevertheless, both the traditional SDLC and the alternatives produce valid working systems, and each has its share of drawbacks. The one-size-fits-all approach to applying SDLC methodologies is no longer appropriate; each methodology is effective only under specific conditions. Traditional SDLC methodologies are often regarded as the proper, disciplined approach to the analysis and design of software applications, but the drawback is that they take a considerable amount of time and require all of the system details to be specified up front. Methodologies such as rapid prototyping are instead a compromise between rigidity and flexibility; these hybrid methods were created to bridge the gap as more modern application development requirements evolved. That said, methodologies such as Agile are most appropriate when volatility and uncertainty exist in the development requirements, while the SDLC is good when the requirements are already defined. Bibliography: Najjar, L. J. (1990). Rapid prototyping (TR 52.0020). Atlanta, GA: IBM Corporation. http://www.lawrence-najjar.com/papers/Rapid_prototyping.html. FFIEC IT Handbook (2009). Alternative development methodologies. http://www.ffiec.gov/ffiecinfobase/booklets/d_a/02.html. Senn, J. A. (1989). Analysis and Design of Information Systems, pp. 27-32, Ch. 1. Singapore: McGraw-Hill. Post, G. and Anderson, D. (2006). Management Information Systems, pp. 448-459, Ch. 4. New York: McGraw-Hill. Hawryszkiewycz, I. (1998). Introduction to Systems Analysis and Design, pp. 120-136, Ch. 7. Australia: Prentice Hall. O'Brien, J. A. and Marakas, G. M. (2006). Management Information Systems, pp. 27-32, Ch. 1. Singapore: McGraw-Hill.

Price Forecasts for Oil

Price Forecasts for Oil: "Technology Forecasting and Crisis Analysis". Technology Futures and Business Strategy, 1st Assessment Project. PART ONE. Michel Godet indicated that qualitative parameters are important in accurate forecasting. Using the available information in the international literature, and in between 1000 and 1500 words: 1. Mention the qualitative parameters that may be considered in future energy price scenarios. For this purpose, take the year 2020 and list, with a brief explanation, the parameters you consider should be included. 2. Which of these parameters can you reasonably quantify? (Attempt to identify at least five parameters.) 3. Do you agree with this specific aspect of Godet's proposition? Why or why not? 4. Evaluate the impact of a crisis on the accuracy of technology forecasting. Identify the parameters characterizing the crisis aspects. Accordingly, present your opinion about the validity of the forecasts. 5. Using the installed nuclear power data between 1967 and 1987, estimate (using extrapolation techniques) the expected nuclear power time evolution between 1987 and 2007. Comment on the accuracy of your forecasts in relation to the real data. Can you identify any lead time between the major accident at Chernobyl and the reaction of the international electrical power market? PART TWO. The OPEC oil price rise in 1973 had an important effect on energy use and energy efficiency, although much of the impact was short-lived. In 2003-4 the oil price effectively doubled, reaching $50/barrel for a period, and lately it has reached over $90/barrel. A major player now is Gazprom in Russia: news has broken that Gazprom will cut supplies of natural gas to Europe unless it is allowed to raise prices by 200% for export customers (customers in Russia historically pay much lower prices). Using the available information in the international literature, and in between 2000 and 2500 words: 6.
Describe your measured response to this, as either an energy Supplier or major energy user. 7. Would you say that your response was based upon â€Å"out of the box† solutions, or a more conservative, incremental approach? 8. Discuss the relative merits and limitations of each of these possible responses, identifying what you believe the two approaches mean. 9. How this crisis shall influence the future of European economies? How could these effects been mitigated? Make your own forecasts. Your answers to 6-8 above are based upon assumed positions within organisations which may employ many people. The next part of this question relates to the impact rising energy prices and, perhaps more importantly, the effect of climate change, may have on your own style of living. 10. At a personal/domestic level, can you foresee a situation in which we may consider that for the benefit of all, we may need to make do with less, in terms of capital goods, travel, and perceived acceptable levels of comfort? Technology Futures Business Strategy 1st Assessment Project PART A-Introduction Based on the Prospective approach and the scenarios method (Godet, 1982), Michel Godet noted the limitations of the classical forecasting concerned with quantification and models (see also Appendix, Table Ap.I). According to Godet, models that only consider quantified parameters do not take into account the development of new relationships and the possible changes in trends. The impossibility of forecasting the future as a function solely of past data is directly related to the omission of qualitative and non-quantifiable parameters such as the wishes and behaviour of relevant actors (Godet, 1982). Furthermore, to structure future scenarios, the variables related to the phenomenon under investigation and the variables configuring its environment should be recognized and analyzed in detail. 
Besides, the interrelationship among variables, the relative power of the fundamental actors, their strategies and available resources, as well as the objectives and constraints that must be overcome, should also be taken into account. By treating energy as a commodity under the view of conventional economic theories, markets and price mechanisms are used to allocate the respective resources. More specifically, it is the interaction of demand and supply in the markets that allocates resources and largely shapes prices, and it is within the broader ecosystem boundaries that these market interactions take place. Energy pricing, with energy being perceived either as an input or as a potentially polluting source in our ecosystem, clearly stands upon both sub-disciplines of resource and environmental economics (Sweeney, 2004), also depending on the social, political and technological status of the present and of the time to come until 2020. In this context, one may acknowledge a bundle of parameters that may be considered for configuring the respective future energy price scenarios. What is important to note is that, in line with the beliefs of Godet, the parameters involved should be studied in terms of their interrelationship, while qualitative and non-quantified parameters should be taken into account as well.

Question 1

As already mentioned, the configuration of prices within a market - the energy market currently discussed - is largely dependent on the supply and demand balance. This is measured by the respective supply and demand tension, which expresses the status of a commodity in market terms and provides indications concerning potential energy price changes. While high tensions imply price imbalance, the opposite is valid for low tension rates. Hence, in order to evaluate future energy prices on the basis of parameters, one should identify the parameters that influence the supply-demand balance in each of the fields previously acknowledged (i.e.
social, political, environmental, economic and technological). In this context, the most influential of the parameters configuring energy prices may be encountered in Figure 1.1. Energy markets are largely influenced by economic growth factors expressed on the basis of Gross Domestic Product (GDP), inflation, interest and unemployment rates. Given economic growth along with the parameter of demographics (regarding both population increase and migration), one may picture the corresponding trend in energy consumption (i.e. the demand side). Next, policy decisions concerning the determination of the fuel mix are determinative as far as energy pricing is concerned. For instance, if fossil fuels continue to dominate, this will stimulate stricter pollution prevention legislation (e.g. taxation) and policies for tackling climate change and global warming that will raise energy prices. In parallel, the reinforcement of the respective market holders, potentially leading to strong monopolies, should also be expected. Turning to renewable energy sources may, on the one hand, imply an adjustment period - for some of the technologies - in order for the corresponding markets to balance, and on the other entail significant environmental benefits, in monetary terms as well. The fact that global warming and climate change effects are already evident supports the implementation of mitigation measures towards the reduction of greenhouse gas (GHG) emissions, and this holds a key role in respect of the future. Reserves hold a key role in the future configuration of energy prices not only in terms of scarcity, but also in terms of production costs, which are directly related to technological development concerning the exploitation of new deposits and the promotion of substitutes.
As already implied, the power of existing markets is another key factor, while the efficiency and absorption of energy investments - the investment shares and outcomes of research and development efforts should be underlined - must also be taken into account. The factors concerned with quality of life suggest an additional parameter that may affect energy consumption patterns, and one that cannot be easily captured despite the indices recommended so far (Allen, 1991). Moreover, as properly put in the Annual Energy Outlook of 2007 (EIA, 2007a), energy market projections are subject to much uncertainty (unanticipated events). Many of the events that shape energy markets, and therefore the price of energy as well, cannot be foreseen. These include unexpected weather events and natural disasters (Rezek and Blair, 2008), major innovations and technological breakthroughs (Marbán and Valdés-Solís, 2007; Varandas, 2008), disruptions and whirls at the political level (Stern, 2006) with analogous societal consequences, the outbreak of a war (Tahmassebi, 1986; Fernandez, 2008) or a nuclear accident, all of them either smouldering or implying blind spots that cannot be directly projected and consequently quantified. Besides, another area of uncertainty is concerned with the fact that even the steady evolution of established trends cannot be guaranteed. Summarizing, a brief explanation was given above on how each of the parameters acknowledged may influence energy pricing. Additionally, an effort was also made to give a short description of the interrelationship among parameters, supporting one of Godet's arguments. Insisting on the interrelationship of variables, several of the parameters previously encountered should be diffused to every major regional energy market, the latter being largely influenced by the relationship between fuel types and energy sectors (see also Figure 1.2).
Eventually, one may arrive at a rather complex system that encounters the evolution of influential parameters inside the balance between energy types and energy sectors, revealing the crucial role of the energy fuel mix previously discussed. In what follows, an effort is made to reasonably quantify some of the parameters acknowledged.

Question 2

Given the bundle of parameters that are thought to influence future energy pricing, a certain number of them can be quantified. For instance, the parameters of population, economic growth, energy consumption, greenhouse gas emissions, energy reserves, and energy fuel mix can be expressed in numerical terms. Demographic growth examines how regional and global demography changes over time. According to the United Nations projections (UN, 2006), world population will increase by over 1 billion people in the years to come until 2020, suggesting an annual increase rate of 1.1%. While in some areas a negative population growth is to be considered (e.g. European countries), the opposite may be encountered for some of the Asian countries (e.g. India) where overpopulation is met (see for example Figure 2.1 with EIA forecasts). Besides, the migration of people comprises an additional factor influencing energy patterns, via the unequal population distribution already brought about by birth and mortality rates. Based on the energy consumption trends (Figure 2.2), it is expected that demand for all energy products will increase in the years to come, even to such levels that supply may not be able to respond adequately (Asif and Muneer, 2007). In fact, the annual world energy consumption growth is approximately 2%, with projections supporting future average rates of 1.1% per annum (EIA, 2007b).
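The compound-growth arithmetic behind such projections can be sketched briefly. The 1.1% annual rate is the UN/EIA figure cited above; the 2007 base population value is an illustrative placeholder, not a quoted statistic.

```python
def project(base, annual_rate, years):
    """Compound a base value forward at a fixed annual growth rate."""
    return base * (1 + annual_rate) ** years

# 1.1% per annum as cited above; the 2007 base value is illustrative only.
world_pop_2007 = 6.6e9
pop_2020 = project(world_pop_2007, 0.011, 2020 - 2007)
print(round(pop_2020 / 1e9, 2))  # roughly 7.6 billion, i.e. over 1 billion more
```

The same function applies to the 2% energy consumption growth figure; the point is simply that a fixed-rate projection reproduces the "over 1 billion more people by 2020" order of magnitude quoted from the UN.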
In fact, by considering the two parameters examined so far, one may arrive at the most substantial energy-per-capita index, clearly establishing the differentiation in energy consumption patterns among world regions (see also question 10). Furthermore, according to the WEO claims (WEC, 2007) that energy generated from fossil fuels will remain the main energy source (expected to cover almost 83% of global energy demand in 2030), and given the 2020 time horizon, much depends on the appearing constraints of world energy reserves, especially those regarding oil and natural gas. While certain studies sound reassuring (WCI, 2007), others questioning the extent of increase in production outputs ring the alarm of forthcoming peaks within the next one or two decades (Bentley, 2002). If the latter is valid, the corresponding demand will not be met, prices will rise, inflation and international tension will become very likely to occur, and inevitably energy users will have to ration (Wirl, 2008). Overall, what the combination of energy mix with energy reserves provides is a measure of security of supply, the latter configuring the supply and demand tensions, largely shaping energy prices. Besides, targets set in respect of the further penetration of renewable energy sources also provide a quantification view; e.g. the EWEA target for 22% coverage of European electricity consumption by 2030 (EWEA, 2006). Next, expressing economic growth on the basis of gross domestic product (GDP) suggests a constant increase of the former within the range of an average 3% to 4% per year (IMF, 2004), as noted during the period from 1970 to 2003. Again, the inequity to be considered among different world regions is directly related to the previous parameters, illustrating the variation in energy requirements. A characteristic example is China, demonstrating an average annual percent change of GDP 2.4% greater than the world average.
In Figure 2.3, the respective trends of GDP growth up to the year 2020 may be obtained. Finally, the environmental impact of energy use, expressed on the basis of GHG emissions, not only considers the fuel mix and energy consumption but also takes into account the technology used for energy generation. Taking CO2, an increase of 17 Gt over a 34-year period, i.e. from 1970 to 2004 (IPCC, 2007), indicates the strong increasing trend, also presented in Figure 2.4. Given also some of the commitments adopted in order to mitigate the greenhouse effect (e.g. the Kyoto protocol), further quantification, not relying solely on past trends, is possible. The stimulation of additional mitigation measures until 2020 is rather likely, imposing both the need for shifting to non-fossil fuels and for developing cleaner energy generation technologies. Considering the various parameter trends illustrated above, one may sense that the tensions between supply and demand, comprising the main driver for energy prices, are going to rise. Energy consumption, GDP and population rates on the one hand demonstrate the demand side, while declining reserves and mitigation measures describe the opposite supply side. In between, the decisions on future energy fuel mix patterns, although able to completely reverse the energy markets' status quo, are not thought likely to vary radically within the next 10 to 15 years. Hence, unless some major changes occur, the rising tensions between supply and demand imply both instability and an increase of prices on a global level, with strong differentiation to be encountered among different world regions. As far as the degree of energy price variation is concerned, the implementation of forecasting may both incorporate all of the pre-mentioned parameters and provide various scenarios considering each one's expected future time evolution.

Question 3

As previously seen, several parameters were acknowledged in order to form future energy price scenarios.
While some of them were possible to quantify, others, although not quantified, were equally important inputs to keep in mind. Apart from the given inaccuracy of data (whether high- or low-level) coupled with unstable models and the pertinacity of explaining the future in terms of the past, Godet emphasizes the lack of a global and qualitative approach to forecasting (Godet, 1982). Although quantitative methods may prove reliable enough and reasonably accurate for short-term forecasts, the same is not valid for forecasts concerned with longer periods. The greater the distance from the reference point, the more obvious is the inability of quantitative data to provide valid forecasts (see also Figure 3.1). In this context, it is critical to comment on the relativity of time scales noted among the study of various phenomena. Hence, what may seem short-term for one phenomenon studied may actually comprise a long-period forecast for another that appears to be rapidly changing over time. In any case, the chances of significant changes in the environment in which the phenomenon under study develops are considerably higher as the time horizon becomes longer, and it would be more or less naïve to depend solely on forecasting methods like the extrapolation of trends. Furthermore, the complexity of the phenomena studied and the interdependence among the influencing parameters call for the inclusion of both quantitative and qualitative parameters, with Godet clearly addressing the complementarity between the prospective and classical forecasting (Godet, 1982). It was in fact during the first section of this part that the analysis of energy pricing configuration revealed the importance of interaction between quantitative and qualitative parameters. Energy price could not be disengaged from the parallel evolvement of parameters that, even though not easily quantified, do structure the phenomenon's environment (e.g.
political, technological, economic, social, legal and other aspects). What must be outlined here is that, similar to the scaling of decision making (strategic - long term, innovative - medium term, operational - short term), the role of quantitative data gradually fades out as we tend to conceptualize the entire phenomenon environment. Hence, the broader the view, again the more obvious is the inability of quantitative data to support reliable forecasting (see also Figure 2.1). Although extreme in its point of view, Godet's proposition perfectly fits the ability of diagnosing forthcoming crises. A crisis is already extremely difficult to predict; omitting parameters such as the wishes of relevant actors and other influential factors that cannot be quantified makes it impossible even to sense one. It is in this context that one should not disregard the importance of other forecasting resources - apart from data - including assumptions, insight and judgment, all of them involving the subjectivity factor. If one manages to get over the reef of the NIH syndrome, creativity and broad-minded thinking are also essential elements of good forecasting.

Question 4

1973 may be regarded as the most pivotal year in energy history. The energy crisis defining the period began on October 17, 1973, when the Arab members of OPEC, along with Egypt and Syria (all together comprising OAPEC), decided to place an embargo on shipments of crude oil to nations that had supported Israel in its conflict with Syria and Egypt, mainly targeting the United States and the Netherlands. This decision also brought about major oil price increases. Because OPEC was the dominant oil distributor at the time, the price increase implied serious impacts on the national economies of the targeted countries, therefore suggesting a crisis of international range.
Although the embargo was lifted in March 1974, the effects of the energy crisis, mainly in terms of price increases, lingered on throughout the 1970s, with the Iranian crisis aggravating the situation (see also Figure 4.1). For a crisis that was mainly expressed on the basis of high energy pricing, the outcome of the previous questions concerned with the determination of the parameters influencing energy prices may be illustrated. In fact, the impact of a more or less unanticipated event changed the correlation patterns between supply and demand and imposed high tensions on the market balance, the latter entailing the high volatility of the oil price and its potential outbursts ever since (Regnier, 2007). The market structures, the dominance of OPEC and the political tension all suggest aspects of the crisis illustrating the importance of considering qualitative parameters as well. As Godet well pointed out, one cannot neglect the wishes and decisions of major actors when configuring the future (e.g. OPEC members). Similar to the 1973 oil crisis, the California energy crisis occurring some 27 years later also revealed the strength of key actors in completely changing what was meant to follow a past trend or ameliorate a past situation. The deregulation of the electricity market in California (during 1998), aiming to decrease retail prices that were among the highest in the United States, turned into a complete fiasco that abetted the manipulation of the market by the energy companies. The crisis's main characteristics involved very high wholesale prices, interrupted service to customers (rolling blackouts), bankrupt utilities and huge state expenditures, while its main causes were: The lack of new generating capacity inside California (California was heavily dependent on energy imports from nearby states (CEC, 2007a)).
The coincidence of a dry year and natural gas price spikes with other market-oriented factors (California relied largely on hydro and natural gas for consumers' electricity supply). The market structure allowing generators to manipulate wholesale prices in the power exchange market through escalating power plant outages that caused market disorder (on the other hand, there was a retail price cap that did not allow the investor-owned utilities to pass the increasing cost of wholesale purchases on to consumers). The delay and inability of the regulators to predict the crisis and respond to it (it was only after a certain time that a wholesale cap was set by the Federal Energy Regulatory Commission and an increase of retail prices was allowed to the investor-owned utilities). Emphasizing the manipulation of the market by the energy generators, in Figure 4.2 one may observe the rapid increase of out-of-schedule power plant outages during the period of the crisis, even exceeding 10 GW (approximately 20% of the total installed capacity) and responsible for three series of rolling blackouts. No prediction could have captured the 300% and 400% increases in power plant outages. The analogous increase in wholesale prices, resulting from the appearing power deficit, caused the major suppliers (the three major investor-owned utilities, IOUs) to be trapped between remarkable wholesale price increases and a fixed retail price (see Figure 4.3). Further, as seen in Figure 4.3, in the early days of deregulation a relatively smooth trend was encountered as far as wholesale market prices are concerned, giving no hint of the rapid increase of prices that followed. Accordingly, although not influenced to the same extent as the IOUs, the instant impact on the final consumers must also be considered.
Note that, according to a rough forecast of retail electricity prices based on the respective past data, the increase of retail prices was not to be expected, either because deregulation promised a lowering of prices or because the trend applied entailed much lower prices than the ones actually presented at the time (see also Figure 4.4). Similarly, predictions involving oil pricing before the 1973 crisis and relying on extrapolation techniques (Anon, 1973) expected that world energy consumption would keep up increasing rates of 5% up to 2000. Had one managed to somehow foresee the 1973 oil price increase, the predictions made would not have been exclusively based on the past data trend, which would undoubtedly provide a misjudgement of future prices (see also Figure 4.5). What actually followed for the years to come (1980 to 2000) was a 20-year mean annual increase rate of 1.7%. Furthermore, using only quantitative data, no one could have predicted before the crisis that the USA would cut back on oil use. In Figure 4.6, the response of the USA to the crisis reveals the review of energy patterns issued by the government for the times to come. What is also interesting to note in the figure is the lead time needed to adapt to the new situation encountered (e.g. the natural gas contribution share started increasing 5 years after the crisis). Another critical point concerning the weaknesses of forecasting prior to crises, not related to the use of numerical past data, may be met in the case of California. Once the regulators and the state adopted a deregulation system that had been applied successfully elsewhere (Woo et al., 2003), they decided to proceed with certain modifications (i.e. partial deregulation and the imposition of retail price caps) without bothering to consider the different characteristics, features and conditions of operation encountered in the California environment.
Hence, what might have been thought successful elsewhere would not a priori be successful in California as well. Finally, if the modification of market structures and potential manipulations had been taken into account via the implementation of alternative scenarios assessing the risk of deregulating the Californian electricity market, certain versatile mechanisms that would instantly respond to a potential crisis might have been put forward. From the analysis provided it becomes clear that forecasting methods that rely solely on past data trends, disregard the wishes of relevant actors and major players, and do not consider the conditions forming the environment where the phenomenon develops cannot capture a broader view of the situation and thus cannot give valid predictions.

Question 5

As already addressed, the limited ability of quantitative parameters and extrapolation techniques to provide valid forecasting, especially in the case where a crisis was to follow, is indisputable. To validate the conclusion made and further support Godet's beliefs, an example is presently given. Using the installed nuclear power data between 1967 and 1987, along with the application of extrapolation techniques (the forecast function is used here), one may present the expected nuclear capacity time evolution for the next twenty years. A straightforward comparison of the extrapolation results with the respective real data for the period 1987 to 2007 is available in Figure 5.1. What of course cannot be captured by the extrapolation technique is the Chernobyl crisis, which deeply influenced any further development of nuclear installations. It was on the 26th of April 1986 that reactor number four at the Chernobyl Nuclear Power Plant, located in Ukraine, exploded. By considering the magnitude of the consequences that the Chernobyl accident entailed (UNDP and UNICEF, 2002), one may easily realize the cut-back of nuclear capacity in the years to come.
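The extrapolation referred to above is essentially an ordinary least-squares trend line. A minimal sketch, using purely illustrative capacity figures rather than the actual installed-capacity series, shows how such a forecast is produced and why it cannot anticipate the post-Chernobyl stagnation:

```python
def linear_forecast(years, values, target_year):
    """Fit an ordinary least-squares line to (years, values) and
    extrapolate it to target_year (what a spreadsheet forecast function does)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x
    return intercept + slope * target_year

# Illustrative installed nuclear capacity in GW -- not the real data series.
years = [1967, 1972, 1977, 1982, 1987]
capacity_gw = [10, 40, 100, 170, 250]
print(round(linear_forecast(years, capacity_gw, 2007)))  # 480: the trend line keeps climbing
```

Whatever the input series, the fitted line simply carries the 1967-1987 slope forward; a structural break such as Chernobyl leaves no trace in the pre-1987 data and therefore cannot appear in the forecast.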
Furthermore, what is interesting to note is the different influence that the Chernobyl accident had on countries around the world. In Figure 5.2 one may see the immediate response of the Russians, the Germans and the Ukrainians, while it took a little longer for the USA to reconsider its nuclear program. By contrast, countries like France and Japan continued to install nuclear plants, while on the other hand Italy abandoned its nuclear program and gradually decommissioned all of its plants (NEA, 2007). What is evaluated here are the conditions configuring the future. Although at a global level nuclear capacity did stagnate, this was not the case for every country. Depending on each nation's needs, requirements and obligations, a different energy policy may be drawn. If these factors are not properly weighed in the forecasting process, the outcome cannot be valid. Based on Figures 5.1 and 5.2, one may also note the lead time of both the international community and the selected countries. Regarding the response of the world as a whole, a period of 3 to 4 years is to be considered for the international community to perform the actions concerned with the decision of cutting back on nuclear power. As already noted, the varying response times met in different countries may be partially ascribed to the distance from the area of the accident. However, a bundle of parameters should be evaluated in order to explain and predict each actor's wishes, obligations and decisions. Moreover, when investigating the long-term evolution of nuclear power, one should also consider the factor of a rapidly changing environment. Since the Chernobyl accident and the stagnation of nuclear power, any attempt to re-establish previous growth rates has had to deal with competitors such as the galloping natural gas market, the return of the coal sector and the maturity of renewable energy technologies (Lovins, 2005).
Besides, the considerations regarding waste management, decommissioning expenses and the risk of a new Chernobyl still remain strong.

PART B - Introduction

That Europe is becoming increasingly dependent on imported energy is indisputable. According to the estimations of the recent business-as-usual scenarios (EC, 2007), it is expected that the energy import dependency of Europe will increase from the present 50% to a total of 65% by 2030. Within this forecast, reliance on imports of natural gas is expected to increase from 57% to 84%, while the respective increase for oil imports corresponds to an additional 11%, i.e. from 82% to 93%. In this context, European countries and Russia hold a strong interdependency bond based on the significant European energy imports of oil and natural gas supplied by Russia. Note that loss of autonomy is always a side effect of an interdependent relationship, as the parties are constrained by their need for one another. Gazprom, being the largest Russian company and the greatest natural gas exporter in the world (Cedigaz, 2007), constantly raises its share in the European market, with the respective volume of natural gas supplies reaching 161.5 billion cubic meters during 2006 (Gazprom, 2007), equal to approximately 26% of total European natural gas needs. Being also Russia's sole natural gas exporter (according to the Federal Law on Natural Gas Exports adopted in July 2006), Gazprom alone utilizes the existing natural gas pipelines in order to supply Europe (see also Appendix, Existing Natural Gas Pipelines). Meanwhile, a series of recent and past events, mainly disputes with Ukraine and Belarus (Bruce, 2005; Stern, 2006), have called into question the security of supply towards Europe, revealing the potential gaming behavior of the Russians, either to satisfy political purposes or simply to play the energy card in terms of increased pricing.
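Before examining responses, it is worth fixing the scale of the 200% increase threatened in the Gazprom scenario: a 200% rise means the export price triples. A back-of-the-envelope sketch, using an assumed baseline contract price (not a published Gazprom figure) together with the 161.5 bcm volume cited above:

```python
# Assumed baseline export price in $ per 1000 m^3 -- illustrative only.
base_price = 250.0
volume_bcm = 161.5                    # Gazprom supplies to Europe in 2006

new_price = base_price * (1 + 2.00)   # a 200% increase triples the price
# extra annual cost, in billions of dollars:
extra_cost_bn = (new_price - base_price) * volume_bcm * 1e9 / 1000 / 1e9

print(new_price)                # 750.0
print(round(extra_cost_bn, 2))  # 80.75
```

At any assumed baseline the proportions are the same: the bill for the affected volume triples, which is why the scenario is treated as a crisis rather than a routine price adjustment.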
Similar to the 1973 energy crisis and the recent major oil price increases, a scenario concerned with a rise in the price of European natural gas supplies by Gazprom is to be examined. The scenario supposes that unless Gazprom's desire for a 200% increase in natural gas prices is satisfied, supply towards Europe will be stopped.

Question 6

Given the threat of a 200% price increase of natural gas heading towards European countries, an effort is presently made to investigate the measured responses of both an energy supplier and an energy user involved in the potential crisis. Because of the particular features of the subject under investigation, several cases of different energy suppliers and users may be examined. A macroscopic approach may consider two major sides, i.e. the European countries and Gazprom (Russia). However, a closer look focusing on the country level and considering organizations as well is thought to be essential in order to better evaluate the situation. As already seen in the previous question concerned with the evolution of nuclear power, not all countries responded in the same way to the Chernobyl crisis (NEA, 2007). Working at the country level, energy users will derive from the main Gazprom customers in both Western-Central Europe and the Commonwealth of Independent States (CIS)-Baltic countries (see also Table 6.I). On the other hand, the major energy supplier shall refer to either Gazprom or another natural gas supplier. The alternative of considering suppliers of different energy sources will also be outlined. Furthermore, both conservative and more extreme solutions responding to the problem will be considered.
Using the available information in the international literature and between 1000 and 1500 words: 1. Mention the qualitative parameters that may be considered in future energy price scenarios. For this purpose take the year 2020 and list, with a brief explanation, the parameters you consider should be included. 2. Which of these parameters can you reasonably quantify? (Attempt to identify at least five parameters) 3. Do you agree with this specific aspect of Godets proposition? Why or why not? 4. Evaluate a crisis impact of the accuracy of technology forecasting. Identify the parameters characterizing the crisis aspects. Accordingly, present your opinion about the validity of the forecasts. 5. Using the installed nuclear power data between 1967 and 1987 estimate (using extrapolation techniques) the expected nuclear power time evolution between 1987 and 2007. Comment on the accuracy of your forecasts in relation with the real data. Can you mention any lead time between the major accident of Chernobyl and the reaction of the international electrical power market? PART TWO The OPEC oil price rise in 1973 had an important effect on energy use and energy efficiency, although much of the impact was short-lived. In 2003-4 the oil price effectively doubled, reaching $50/barrel for a period and lately it has reached over $90/barrel. A major player now is Gazprom in Russia News has broken that Gazprom will cut supplies of natural gas to Europe unless it is allowed to raise prices by 200% for export customers, (Customers in Russia historically pay much lower prices). Using the available information in the international literature and between 2000 and 2500 words: 6. Describe your measured response to this, as either an energy Supplier or major energy user. 7. Would you say that your response was based upon â€Å"out of the box† solutions, or a more conservative, incremental approach? 8. 
Discuss the relative merits and limitations of each of these possible responses, identifying what you believe the two approaches mean. 9. How will this crisis influence the future of European economies? How could these effects be mitigated? Make your own forecasts. Your answers to 6-8 above are based upon assumed positions within organisations which may employ many people. The next part of this question relates to the impact that rising energy prices and, perhaps more importantly, the effects of climate change may have on your own style of living. 10. At a personal/domestic level, can you foresee a situation in which we may consider that, for the benefit of all, we may need to make do with less, in terms of capital goods, travel, and perceived acceptable levels of comfort? PART A-Introduction Based on the Prospective approach and the scenarios method (Godet, 1982), Michel Godet noted the limitations of classical forecasting, which is concerned with quantification and models (see also Appendix, Table Ap.I). According to Godet, models that only consider quantified parameters do not take into account the development of new relationships and the possible changes in trends. The impossibility of forecasting the future solely as a function of past data is directly related to the omission of qualitative and non-quantifiable parameters such as the wishes and behaviour of relevant actors (Godet, 1982). Furthermore, to structure future scenarios, the variables related to the phenomenon under investigation and the variables configuring its environment should be recognized and analyzed in detail. Besides, the interrelationships among variables, the relative power of the fundamental actors, their strategies and available resources, as well as the objectives and constraints that must be overcome, should also be taken into account.
Treating energy as a commodity under the view of conventional economic theories, markets and price mechanisms are used to allocate the respective resources. More specifically, it is the interaction of demand and supply in the markets that allocates resources and largely shapes prices, and it is within the broader ecosystem's boundaries that these market interactions take place. Energy pricing, with energy being perceived either as an input or as a potentially polluting source of our ecosystem, clearly stands upon both of the sub-disciplines of resource and environmental economics (Sweeney, 2004), also depending on the social, political and technological status of the present and of the period to come until 2020. In this context, one may acknowledge a bundle of parameters that may be considered for configuring the respective future energy price scenarios. What is important to note is that, in line with the beliefs of Godet, the parameters involved should be studied in terms of their interrelationships, while qualitative and non-quantified parameters should be taken into account as well. Question 1 As already mentioned, the configuration of prices within a market -the energy market currently discussed- is largely dependent on the supply and demand balance. This is measured by the respective supply and demand tension, expressing the status of a commodity in market terms and providing indications concerning potential energy price changes. While high tension implies price imbalance, the opposite holds for low tension. Hence, in order to evaluate future energy prices on the basis of parameters, one should identify the parameters that influence the supply-demand balance in each of the fields previously acknowledged (i.e. social, political, environmental, economic and technological). In this context, in 1.1 the most influential of the parameters configuring energy prices may be found.
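The tension mechanism described above can be sketched as a toy index: excess demand relative to supply produces upward price pressure, excess supply the opposite. This is purely illustrative, assumed for exposition; neither the formula nor the numbers come from the text.

```python
def tension(demand, supply):
    """Toy supply-demand tension index: positive when demand outstrips
    supply (upward price pressure), negative when the market is slack."""
    return (demand - supply) / supply

# Illustrative values only -- none of these numbers come from the text.
high = tension(demand=110.0, supply=100.0)  # tight market: positive tension
low = tension(demand=95.0, supply=100.0)    # slack market: negative tension
```

The sign of the index, rather than its magnitude, is what the essay's argument relies on: high positive tension signals likely price rises.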
Energy markets are largely influenced by economic growth factors expressed on the basis of Gross Domestic Product (GDP), inflation, interest and unemployment rates. Given economic growth along with the parameter of demographics (regarding both population increase and migration), one may picture the corresponding trend in energy consumption (i.e. the demand side). Next, policy decisions concerning the determination of the fuel mix are decisive as far as energy pricing is concerned. For instance, if fossil fuels continue to dominate, this will stimulate stricter pollution-prevention legislation (e.g. taxation) and policies for tackling climate change and global warming that will raise energy prices. In parallel, the reinforcement of the respective market holders, potentially leading to strong monopolies, should also be expected. Turning to renewable energy sources may on the one hand -for some of the technologies- imply an adjustment period for the corresponding markets to balance, and on the other entail significant environmental benefits, in monetary terms as well. The evident effects of global warming and climate change support the implementation of mitigation measures towards the reduction of greenhouse gas (GHG) emissions, and these measures hold a key role in respect of the future. Reserves hold a key role in the future configuration of energy prices, not only in terms of scarcity but also in terms of production costs, and this is directly related to technological development concerning the exploitation of new deposits and the promotion of substitutes. As already implied, the power of existing markets is another key factor, while the efficiency and absorption of energy investments -the investment shares and outcomes of research and development efforts should be underlined- must also be taken into account.
The factors concerned with the quality of life suggest an additional parameter that may affect energy consumption patterns, and one that cannot be easily captured despite the indices recommended so far (Allen, 1991). Moreover, as properly put in the Annual Energy Outlook of 2007 (EIA, 2007a), energy market projections are subject to much uncertainty (unanticipated events). Many of the events that shape energy markets, and therefore the price of energy as well, cannot be foreseen. These include unexpected weather events and natural disasters (Rezek and Blair, 2008), major innovations and technological breakthroughs (Marbán and Valdés-Solís, 2007; Varandas, 2008), disruptions and upheavals at the political level (Stern, 2006) with analogous societal consequences, the outbreak of a war (Tahmassebi, 1986; Fernandez, 2008) or a nuclear accident, all of them either smouldering or implying blind spots that cannot be directly projected and consequently quantified. Besides, another area of uncertainty is concerned with the fact that even the steady evolution of established trends cannot be guaranteed. Summarizing, a brief explanation was presently given on how each of the parameters acknowledged may influence energy pricing. Additionally, an effort was also made to give a short description of the interrelationships among parameters, supporting one of Godet's arguments. Insisting on the interrelationship of variables, several of the parameters previously encountered should be diffused to every major regional energy market, the latter being largely influenced by the relationship between fuel types and energy sectors (see also 1.2). Eventually, one may arrive at a rather complex system that tracks the evolution of influential parameters within the balance between energy types and energy sectors, revealing the crucial role of the energy fuel mix previously discussed.
In what follows, an effort is made to reasonably quantify some of the parameters acknowledged. Question 2 Given the bundle of parameters that are thought to influence future energy pricing, a certain number of them can be quantified. For instance, the parameters of population, economic growth, energy consumption, greenhouse gas emissions, energy reserves, and energy fuel mix can be expressed in numerical terms. Demographic growth examines how regional and global demography changes over time. According to the United Nations projections (UN, 2006), world population will increase by over 1 billion people in the years to come until 2020, suggesting an annual increase rate of 1.1%. While in some areas negative population growth is to be considered (e.g. European countries), the opposite may be encountered for some Asian countries (e.g. India) where overpopulation is met (see for example 2.1 with EIA forecasts). Besides, the migration of people comprises an additional factor influencing energy patterns via the unequal population distribution already encountered due to birth and mortality rates. Based on the energy consumption trends (2.2), it is expected that energy demand for all energy products will increase in the years to come, even to levels that supply may not be able to adequately meet (Asif and Muneer, 2007). In fact, the annual world energy consumption growth is approximately 2%, with projections supporting future average rates of 1.1% per annum (EIA, 2007b). By combining the two parameters examined so far, one may derive the most substantial index, energy per capita, clearly establishing the differentiation in energy consumption patterns among world regions (see also question 10).
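The annual rates quoted above (1.1% population growth and a projected 1.1% average growth in energy consumption) translate into projections by simple compounding. A minimal sketch follows; the base-year values (6.6 billion people and 470 EJ of primary energy in 2007) are assumed round figures for illustration, not taken from the text.

```python
def project(value, annual_rate, years):
    """Compound a base value forward at a fixed annual growth rate."""
    return value * (1.0 + annual_rate) ** years

# UN-style projection quoted in the text: ~1.1% annual population growth.
# The 2007 base of 6.6 billion is an assumed round figure.
pop_2020 = project(6.6, 0.011, 2020 - 2007)

# Energy demand at the projected 1.1% average annual rate (EIA, 2007b);
# the 470 EJ base for 2007 is illustrative, not from the text.
energy_2020 = project(470.0, 0.011, 2020 - 2007)
```

With these inputs the population projection lands near 7.6 billion by 2020, i.e. roughly the "over 1 billion people" increase the text cites.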
Furthermore, according to the WEO claims (WEC, 2007) that energy generated from fossil fuels will remain the main energy source (expected to cover almost 83% of global energy demand in 2030), and given the 2020 time horizon, much depends on the appearing constraints of world energy reserves, especially those regarding oil and natural gas. While certain studies sound reassuring (WCI, 2007), others, questioning the extent to which production output can increase, ring the alarm of forthcoming peaks within the next one or two decades (Bentley, 2002). If the latter is valid, the corresponding demand will not be met, prices will rise, inflation and international tension will become very likely, and energy users will inevitably have to ration (Wirl, 2008). Overall, what the combination of the energy mix with energy reserves provides is a measure of security of supply, the latter configuring the supply and demand tensions that largely shape energy prices. Besides, targets set in respect of the further penetration of renewable energy sources also provide a quantification view; e.g. the EWEA target of 22% coverage of European electricity consumption by 2030 (EWEA, 2006). Next, expressing economic growth on the basis of gross domestic product (GDP) suggests a steady increase, averaging 3% to 4% per year (IMF, 2004), over the period from 1970 to 2003. Again, the inequality to be considered among different world regions is directly related to the previous parameters, illustrating the variation in energy requirements. A characteristic example is China, demonstrating an average annual GDP growth rate 2.4 percentage points greater than the world average. In 2.3, the respective trends of GDP growth up to the year 2020 may be obtained.
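The 2.4-percentage-point differential quoted above compounds dramatically over a 33-year period. The sketch below shows the mechanics; only the differential comes from the text, while the 3.5% world average is an assumed value inside the 3-4% range cited.

```python
def relative_size(rate_a, rate_b, years):
    """Compounded size of economy A relative to economy B after a period,
    given their fixed annual growth rates."""
    return ((1.0 + rate_a) / (1.0 + rate_b)) ** years

# Text gives only the differential: China's average annual GDP growth
# exceeded the world average by 2.4 percentage points over 1970-2003.
# A world average of 3.5% is assumed here (within the 3-4% range quoted).
china_vs_world = relative_size(0.035 + 0.024, 0.035, 2003 - 1970)
```

Under these assumptions China's economy more than doubles relative to the world average over the period, which is why regional inequality matters so much for energy requirements.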
Finally, the environmental impact of energy use, expressed on the basis of GHG emissions, not only considers the fuel mix and energy consumption but also takes into account the technology used for energy generation. Taking CO2, an increase of 17 Gt over the 34-year period from 1970 to 2004 (IPCC, 2007) indicates the strong upward trend, also presented in 2.4. Given also some of the commitments adopted in order to mitigate the greenhouse effect (e.g. the Kyoto protocol), however, further quantification, not relying solely on past trends, is possible. The stimulation of additional mitigation measures until 2020 is rather likely, imposing both the need for shifting to non-fossil fuels and for developing cleaner energy generation technologies. Considering the trends of the various parameters illustrated above, one may sense that the tensions between supply and demand, comprising the main driver of energy prices, are going to rise. Energy consumption, GDP and population rates on the one hand demonstrate the demand side, while declining reserves and mitigation measures describe the opposite, supply side. In between, the decisions on future energy fuel mix patterns, although able to completely reverse the energy markets' status quo, are not thought likely to vary radically within the next 10 to 15 years. Hence, unless some major changes occur, the rising tensions between supply and demand imply both instability and an increase of prices on a global level, with strong differentiation to be encountered among different world regions. As far as the degree of energy price variation is concerned, the implementation of forecasting may both incorporate all of the pre-mentioned parameters and provide various scenarios considering each one's expected future time evolution. Question 3 As previously seen, several parameters were acknowledged in order to form future energy price scenarios.
While some of them were possible to quantify, others, although not quantified, were equally important inputs to keep in mind. Apart from the given inaccuracy of data (whether high- or low-level) coupled with unstable models and the pertinacity of explaining the future in terms of the past, Godet emphasizes the lack of a global and qualitative approach to forecasting (Godet, 1982). Although quantitative methods may prove reliable enough and reasonably accurate for short-term forecasts, the same is not valid for forecasts concerned with longer periods. The greater the distance from the reference point, the more obvious is the inability of quantitative data to provide valid forecasts (see also 3.1). In this context, it is critical to comment on the relativity of time scales noted in the study of various phenomena. Hence, what may seem short-term for one phenomenon studied may actually comprise a long-period forecast for another that appears to be rapidly changing over time. In any case, the chances of significant changes in the environment in which the phenomenon under study develops are considerably higher as the time horizon becomes longer, and it would be more or less naïve to depend solely on forecasting methods like the extrapolation of trends. Furthermore, the complexity of the phenomena studied and the interdependence among the influencing parameters call for the inclusion of both quantitative and qualitative parameters, with Godet clearly addressing the complementarity between prospective and classical forecasting (Godet, 1982). It was in fact during the first section of this part that the analysis of energy pricing configuration revealed the importance of the interaction between quantitative and qualitative parameters. Energy price could not be disengaged from the parallel evolution of parameters that, even though not easily quantified, do structure the phenomenon's environment (e.g.
political, technological, economic, social, legal and other aspects). What must be outlined here is that, similar to the scaling of decision making (strategic-long term, innovative-medium term, operational-short term), the role of quantitative data gradually fades out as we tend to conceptualize the entire phenomenon environment. Hence, the broader the view, again the more obvious is the inability of quantitative data to support reliable forecasting (see also 2.1). Even in its extreme point of view, Godet's proposition perfectly fits the task of diagnosing forthcoming crises. A crisis is already extremely difficult to predict; omitting parameters such as the wishes of relevant actors and other influential factors that cannot be quantified makes it impossible even to sense one. It is in this context that one should not disregard the importance of other forecasting resources -apart from data- including assumptions, insight and judgment, all of them involving the subjectivity factor. If one manages to get over the reef of the NIH (not-invented-here) syndrome, creativity and broad-minded thinking are also essential elements of good forecasting. Question 4 1973 may be regarded as the most pivotal year in energy history. The energy crisis defining the period began on October 17, 1973, when the Arab members of OPEC, along with Egypt and Syria, all together comprising OAPEC, decided to place an embargo on shipments of crude oil to nations that had supported Israel in its conflict with Syria and Egypt, mainly targeting the United States and the Netherlands. This decision also brought about major oil price increases. Because OPEC was the dominant oil distributor at the time, the price increase had serious impacts on the national economies of the targeted countries, therefore suggesting a crisis of international range.
Although the embargo was lifted in March 1974, the effects of the energy crisis, mainly in terms of price increase, lingered on throughout the 1970s, with the Iranian crisis aggravating the situation (see also 4.1). For a crisis that was mainly expressed on the basis of high energy pricing, the outcome of the previous questions concerned with the determination of the parameters influencing energy prices may be illustrated. In fact, the impact of a more or less unanticipated event changed the correlation patterns between supply and demand and imposed high tensions in the market balance, the latter entailing the high volatility of the oil price and its potential outbursts ever since (Regnier, 2007). The market structures, the dominance of OPEC and the political tension all suggest aspects of the crisis illustrating the importance of considering qualitative parameters as well. As Godet well pointed out, one cannot neglect the wishes and decisions of major actors when configuring the future (e.g. OPEC members). Similar to the 1973 oil crisis, the California energy crisis occurring some 27 years later also revealed the strength of key actors in completely changing what was meant to follow a past trend or ameliorate a past situation. The deregulation of the electricity market in California (during 1998), aiming to decrease retail prices that were the highest among the States, turned into a complete fiasco that abetted the manipulation of the market by the energy companies. The crisis's main characteristics involved very high wholesale prices, interrupted service to customers (rolling blackouts), bankrupt utilities and huge state expenditures, while the crisis's main causes were: The lack of new generating capacity inside California (California was heavily dependent on energy imports from nearby states (CEC, 2007a)).
The coincidence of a dry year and natural gas price spikes with other market-oriented factors (California was largely based on hydro and natural gas for consumers' electrification). The market structure allowing generators to manipulate wholesale prices in the power exchange market through escalating power plant outages that caused market disorder (on the other hand, there was a retail price cap that did not allow the investor-owned utilities to pass the increasing cost of wholesale purchases on to consumers). The delay and inability of the regulators to predict the crisis and respond to it (it was only after a certain time that a wholesale cap was set by the Federal Energy Regulatory Commission and an increase of retail prices was allowed to the investor-owned utilities). Emphasizing the manipulation of the market by the energy generators, in 4.2 one may observe the rapid increase in out-of-schedule power plant outages during the period of the crisis, even exceeding 10 GW (approximately 20% of the total installed capacity) and responsible for three series of rolling blackouts. No prediction could have captured the 300% and 400% increases in power plant outages. The analogous increase in wholesale prices, the result of the appearing power deficit, caused the major suppliers (the three major investor-owned utilities (IOUs)) to be trapped between remarkable wholesale price increases and a fixed retail price (see 4.3). Further, as seen in 4.3, in the early days of deregulation a relatively smooth trend was encountered as far as wholesale market prices are concerned, which gave no hint of the rapid increase of prices to follow. Accordingly, although not influenced to the same extent that the IOUs were, the immediate impact on final consumers must also be considered.
Note that according to the rough forecast of retail electricity prices -based on the respective past data- the increase of retail prices was not to be expected, either because deregulation promised a lowering of prices or because the trend applied entailed much lower prices than the ones actually presented at the time (see also 4.4). Similarly, predictions involving oil pricing before the 1973 crisis and relying on extrapolation techniques (Anon, 1973) expected that world energy consumption would keep up its increasing rate of 5% up to 2000. Had the 1973 oil price increase somehow been foreseen, the predictions made would not have been exclusively based on the past data trend, which would undoubtedly provide a misjudgement of future prices (see also 4.5). What actually followed for the years to come (1980 to 2000) was a 20-year mean annual increase rate of 1.7%. Furthermore, if only quantitative data had been used, no one could have predicted before the crisis that the USA would cut back on oil use. In 4.6, the response of the USA to the crisis reveals the review of energy patterns issued by the government for the times to come. What is also interesting to note is the lead time needed to adapt to the new situation encountered (e.g. the natural gas contribution share started increasing 5 years after the crisis). Another critical point concerning the weaknesses of forecasting prior to crises, not related to the use of numerical past data, may be met in the case of California. Once the regulators and the state adopted a deregulation system that had been applied successfully elsewhere (Woo et al., 2003), they decided to proceed with certain modifications (i.e. partial deregulation and the imposition of retail price caps) without bothering to consider the different characteristics, features and conditions of operation encountered in the California environment.
Hence, what might have been thought successful elsewhere would not a priori be successful in California as well. Finally, if the modification of market structures and potential manipulations had been taken into account via the implementation of alternative scenarios assessing the risk of deregulating the Californian electricity market, certain versatile mechanisms that would instantly respond to a potential crisis might have been put forward. From the analysis provided it becomes clear that forecasting methods that rely solely on past data trends, disregard the wishes of relevant actors and major players, and do not consider the conditions forming the environment in which the phenomenon develops cannot capture a broader view of the situation and thus cannot give valid predictions. Question 5 As already addressed, the limited ability of quantitative parameters and extrapolation techniques to provide valid forecasting, especially in the case where a crisis was to follow, is indisputable. To validate the conclusion made and further support Godet's beliefs, an example is presently given. Using the installed nuclear power data between 1967 and 1987 along with the application of extrapolation techniques (the FORECAST function is used here), one may present the expected nuclear capacity time evolution for the next twenty years. A straightforward comparison of the extrapolations with the respective real data for the period 1987 to 2007 is available in 5.1. What of course cannot be captured by the extrapolation technique is the Chernobyl crisis, deeply influencing any further development of nuclear installations. It was on the 26th of April 1986 that reactor number four at the Chernobyl Nuclear Power Plant, located in Ukraine, exploded. By considering the magnitude of the consequences that the Chernobyl accident entailed (UNDP and UNICEF, 2002), one may easily understand the cutback of nuclear capacity in the years to come.
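The extrapolation referred to above is presumably the spreadsheet FORECAST function, i.e. a least-squares linear fit extended beyond the observed data. The sketch below reimplements that computation; the capacity figures are invented for illustration, since the actual 1967-1987 series is not reproduced here. The point is only the mechanics, and the characteristic overshoot once the post-Chernobyl stagnation sets in.

```python
def linear_forecast(x_new, xs, ys):
    """Least-squares linear fit through (xs, ys), evaluated at x_new --
    the same computation as the spreadsheet FORECAST function."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y + slope * (x_new - mean_x)

# Illustrative (not actual) installed nuclear capacity figures, in GW:
years = [1967, 1972, 1977, 1982, 1987]
capacity = [10, 40, 100, 170, 280]

# Extending the 1967-1987 trend to 2007 overshoots badly, because the
# post-Chernobyl stagnation is invisible to a pure trend extrapolation.
projected_2007 = linear_forecast(2007, years, capacity)
```

With these made-up inputs the 2007 projection lands far above any plausible stagnated level, mirroring the gap between the extrapolation and the real data discussed in 5.1.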
Furthermore, what is interesting to note is the different influence that the Chernobyl accident had on countries around the world. In 5.2 one may see the immediate response of the Russians, the Germans and the Ukrainians, while it took a little longer for the USA to reconsider its nuclear program. On the contrary, countries like France and Japan continued to install nuclear plants, while Italy abandoned its nuclear program and gradually decommissioned all of its plants (NEA, 2007). What is evaluated here are the conditions configuring the future. Although at a global level nuclear capacity did stagnate, this was not the case for every country. Depending on each nation's needs, requirements and obligations, a different energy policy may be drawn. If these factors are not properly weighed in the forecasting process, the outcome cannot be valid. Based on 5.1 and 5.2, one may also note the lead time of both the international community and the selected countries. Regarding the response of the world as a whole, a period of 3 to 4 years is to be considered for the international community to perform the actions concerned with the decision to cut back on nuclear power. As already noted, the varying response times met in different countries may be partially ascribed to the distance from the area of the accident. However, a bundle of parameters should be evaluated in order to explain and predict each actor's wishes, obligations and decisions. Moreover, when investigating the long-term evolution of nuclear power, one should also consider the factor of a rapidly changing environment. Since the Chernobyl accident and the stagnation of nuclear power occurred, any attempt to re-establish previous growth rates has had to deal with competitors such as the galloping natural gas market, the return of the coal sector and the maturity of renewable energy technologies (Lovins, 2005).
Besides, the considerations regarding waste management, decommissioning expenses and the risk of a new Chernobyl still remain strong. PART B-Introduction That Europe is becoming increasingly dependent on imported energy is indisputable. According to the estimations of the recent business-as-usual scenarios (EC, 2007), it is expected that the energy import dependency of Europe will increase from the present 50% to a total of 65% by 2030. Within this forecast, reliance on imports of natural gas is expected to increase from 57% to 84%, while the respective increase for oil imports corresponds to an additional 11%, i.e. from 82% to 93%. In this context, European countries and Russia hold a strong interdependency bond based on the significant European energy imports of oil and natural gas supplied by Russia. Note that loss of autonomy is always a side effect of an interdependent relationship, as the parties are constrained by their need for one another. Gazprom, being the largest Russian company and the greatest natural gas exporter in the world (Cedigaz, 2007), constantly raises its share in the European market, with the respective volume of natural gas supplies reaching 161.5 billion cubic meters during 2006 (Gazprom, 2007), equal to approximately 26% of total European natural gas needs. Being also Russia's sole natural gas exporter (according to the Federal Law on Natural Gas Exports adopted in July 2006), Gazprom alone utilizes the existing natural gas pipelines in order to supply Europe (see also Appendix, Existing Natural Gas Pipelines). Meanwhile, a series of recent and past events, mainly disputes with Ukraine and Belarus (Bruce, 2005; Stern, 2006), have questioned the security of supply towards Europe, revealing the potential gaming behavior of the Russians, either to satisfy political purposes or simply to play the energy card in terms of increased pricing.
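The two figures quoted above allow a quick back-of-the-envelope consistency check: if Gazprom's 161.5 bcm covered roughly 26% of European natural gas needs in 2006, the implied total European demand, and hence the volume at stake should supply be cut, follows directly. The 26% is the rounded share from the text, so the result is approximate.

```python
# Figures quoted in the text: Gazprom supplied 161.5 bcm to Europe in 2006,
# covering approximately 26% of total European natural gas needs.
gazprom_supply_bcm = 161.5
gazprom_share = 0.26

# Implied total European demand, and the shortfall a full cut-off would leave.
implied_total_bcm = gazprom_supply_bcm / gazprom_share
shortfall_bcm = gazprom_supply_bcm  # the entire Gazprom volume is at stake
```

This puts total European demand near 620 bcm, which is the scale against which the responses in Question 6 have to be judged.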
Similar to the 1973 energy crisis and the recent major oil price increases, a scenario concerning a rise in the price of European natural gas supplies by Gazprom is to be examined. The scenario posits that unless Gazprom's demand for a 200% increase in natural gas prices is satisfied, supply towards Europe will be stopped. Question 6 Given the threat of a 200% price increase of natural gas heading towards European countries, an effort is presently made to investigate the measured responses of both an energy supplier and an energy user involved in the potential crisis. Because of the particular features attributed to the subject under investigation, several cases of different energy suppliers and users may be examined. A macroscopic approach may consider two major sides, i.e. the European countries and Gazprom (Russia). However, a closer look focusing on the country level and considering organizations as well is thought to be essential in order to better evaluate the situation. As already seen in the previous question concerned with the evolution of nuclear power, not all countries responded in the same way to the Chernobyl crisis (NEA, 2007). Working at a country level, energy users will derive from the main Gazprom customers in both Western-Central Europe and the Commonwealth of Independent States (CIS)-Baltic countries (see also Table 6.I). On the other hand, the major energy supplier shall refer to either Gazprom or another natural gas supplier. The alternative of considering suppliers of different energy sources will also be outlined. Furthermore, both conservative and more extreme solutions responding to the problem will be considered. Table 6.I: Key s o