Sem 2

Uploaded by Babak Moghadasin
Babak Moghadasin- babamo749- 8408319351

Working Paper R2006:002, "Evaluation: definitions, methods and models. An ITPS framework", Federica Calidoni-Lundberg

Traditional evaluation emerged in the 1960s, grounded in scientific methods. Many alternatives to it followed; one of the earliest is responsive evaluation, which deliberately sacrifices some measurement precision in order to produce outputs that are more useful to a program's stakeholders. The broad aim of evaluation is social improvement, but it is hard to settle on a single definition, since so many different groups of practitioners work in the field. Evaluation draws on several bodies of thought: scientific research and methods, economic theory and public choice, organization and management theory, and political and administrative science. ITPS applies evaluation throughout the whole policy process, including the design of program alternatives, which sets it apart from conventional project evaluation. While the purpose of project evaluation is normally to assess effectiveness or efficiency, these are only intermediate purposes; the goals of program evaluation are much broader, for example measuring the output of public policies, judging the efficiency of programs, supporting organizational learning, improving efficiency, and informing changes in government. The purposes of evaluation fall into three categories: first, evaluation for development; second, evaluation for accountability; third, evaluation for knowledge. The important caveat is that these three categories are not sharply separated and overlap considerably. There is also a variety of evaluation models, each with its own advantages, disadvantages and uses: the result, actor and economic models. The result model concentrates on outcomes and checks whether they satisfy the stated aims; it also examines a program's side effects. Its methodologies divide into goal-bound and goal-free procedures. The economic model focuses on the expenses and cost aspects of a program.
Cost allocation is a simpler concept than both cost-effectiveness and cost-benefit analysis; it helps the program manager determine unit costs in order to allocate budgets. Cost-effectiveness and cost-benefit analysis are not the same: the first is comparative, while the second mostly focuses on a single program. The actor model focuses on the technology, clients and users of a program. To judge the value of a program, an evaluation method is needed. The two broad types are quantitative and qualitative, and the choice between them has long been controversial; each has its own advocates, although the modern tendency is to use quantitative methods within a qualitative framework. In qualitative work, the three main ways of gathering data are participant observation, interviews, and the analysis of artifacts or documents. Some examples of qualitative methods follow. Analytic induction looks for what phenomena have in common and is used by social scientists to form subgroups. Focus groups ask people in an interactive group setting about their feelings on a topic. Ethnography is a long-term investigation that gathers data about conversations and daily behavior, and uses genealogical methods. Participant observation, which includes quantitative aspects as well, tries to gather information about a group of people within their own community. These data can be obtained through informal interviews, direct observation, participation in the life of the group, self-analysis, collective discussions, analysis of the group's personal documents, and life histories.

Some quantitative methods: statistical surveys are called structured interviews when administered by a researcher, who presents every respondent with the same questions; when the respondent administers the survey, it is called a questionnaire, or self-administered. Surveys are standardized, capture large amounts of data, and are easy to administer, flexible, and economical in time and money; however, they are not well suited to complicated social phenomena, and they depend on respondents' honesty, motivation, and ability to answer. Content (textual) analysis, descriptive statistical techniques, and inferential statistical techniques are other quantitative methods. Advocates of triangulation hold that quantitative and qualitative methods applied to the same question should yield similar results. The evaluator's role includes improving the justification of policies, influencing changes in trends, and so on. To remove doubt and uncertainty from evaluation results, we need thorough validation, triangulation, and meta-evaluation. Quantitative techniques also try to categorize the costs of a system; these costs may arise from the functions of the system, the people involved in it, or its life-cycle. By categorizing costs, we can then recognize all the sources of cost.
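The distinction drawn above between cost-effectiveness and cost-benefit analysis can be made concrete with a minimal numeric sketch. All programs, figures and function names below are hypothetical, invented purely for illustration:

```python
# Illustrative sketch: cost-effectiveness vs cost-benefit analysis.
# All figures and program names are hypothetical.

def cost_effectiveness(cost, outcome_units):
    """Cost per unit of outcome; used to COMPARE programs."""
    return cost / outcome_units

def benefit_cost_ratio(benefits, costs):
    """Monetized benefits over costs for a SINGLE program."""
    return benefits / costs

# Comparative use: two hypothetical job-training programs.
program_a = cost_effectiveness(cost=50_000, outcome_units=100)  # cost per trainee placed
program_b = cost_effectiveness(cost=90_000, outcome_units=150)
best = min(("A", program_a), ("B", program_b), key=lambda t: t[1])

# Single-program use: one program's monetized benefits vs its costs.
bcr = benefit_cost_ratio(benefits=120_000, costs=90_000)  # > 1 means benefits exceed costs

print(best, bcr)
```

The comparative nature of cost-effectiveness (ranking programs by cost per outcome unit) versus the single-program focus of the benefit-cost ratio is exactly the contrast the text draws.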

"Evaluating the Information Behavior methods: Formative evaluations of two methods for assessing the functionality and usability of electronic information resources", Stephann Makri, Ann Blandford, Anna L. Cox, Simon Attfield, Claire Warwick

Some newer evaluations aim at improving methods rather than at measuring their efficiency; in other words, they treat formative evaluation as more important than summative evaluation. Summative evaluations assess the effectiveness of finished methods; formative evaluations, on the other hand, examine recently developed methods in order to develop and improve them. The activities an individual may engage in when identifying a need for information, searching for that information, and using or transferring it are collectively called information behavior. Evaluation methods close the loop between studying interactive behavior with electronic information resources and ensuring that those resources are actually designed to support that behavior. Formative evaluations should tell us whether information resource developers will use the methods in the future or whether the methods need further improvement. Criteria that formative evaluations can address include Downstream Utility, Practicalities, Analyst Activities, Persuasive Power, Usability, and Learnability. Characteristics assessed by summative evaluations include Productivity, Thoroughness, Validity, Effectiveness, Efficiency, and Reliability. To analyze usability issues, an evaluator asks a user to work with a specified electronic resource and then observes and takes notes, capturing how the user works with it, how comfortable the user feels, and how much effort the user expends, all without prompting; the evaluator simply records everything about the session. Later, the evaluator analyzes the think-aloud data from several users, recording the information in detail (for example, which page, which part, which function) and assigning it to the records.
The outputs consist of both usability findings and functionality findings. These outputs, together with data gathered from focus groups (in which participants are asked about the usability and learnability of the methods and how they feel about them) and summary questionnaire data, are used to evaluate the IB methods. An important point in method development is the trade-offs involved: (1) making a method easy to learn without much teaching or training, by providing a suitable form of training, such as online training instead of face-to-face sessions; (2) using intelligent design to reduce the complexity of learning, so that the method is easy to learn from the early steps even without theoretical knowledge; (3) making the method flexible and user-friendly without inviting misuse, by providing easily understood, accessible help for tailoring the method to end-users; (4) balancing flexibility against learnability, where learnability is the more important of the two; and (5) ensuring that the method is usable, helpful and readable, and that accompanying help is available.
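The observation-and-coding process described above (recording which page, which part and which function each think-aloud observation refers to, then aggregating across several users) can be sketched as follows. The record fields and sample data are assumptions for illustration, not the authors' actual coding scheme:

```python
# Hypothetical sketch of coding think-aloud observations.
# Field names and sample data are invented; the paper's actual
# coding scheme may differ.
from collections import Counter

# Each record: (user, page, part, function, issue observed)
observations = [
    ("u1", "search", "filter panel", "date filter", "could not find control"),
    ("u2", "search", "filter panel", "date filter", "could not find control"),
    ("u1", "results", "toolbar", "export", "unclear icon"),
    ("u3", "search", "filter panel", "date filter", "could not find control"),
]

def issue_frequencies(records):
    """Count how often each (location, issue) pair occurs across users."""
    return Counter((page, part, function, issue)
                   for _, page, part, function, issue in records)

freq = issue_frequencies(observations)
most_common_issue, count = freq.most_common(1)[0]
print(most_common_issue, count)
```

Tallying coded observations across users in this way is what lets an evaluator distinguish a one-off slip from a recurring usability problem.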

"Information Technology Evaluation: Is It Different?", Philip Powell

There are four reasons for using IT as a strategic resource, and economic considerations underlie all of them: first, gaining competitive advantage; second, developing new business; third, increasing productivity and performance; fourth, enabling new ways of managing and organizing. Evaluation techniques divide into objective and subjective. Objective techniques attach values to the inputs and outputs of a system by quantifying them. Subjective methods try to elicit valuations that depend on users' opinions and preferences; they give users a sense of contribution and loyalty, and while they may still quantify, they place less emphasis on money. Sometimes user satisfaction and company benefit conflict. Methodologies fall into three groups: first, exponential or factorial methods; second, unit-rate methods, which use a database of costs for each identified unit, as in civil-engineering work; third, so-called operational methods, which encompass all the methods used within a project. Formal evaluation is underused, sometimes because there is no motivation to apply it: for example, firms that have no objectives for an investment decision have no scale against which to measure. Computerization is closely connected to computer-system assessment, and in some scenarios cost is not the main issue, for example when computerization accounts for only a fraction of total costs, or in R&D, where investments may be approved without an explicit need or any expectation of profit on the investment. Computer systems also commonly lack specification: customers do not know exactly what they want, which leads to tool-led or analyst-led scenarios. Technical problems are larger and occur more often here than in stable environments. There is a trade-off among cost, efficiency and time, and Norris suggests that the duration of a project is harder to estimate than its cost.
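The unit-rate methodology mentioned above (a database of costs per identified unit, as in civil-engineering work) amounts to a lookup-and-multiply calculation. The following sketch uses invented rates and quantities; no real rates database is implied:

```python
# Hypothetical sketch of unit-rate cost estimation.
# Unit rates and quantities are invented for illustration only.

unit_rates = {                # cost per unit, from a (hypothetical) rates database
    "workstation": 1_200.0,
    "server": 8_000.0,
    "training_day": 450.0,
}

def estimate_cost(quantities, rates):
    """Total cost = sum over items of quantity * unit rate."""
    return sum(qty * rates[item] for item, qty in quantities.items())

project = {"workstation": 25, "server": 2, "training_day": 10}
total = estimate_cost(project, unit_rates)
print(total)
```

The approach presumes that the unit catalogue covers every cost source in the project, which is exactly why the text pairs it with systematic cost categorization.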

Question: As discussed above, no tool leads us to a definitive evaluation answer; in particular, tools based on qualitative methods are not fully trustworthy. Knowing this, how can we trust the results of such tools and make changes or decisions based on them, given that our project concerns an electronic passport?
