Narrow your search

Library

National Bank of Belgium (2)

ULB (2)

KU Leuven (1)


Resource type

book (5)


Language

English (5)


Year

2007 (5)

Listing 1 - 5 of 5

Book
Underlying Dimensions of Knowledge Assessment : Factor Analysis of the Knowledge Assessment Methodology Data
Authors: ---
Year: 2007 Publisher: Washington, D.C., The World Bank,


Abstract

The Knowledge Assessment Methodology (KAM) database measures variables that may be used to assess the readiness of countries for the knowledge economy and has many policy uses. Formal analysis using KAM data faces the problem of which variables to choose, and why. Rather than make these decisions in an ad hoc manner, the authors recommend factor-analytic methods to distill the information contained in the many KAM variables into a smaller set of "factors." Their main objective is to quantify the factors for each country, and to do so in a way that allows comparisons of the factor scores over time. The authors investigate both principal components and true factor-analytic methods, and emphasize simple structures that give the factors a clear political-economic meaning while still allowing comparisons over time.
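
The abstract gives no code; the sketch below is only a minimal illustration of the kind of dimension reduction it describes, using synthetic data and illustrative names rather than the actual KAM indicators or the authors' procedure.

```python
# Hypothetical illustration: distilling many knowledge-economy indicators
# into a few factors, in the spirit of the methods the abstract describes.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)

# Stand-in data: 50 countries x 12 indicators (the real KAM has many more).
n_countries, n_indicators = 50, 12
X = rng.normal(size=(n_countries, n_indicators))

# Standardize so indicators measured on different scales are comparable.
Z = StandardScaler().fit_transform(X)

# Principal components: orthogonal linear combinations ordered by variance.
pca = PCA(n_components=3)
pc_scores = pca.fit_transform(Z)
print("variance explained:", pca.explained_variance_ratio_)

# "True" factor analysis with a varimax rotation, which tends to produce the
# simple structure the abstract emphasizes (each indicator loading mainly on
# one factor, which eases interpretation).
fa = FactorAnalysis(n_components=3, rotation="varimax")
factor_scores = fa.fit_transform(Z)   # one row of factor scores per country
loadings = fa.components_.T           # indicator-by-factor loading matrix
```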


Book
Underlying Dimensions of Knowledge Assessment : Factor Analysis of the Knowledge Assessment Methodology Data
Authors: ---
Year: 2007 Publisher: Washington, D.C., The World Bank,


Abstract

The Knowledge Assessment Methodology (KAM) database measures variables that may be used to assess the readiness of countries for the knowledge economy and has many policy uses. Formal analysis using KAM data faces the problem of which variables to choose, and why. Rather than make these decisions in an ad hoc manner, the authors recommend factor-analytic methods to distill the information contained in the many KAM variables into a smaller set of "factors." Their main objective is to quantify the factors for each country, and to do so in a way that allows comparisons of the factor scores over time. The authors investigate both principal components and true factor-analytic methods, and emphasize simple structures that give the factors a clear political-economic meaning while still allowing comparisons over time.


Book
Models for ecological data : an introduction
Author:
ISBN: 0691220123 Year: 2007 Publisher: Princeton, New Jersey ; Oxfordshire, England : Princeton University Press,


Abstract

The environmental sciences are undergoing a revolution in the use of models and data. Facing ecological data sets of unprecedented size and complexity, environmental scientists are struggling to understand and exploit powerful new statistical tools for making sense of ecological processes. In Models for Ecological Data, James Clark introduces ecologists to these modern methods in modeling and computation. Assuming only basic courses in calculus and statistics, the text introduces readers to basic maximum likelihood and then works up to more advanced topics in Bayesian modeling and computation. Clark covers both classical statistical approaches and powerful new computational tools and describes how complexity can motivate a shift from classical to Bayesian methods. Through an accompanying lab manual, the book introduces readers to the practical work of data modeling and computation in the language R. Based on a successful course at Duke University and National Science Foundation-funded institutes on hierarchical modeling, Models for Ecological Data will enable ecologists and other environmental scientists to develop useful models that make sense of ecological data.

- Consistent treatment from classical to modern Bayes
- Underlying distribution theory to algorithm development
- Many examples and applications
- Does not assume statistical background
- Extensive supporting appendixes
- Accompanying lab manual in R
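
The progression the blurb describes, from maximum likelihood to Bayesian computation, can be sketched in a few lines. The book's own lab manual is in R; the toy example below uses Python instead, with simulated count data and a hypothetical Poisson abundance model that is not taken from the text.

```python
# Toy version of the classical-to-Bayesian progression: maximum likelihood
# for a Poisson abundance model, then a posterior for the same parameter
# explored with a short random-walk Metropolis sampler.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
counts = rng.poisson(lam=4.2, size=60)   # simulated organism counts per plot

# --- Maximum likelihood: minimize the negative Poisson log-likelihood ---
def neg_loglik(lam):
    return -stats.poisson.logpmf(counts, lam).sum()

mle = optimize.minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
print("MLE of lambda:", mle.x)           # close to the sample mean

# --- Bayesian: Gamma(2, 1) prior, posterior sampled with Metropolis ---
def log_post(lam):
    if lam <= 0:
        return -np.inf
    return (stats.gamma.logpdf(lam, a=2.0, scale=1.0)
            + stats.poisson.logpmf(counts, lam).sum())

draws, lam = [], counts.mean()
for _ in range(5000):
    prop = lam + rng.normal(scale=0.3)    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop                         # accept; otherwise keep current value
    draws.append(lam)

print("posterior mean of lambda:", np.mean(draws[1000:]))  # drop burn-in
```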


Book
How Good a Map? : Putting Small Area Estimation to the Test
Authors: --- --- ---
Year: 2007 Publisher: Washington, D.C., The World Bank,


Abstract

The authors examine the performance of small area welfare estimation. The method combines census and survey data to produce spatially disaggregated poverty and inequality estimates. To test the method, they compare predicted welfare indicators for a set of target populations with their true values. They construct target populations using actual data from a census of households in a set of rural Mexican communities. They examine estimates along three criteria: accuracy of confidence intervals, bias, and correlation with true values. The authors find that while point estimates are very stable, the precision of the estimates varies with the simulation method used. While the original approach of numerical gradient estimation yields standard errors that seem appropriate, some computationally less intensive simulation procedures yield confidence intervals that are slightly too narrow. The precision of estimates is shown to diminish markedly if unobserved location effects at the village level are not well captured in the underlying consumption models. With well-specified models there is only slight evidence of bias, but the authors show that bias increases if the underlying models fail to capture latent location effects. Correlations between estimated and true welfare at the local level are highest for mean expenditure and poverty measures and lower for inequality measures.
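
A stripped-down sketch of the simulation logic described here: fit a consumption model on survey data, then repeatedly simulate consumption for census households and summarize village-level poverty rates across rounds. All data and parameters below are synthetic, and the sketch deliberately omits the village-level location effects whose importance the abstract stresses.

```python
# Hypothetical small-area ("poverty map") simulation: survey-fitted model,
# census covariates, and simulation-based standard errors for village rates.
import numpy as np

rng = np.random.default_rng(2)

# Survey side: log consumption observed for a modest sample of households.
n_survey, k = 400, 3
X_s = np.column_stack([np.ones(n_survey), rng.normal(size=(n_survey, k))])
beta_true = np.array([2.0, 0.4, -0.2, 0.1])
y_s = X_s @ beta_true + rng.normal(scale=0.5, size=n_survey)

# Census side: the same covariates for many households, consumption unobserved.
n_census, n_villages = 5000, 20
X_c = np.column_stack([np.ones(n_census), rng.normal(size=(n_census, k))])
village = rng.integers(0, n_villages, size=n_census)

# Fit the consumption model on the survey (ordinary least squares).
beta_hat, ss_res, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
sigma2 = ss_res[0] / (n_survey - X_s.shape[1])
cov_beta = sigma2 * np.linalg.inv(X_s.T @ X_s)

poverty_line = 2.0                            # in log-consumption units here
rates = []
for _ in range(200):                          # simulation rounds
    beta_r = rng.multivariate_normal(beta_hat, cov_beta)      # parameter draw
    eps_r = rng.normal(scale=np.sqrt(sigma2), size=n_census)  # idiosyncratic error
    y_c = X_c @ beta_r + eps_r                # simulated log consumption
    poor = (y_c < poverty_line).astype(float)
    rates.append(np.bincount(village, weights=poor, minlength=n_villages)
                 / np.bincount(village, minlength=n_villages))

rates = np.array(rates)
headcount = rates.mean(axis=0)                # village-level poverty rates
std_error = rates.std(axis=0)                 # simulation-based standard errors
print(headcount[:5], std_error[:5])
```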


Book
How Good a Map? : Putting Small Area Estimation to the Test
Authors: --- --- ---
Year: 2007 Publisher: Washington, D.C., The World Bank,


Abstract

The authors examine the performance of small area welfare estimation. The method combines census and survey data to produce spatially disaggregated poverty and inequality estimates. To test the method, they compare predicted welfare indicators for a set of target populations with their true values. They construct target populations using actual data from a census of households in a set of rural Mexican communities. They examine estimates along three criteria: accuracy of confidence intervals, bias, and correlation with true values. The authors find that while point estimates are very stable, the precision of the estimates varies with the simulation method used. While the original approach of numerical gradient estimation yields standard errors that seem appropriate, some computationally less intensive simulation procedures yield confidence intervals that are slightly too narrow. The precision of estimates is shown to diminish markedly if unobserved location effects at the village level are not well captured in the underlying consumption models. With well-specified models there is only slight evidence of bias, but the authors show that bias increases if the underlying models fail to capture latent location effects. Correlations between estimated and true welfare at the local level are highest for mean expenditure and poverty measures and lower for inequality measures.
