Listing 1 - 5 of 5
The Knowledge Assessment Methodology (KAM) database measures variables that may be used to assess countries' readiness for the knowledge economy and has many policy uses. Formal analysis of KAM data faces the problem of which variables to choose and why. Rather than make these decisions in an ad hoc manner, the authors recommend factor-analytic methods to distill the information contained in the many KAM variables into a smaller set of "factors." Their main objective is to quantify the factors for each country, and to do so in a way that allows comparisons of the factor scores over time. The authors investigate both principal components and true factor-analytic methods, emphasizing simple structures that give the factors a clear political-economic meaning while still allowing comparisons over time.
Correlation --- Correlations --- Covariance --- Data --- E-Business --- Errors --- Factor Analysis --- Information Security and Privacy --- Matrices --- Matrix --- Measurement --- Missing Data --- Orthogonality --- Population Parameters --- Principal Components Analysis --- Private Sector Development --- Regression Analysis --- Sample Size --- Samples --- Science and Technology Development --- Scientists --- Standard Errors --- Stata --- Statistical and Mathematical Sciences --- Variables
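The principal-components step of the approach described above can be sketched in Python. The country-by-indicator matrix below is hypothetical (not real KAM data), and an SVD-based principal components computation stands in for the authors' full procedure:

```python
import numpy as np

# Hypothetical country-by-indicator matrix (rows: countries, columns:
# KAM-style indicators). Values are illustrative only.
X = np.array([
    [7.2, 6.8, 5.9, 6.1],
    [3.1, 2.9, 4.0, 3.5],
    [8.5, 8.1, 7.7, 7.9],
    [5.0, 4.8, 5.2, 5.1],
    [2.2, 2.5, 3.1, 2.8],
], dtype=float)

# Standardize each indicator so the components reflect the correlation
# structure rather than the indicators' differing scales.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal components via SVD: the columns of Vt.T are the loadings,
# and U * S gives the component scores for each country.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S

# Share of total variance explained by each component; when the
# indicators are highly correlated, one dominant "factor" emerges.
explained = S**2 / np.sum(S**2)
print(explained.round(3))
```

Because the hypothetical indicators move together across countries, the first component absorbs most of the variance, which is the kind of simple structure the abstract emphasizes.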
The environmental sciences are undergoing a revolution in the use of models and data. Facing ecological data sets of unprecedented size and complexity, environmental scientists are struggling to understand and exploit powerful new statistical tools for making sense of ecological processes. In Models for Ecological Data, James Clark introduces ecologists to these modern methods in modeling and computation. Assuming only basic courses in calculus and statistics, the text introduces readers to basic maximum likelihood and then works up to more advanced topics in Bayesian modeling and computation. Clark covers both classical statistical approaches and powerful new computational tools and describes how complexity can motivate a shift from classical to Bayesian methods. Through an available lab manual, the book introduces readers to the practical work of data modeling and computation in the language R. Based on a successful course at Duke University and National Science Foundation-funded institutes on hierarchical modeling, Models for Ecological Data will enable ecologists and other environmental scientists to develop useful models that make sense of ecological data.
- Consistent treatment from classical to modern Bayes
- Underlying distribution theory to algorithm development
- Many examples and applications
- Does not assume statistical background
- Extensive supporting appendixes
- Accompanying lab manual in R
Environmental sciences --- Ecology --- Mathematical models. --- Dirichlet distribution. --- Fisher Information. --- Hadamard product. --- Poisson. --- Weibull distribution. --- autocorrelation. --- autocovariance. --- beta distribution. --- beta-binomial. --- binomial distribution. --- completing the square. --- confidence interval. --- correlation. --- covariance. --- differential equation. --- eigenanalysis. --- exponential distribution. --- extreme value distribution. --- fecundity. --- frequentist. --- gamma distribution. --- generation time. --- integrated analysis. --- inverse gamma. --- kriging. --- logistic population growth. --- longitudinal model. --- multinomial. --- negative binomial. --- positive definite matrix. --- predictive loss. --- spectral density. --- stage structured model. --- uniform distribution.
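The shift from maximum likelihood to Bayesian computation that the book describes can be illustrated with a minimal example, assuming Poisson-distributed ecological counts and a conjugate Gamma prior. The counts and prior parameters below are invented for illustration, and the sketch uses Python rather than the book's R:

```python
import numpy as np

# Hypothetical counts of individuals observed in n survey plots; the
# Poisson mean lambda is the ecological quantity of interest.
counts = np.array([3, 0, 2, 4, 1, 2, 3, 1])

# Maximum likelihood: for Poisson data, the MLE of lambda is simply
# the sample mean.
lam_mle = counts.mean()

# Bayesian alternative: with a conjugate Gamma(a0, b0) prior
# (shape/rate), the posterior is Gamma(a0 + sum(counts), b0 + n).
a0, b0 = 1.0, 0.5          # weakly informative prior (an assumption)
a_post = a0 + counts.sum()
b_post = b0 + len(counts)
lam_post_mean = a_post / b_post

print(lam_mle, round(lam_post_mean, 3))
```

With conjugate structure the posterior is available in closed form; the book's later chapters motivate simulation-based methods (e.g., MCMC) for the hierarchical models where no such closed form exists.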
The authors examine the performance of small area welfare estimation. The method combines census and survey data to produce spatially disaggregated poverty and inequality estimates. To test the method, they compare predicted welfare indicators for a set of target populations with their true values. They construct target populations using actual data from a census of households in a set of rural Mexican communities. They evaluate the estimates against three criteria: accuracy of confidence intervals, bias, and correlation with true values. The authors find that while point estimates are very stable, the precision of the estimates varies with alternative simulation methods. While the original approach of numerical gradient estimation yields standard errors that seem appropriate, some computationally less intensive simulation procedures yield confidence intervals that are slightly too narrow. The precision of the estimates diminishes markedly if unobserved location effects at the village level are not well captured in the underlying consumption models. With well-specified models there is only slight evidence of bias, but the authors show that bias increases if the underlying models fail to capture latent location effects. Correlations between estimated and true welfare at the local level are highest for mean expenditure and poverty measures, and lower for inequality measures.
Capita Expenditure --- Degrees of Freedom --- Delta Method --- Econometrics --- Education --- Estimates of Poverty --- Explanatory Variables --- Finance and Financial Sector Development --- Financial Literacy --- Health, Nutrition and Population --- Household Survey --- Household Survey Data --- Households --- Macroeconomics and Economic Growth --- Parameter Estimates --- Population Census --- Population Policies --- Poverty Mapping --- Poverty Mapping Methodology --- Poverty Maps --- Poverty Measures --- Poverty Reduction --- Pro-Poor Growth --- Rural Development --- Rural Poverty Reduction --- Science and Technology Development --- Science Education --- Scientific Research and Science Parks --- Simulation Procedures --- Simulations --- Small Area Estimation --- Small Area Estimation Poverty Mapping --- Standard Deviation --- Standard Errors --- Statistical and Mathematical Sciences --- Variance-Covariance Matrix
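The core simulation logic (fit a consumption model on survey data, then repeatedly simulate consumption for census households to estimate local poverty) can be sketched as follows. All data here are synthetic, and the plain Monte Carlo draw of the error term is a simplification of the paper's numerical-gradient and more elaborate simulation procedures:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical survey: log consumption explained by two household
# covariates plus an intercept.
n_survey = 200
X_s = np.column_stack([np.ones(n_survey), rng.normal(size=(n_survey, 2))])
beta_true = np.array([2.0, 0.5, -0.3])
y_s = X_s @ beta_true + rng.normal(scale=0.4, size=n_survey)

# Fit the consumption model on the survey data (OLS).
beta_hat, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
resid = y_s - X_s @ beta_hat
sigma_hat = resid.std(ddof=X_s.shape[1])

# --- Hypothetical census for one target area, with the same covariates
# observed for every household.
n_census = 1000
X_c = np.column_stack([np.ones(n_census), rng.normal(size=(n_census, 2))])

# Simulate log consumption for census households and average the
# implied poverty headcount over simulation rounds.
z = np.exp(1.9)  # hypothetical poverty line in consumption units
rates = []
for _ in range(200):
    y_sim = X_c @ beta_hat + rng.normal(scale=sigma_hat, size=n_census)
    rates.append(np.mean(np.exp(y_sim) < z))
headcount = float(np.mean(rates))
print(round(headcount, 3))
```

The spread of the per-round rates is what the confidence intervals discussed in the abstract summarize; omitting a village-level random effect from the simulated error, as done here for brevity, is exactly the kind of misspecification the authors show degrades precision.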