#' fish
#'
#' Wooldridge Source: K Graddy (1995), “Testing for Imperfect Competition at the Fulton Fish Market,” RAND Journal of Economics 26, 75-92. Professor Graddy's collaborator on a later paper, Professor Joshua Angrist at MIT, kindly provided me with these data. Data loads lazily.
#'
#' @section Notes: This is a nice example of how to go about finding exogenous variables to use as instrumental variables. Often, weather conditions can be assumed to affect supply while having a negligible effect on demand. If so, the weather variables are valid instrumental variables for price in the demand equation. It is a simple matter to test whether prices vary with weather conditions by estimating the reduced form for price.
#'
#' Used in Text: pages 443, 580
#'
#' @docType data
#'
#' @usage data('fish')
#'
#' @format A data.frame with 97 observations on 20 variables:
#' \itemize{
#' \item \strong{prca:} price for Asian buyers
#' \item \strong{prcw:} price for white buyers
#' \item \strong{qtya:} quantity sold to Asians
#' \item \strong{qtyw:} quantity sold to whites
#' \item \strong{mon:} =1 if Monday
#' \item \strong{tues:} =1 if Tuesday
#' \item \strong{wed:} =1 if Wednesday
#' \item \strong{thurs:} =1 if Thursday
#' \item \strong{speed2:} min past 2 days wind speeds
#' \item \strong{wave2:} avg max last 2 days wave height
#' \item \strong{speed3:} 3 day lagged max windspeed
#' \item \strong{wave3:} avg max wave hghts of 3 & 4 day lagged hghts
#' \item \strong{avgprc:} ((prca*qtya) + (prcw*qtyw))/(qtya + qtyw)
#' \item \strong{totqty:} qtya + qtyw
#' \item \strong{lavgprc:} log(avgprc)
#' \item \strong{ltotqty:} log(totqty)
#' \item \strong{t:} time trend
#' \item \strong{lavgp_1:} lavgprc[_n-1]
#' \item \strong{gavgprc:} lavgprc - lavgp_1
#' \item \strong{gavgp_1:} gavgprc[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(fish)
"fish"
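The note above suggests estimating the reduced form for price to check the weather instruments. A minimal sketch of that logic on simulated data (the data-generating process and all coefficient values are invented for illustration; with the package installed, `data('fish')` supplies the real variables):

```r
set.seed(42)
n <- 97
# stand-ins for the weather instruments wave2 and speed3
wave2  <- rnorm(n, mean = 4,  sd = 1)
speed3 <- rnorm(n, mean = 15, sd = 3)
# weather shifts supply (and hence price); demand shocks are independent of weather
lavgprc <- -0.3 + 0.25 * wave2 + 0.05 * speed3 + rnorm(n, sd = 0.2)
ltotqty <-  8.0 - 0.80 * lavgprc + rnorm(n, sd = 0.4)

# reduced form: do prices vary with weather conditions?
rf <- lm(lavgprc ~ wave2 + speed3)
summary(rf)  # a joint F test on wave2 and speed3 checks instrument relevance

# two-stage least squares by hand: replace price with its reduced-form fitted value
demand <- lm(ltotqty ~ fitted(rf))
coef(demand)
```

The manual second stage gives the 2SLS point estimate but incorrect standard errors; in practice something like `AER::ivreg(ltotqty ~ lavgprc | wave2 + speed3)` handles both stages at once.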
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/fish.R
#' fringe
#'
#' Wooldridge Source: F. Vella (1993), “A Simple Estimator for Simultaneous Models with Censored Endogenous Regressors,” International Economic Review 34, 441-457. Professor Vella kindly provided the data. Data loads lazily.
#'
#' @section Notes: Currently, this data set is used in only one Computer Exercise – to illustrate the Tobit model. It can be used much earlier. First, one could just ignore the pileup at zero and use a linear model where any of the hourly benefit measures is the dependent variable. Another possibility is to use this data set for a problem set in Chapter 4, after students have read Example 4.10. That example, which uses teacher salary/benefit data at the school level, finds the expected tradeoff, although it appears to be less than one-to-one. By contrast, if you do a similar analysis with FRINGE.RAW, you will not find a tradeoff. A positive coefficient on the benefit/salary ratio is not too surprising because we probably cannot control for enough factors, especially when looking across different occupations. The Michigan school-level data is more aggregated than one would like, but it does restrict attention to a more homogeneous group: high school teachers in Michigan.
#'
#' Used in Text: pages 624-625
#'
#' @docType data
#'
#' @usage data('fringe')
#'
#' @format A data.frame with 616 observations on 39 variables:
#' \itemize{
#' \item \strong{annearn:} annual earnings, $
#' \item \strong{hrearn:} hourly earnings, $
#' \item \strong{exper:} years work experience
#' \item \strong{age:} age in years
#' \item \strong{depends:} number of dependents
#' \item \strong{married:} =1 if married
#' \item \strong{tenure:} years with current employer
#' \item \strong{educ:} years schooling
#' \item \strong{nrtheast:} =1 if live in northeast
#' \item \strong{nrthcen:} =1 if live in north central
#' \item \strong{south:} =1 if live in south
#' \item \strong{male:} =1 if male
#' \item \strong{white:} =1 if white
#' \item \strong{union:} =1 if union member
#' \item \strong{office:}
#' \item \strong{annhrs:} annual hours worked
#' \item \strong{ind1:} industry dummy
#' \item \strong{ind2:}
#' \item \strong{ind3:}
#' \item \strong{ind4:}
#' \item \strong{ind5:}
#' \item \strong{ind6:}
#' \item \strong{ind7:}
#' \item \strong{ind8:}
#' \item \strong{ind9:}
#' \item \strong{vacdays:} $ value of vac. days
#' \item \strong{sicklve:} $ value of sick leave
#' \item \strong{insur:} $ value of employee insur
#' \item \strong{pension:} $ value of employee pension
#' \item \strong{annbens:} vacdays+sicklve+insur+pension
#' \item \strong{hrbens:} hourly benefits, $
#' \item \strong{annhrssq:} annhrs^2
#' \item \strong{beratio:} annbens/annearn
#' \item \strong{lannhrs:} log(annhrs)
#' \item \strong{tenuresq:} tenure^2
#' \item \strong{expersq:} exper^2
#' \item \strong{lannearn:} log(annearn)
#' \item \strong{peratio:} pension/annearn
#' \item \strong{vserat:} (vacdays+sicklve)/annearn
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(fringe)
"fringe"
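As the note suggests, one can first ignore the pileup at zero and fit a linear model, then compare with a Tobit. A minimal sketch on simulated censored data (the DGP and coefficients are invented; `survreg` from the recommended survival package fits the Tobit as a left-censored Gaussian model):

```r
library(survival)  # ships with standard R installations

set.seed(1)
n <- 616
exper <- rexp(n, rate = 1 / 10)
educ  <- sample(8:18, n, replace = TRUE)
# latent hourly benefits; observed benefits pile up at zero
ystar  <- -2 + 0.05 * exper + 0.15 * educ + rnorm(n)
hrbens <- pmax(0, ystar)

# (1) linear model that ignores the censoring
ols <- lm(hrbens ~ exper + educ)

# (2) Tobit: left-censored at zero
tobit <- survreg(Surv(hrbens, hrbens > 0, type = "left") ~ exper + educ,
                 dist = "gaussian")
cbind(OLS = coef(ols), Tobit = coef(tobit))  # OLS slopes are attenuated
```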
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/fringe.R
#' gpa1
#'
#' Wooldridge Source: Christopher Lemmon, a former MSU undergraduate, collected these data from a survey he took of MSU students in Fall 1994. Data loads lazily.
#'
#' @section Notes: This is a nice example of how students can obtain an original data set by focusing locally and carefully composing a survey.
#'
#' Used in Text: pages 75, 77, 81, 129-130, 160, 232, 262, 295-296, 300-301
#'
#' @docType data
#'
#' @usage data('gpa1')
#'
#' @format A data.frame with 141 observations on 29 variables:
#' \itemize{
#' \item \strong{age:} in years
#' \item \strong{soph:} =1 if sophomore
#' \item \strong{junior:} =1 if junior
#' \item \strong{senior:} =1 if senior
#' \item \strong{senior5:} =1 if fifth year senior
#' \item \strong{male:} =1 if male
#' \item \strong{campus:} =1 if live on campus
#' \item \strong{business:} =1 if business major
#' \item \strong{engineer:} =1 if engineering major
#' \item \strong{colGPA:} MSU GPA
#' \item \strong{hsGPA:} high school GPA
#' \item \strong{ACT:} 'achievement' score
#' \item \strong{job19:} =1 if job <= 19 hours
#' \item \strong{job20:} =1 if job >= 20 hours
#' \item \strong{drive:} =1 if drive to campus
#' \item \strong{bike:} =1 if bicycle to campus
#' \item \strong{walk:} =1 if walk to campus
#' \item \strong{voluntr:} =1 if do volunteer work
#' \item \strong{PC:} =1 if pers computer at sch
#' \item \strong{greek:} =1 if fraternity or sorority
#' \item \strong{car:} =1 if own car
#' \item \strong{siblings:} =1 if have siblings
#' \item \strong{bgfriend:} =1 if boy- or girlfriend
#' \item \strong{clubs:} =1 if belong to MSU club
#' \item \strong{skipped:} avg lectures missed per week
#' \item \strong{alcohol:} avg # days per week drink alc.
#' \item \strong{gradMI:} =1 if Michigan high school
#' \item \strong{fathcoll:} =1 if father college grad
#' \item \strong{mothcoll:} =1 if mother college grad
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(gpa1)
"gpa1"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/gpa1.R
#' gpa2
#'
#' Wooldridge Source: For confidentiality reasons, I cannot provide the source of these data. I can say that they come from a midsize research university that also supports men’s and women’s athletics at the Division I level. Data loads lazily.
#'
#' @section
#'
#' Used in Text: pages 106, 184, 208-209, 210-211, 221, 259, 262-263
#'
#' @docType data
#'
#' @usage data('gpa2')
#'
#' @format A data.frame with 4137 observations on 12 variables:
#' \itemize{
#' \item \strong{sat:} combined SAT score
#' \item \strong{tothrs:} total hours through fall semest
#' \item \strong{colgpa:} GPA after fall semester
#' \item \strong{athlete:} =1 if athlete
#' \item \strong{verbmath:} verbal/math SAT score
#' \item \strong{hsize:} size grad. class, 100s
#' \item \strong{hsrank:} rank in grad. class
#' \item \strong{hsperc:} high school percentile, from top
#' \item \strong{female:} =1 if female
#' \item \strong{white:} =1 if white
#' \item \strong{black:} =1 if black
#' \item \strong{hsizesq:} hsize^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(gpa2)
"gpa2"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/gpa2.R
#' gpa3
#'
#' Wooldridge Source: See GPA2.RAW. Data loads lazily.
#'
#' @section
#'
#' Used in Text: pages 246-248, 273, 297-298, 478
#'
#' @docType data
#'
#' @usage data('gpa3')
#'
#' @format A data.frame with 732 observations on 23 variables:
#' \itemize{
#' \item \strong{term:} fall = 1, spring = 2
#' \item \strong{sat:} SAT score
#' \item \strong{tothrs:} total hours prior to term
#' \item \strong{cumgpa:} cumulative GPA
#' \item \strong{season:} =1 if in season
#' \item \strong{frstsem:} =1 if student's 1st semester
#' \item \strong{crsgpa:} weighted course GPA
#' \item \strong{verbmath:} verbal SAT to math SAT ratio
#' \item \strong{trmgpa:} term GPA
#' \item \strong{hssize:} size h.s. grad. class
#' \item \strong{hsrank:} rank in h.s. class
#' \item \strong{id:} student identifier
#' \item \strong{spring:} =1 if spring term
#' \item \strong{female:} =1 if female
#' \item \strong{black:} =1 if black
#' \item \strong{white:} =1 if white
#' \item \strong{ctrmgpa:} change in trmgpa
#' \item \strong{ctothrs:} change in total hours
#' \item \strong{ccrsgpa:} change in crsgpa
#' \item \strong{ccrspop:} change in crspop
#' \item \strong{cseason:} change in season
#' \item \strong{hsperc:} percentile in h.s.
#' \item \strong{football:} =1 if football player
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(gpa3)
"gpa3"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/gpa3.R
#' happiness
#'
#' Wooldridge Data loads lazily.
#'
#' @section
#'
#'
#'
#' @docType data
#'
#' @usage data('happiness')
#'
#' @format A data.frame with 17137 observations on 33 variables:
#' \itemize{
#' \item \strong{year:} gss year for this respondent
#' \item \strong{workstat:} work force status
#' \item \strong{prestige:} occupational prestige score
#' \item \strong{divorce:} ever been divorced or separated
#' \item \strong{widowed:} ever been widowed
#' \item \strong{educ:} highest year of school completed
#' \item \strong{reg16:} region of residence, age 16
#' \item \strong{babies:} household members less than 6 yrs old
#' \item \strong{preteen:} household members 6 thru 12 yrs old
#' \item \strong{teens:} household members 13 thru 17 yrs old
#' \item \strong{income:} total family income
#' \item \strong{region:} region of interview
#' \item \strong{attend:} how often r attends religious services
#' \item \strong{happy:} general happiness
#' \item \strong{owngun:} =1 if own gun
#' \item \strong{tvhours:} hours per day watching tv
#' \item \strong{vhappy:} =1 if 'very happy'
#' \item \strong{mothfath16:} =1 if live with mother and father at 16
#' \item \strong{black:} =1 if black
#' \item \strong{gwbush04:} =1 if voted for G.W. Bush in 2004
#' \item \strong{female:} =1 if female
#' \item \strong{blackfemale:} black*female
#' \item \strong{gwbush00:} =1 if voted for G.W. Bush in 2000
#' \item \strong{occattend:} =1 if attend is 3, 4, or 5
#' \item \strong{regattend:} =1 if attend is 6, 7, or 8
#' \item \strong{y94:} =1 if year == 1994
#' \item \strong{y96:}
#' \item \strong{y98:}
#' \item \strong{y00:}
#' \item \strong{y02:}
#' \item \strong{y04:}
#' \item \strong{y06:} =1 if year == 2006
#' \item \strong{unem10:} =1 if unemployed in last 10 years
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(happiness)
"happiness"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/happiness.R
#' hprice1
#'
#' Wooldridge Source: Collected from the real estate pages of the Boston Globe during 1990. These are homes that sold in the Boston, MA area. Data loads lazily.
#'
#' @section Notes: Typically, it is very easy to obtain data on selling prices and characteristics of homes, using publicly available data bases. It is interesting to match the information on houses with other information – such as local crime rates, quality of the local schools, pollution levels, and so on – and estimate the effects of such variables on housing prices.
#'
#' Used in Text: pages 110, 153-154, 160-161, 165, 211-212, 221, 222, 234, 278, 280, 299, 307
#'
#' @docType data
#'
#' @usage data('hprice1')
#'
#' @format A data.frame with 88 observations on 10 variables:
#' \itemize{
#' \item \strong{price:} house price, $1000s
#' \item \strong{assess:} assessed value, $1000s
#' \item \strong{bdrms:} number of bdrms
#' \item \strong{lotsize:} size of lot in square feet
#' \item \strong{sqrft:} size of house in square feet
#' \item \strong{colonial:} =1 if home is colonial style
#' \item \strong{lprice:} log(price)
#' \item \strong{lassess:} log(assess)
#' \item \strong{llotsize:} log(lotsize)
#' \item \strong{lsqrft:} log(sqrft)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(hprice1)
"hprice1"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/hprice1.R
#' hprice2
#'
#' Wooldridge Source: D. Harrison and D.L. Rubinfeld (1978), “Hedonic Housing Prices and the Demand for Clean Air,” Journal of Environmental Economics and Management 5, 81-102. Diego Garcia, a former Ph.D. student in economics at MIT, kindly provided these data, which he obtained from the book Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, by D.A. Belsley, E. Kuh, and R. Welsch, 1990. New York: Wiley. Data loads lazily.
#'
#' @section Notes: The census contains rich information on variables such as median housing prices, median income levels, average family size, and so on, for fairly small geographical areas. If such data can be merged with pollution data, one can update the Harrison and Rubinfeld study. Presumably, this has been done in academic journals.
#'
#' Used in Text: pages 108, 132-133, 190-191, 196-197.
#'
#' @docType data
#'
#' @usage data('hprice2')
#'
#' @format A data.frame with 506 observations on 12 variables:
#' \itemize{
#' \item \strong{price:} median housing price, $
#' \item \strong{crime:} crimes committed per capita
#' \item \strong{nox:} nit ox concen; parts per 100m
#' \item \strong{rooms:} avg number of rooms
#' \item \strong{dist:} wght dist to 5 employ centers
#' \item \strong{radial:} access. index to rad. hghwys
#' \item \strong{proptax:} property tax per $1000
#' \item \strong{stratio:} average student-teacher ratio
#' \item \strong{lowstat:} perc of people 'lower status'
#' \item \strong{lprice:} log(price)
#' \item \strong{lnox:} log(nox)
#' \item \strong{lproptax:} log(proptax)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(hprice2)
"hprice2"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/hprice2.R
#' hprice3
#'
#' Wooldridge Data loads lazily.
#'
#' @section
#'
#'
#'
#' @docType data
#'
#' @usage data('hprice3')
#'
#' @format A data.frame with 321 observations on 19 variables:
#' \itemize{
#' \item \strong{year:} 1978, 1981
#' \item \strong{age:} age of house
#' \item \strong{agesq:} age^2
#' \item \strong{nbh:} neighborhood, 1-6
#' \item \strong{cbd:} dist. to cent. bus. dstrct, ft.
#' \item \strong{inst:} dist. to interstate, ft.
#' \item \strong{linst:} log(inst)
#' \item \strong{price:} selling price
#' \item \strong{rooms:} # rooms in house
#' \item \strong{area:} square footage of house
#' \item \strong{land:} square footage lot
#' \item \strong{baths:} # bathrooms
#' \item \strong{dist:} dist. from house to incin., ft.
#' \item \strong{ldist:} log(dist)
#' \item \strong{lprice:} log(price)
#' \item \strong{y81:} =1 if year = 1981
#' \item \strong{larea:} log(area)
#' \item \strong{lland:} log(land)
#' \item \strong{linstsq:} linst^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(hprice3)
"hprice3"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/hprice3.R
#' hseinv
#'
#' Wooldridge Source: D. McFadden (1994), “Demographics, the Housing Market, and the Welfare of the Elderly,” in D.A. Wise (ed.), Studies in the Economics of Aging. Chicago: University of Chicago Press, 225-285. The data are contained in the article. Data loads lazily.
#'
#' @section
#'
#' Used in Text: pages 367, 370, 407, 638-639, 822?
#'
#' @docType data
#'
#' @usage data('hseinv')
#'
#' @format A data.frame with 42 observations on 14 variables:
#' \itemize{
#' \item \strong{year:} 1947-1988
#' \item \strong{inv:} real housing inv, millions $
#' \item \strong{pop:} population, 1000s
#' \item \strong{price:} housing price index; 1982 = 1
#' \item \strong{linv:} log(inv)
#' \item \strong{lpop:} log(pop)
#' \item \strong{lprice:} log(price)
#' \item \strong{t:} time trend: t=1,...,42
#' \item \strong{invpc:} per capita inv: inv/pop
#' \item \strong{linvpc:} log(invpc)
#' \item \strong{lprice_1:} lprice[_n-1]
#' \item \strong{linvpc_1:} linvpc[_n-1]
#' \item \strong{gprice:} lprice - lprice_1
#' \item \strong{ginvpc:} linvpc - linvpc_1
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(hseinv)
"hseinv"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/hseinv.R
#' htv
#'
#' Wooldridge Source: J.J. Heckman, J.L. Tobias, and E. Vytlacil (2003), “Simple Estimators for Treatment Parameters in a Latent-Variable Framework,” Review of Economics and Statistics 85, 748-755. Professor Tobias kindly provided the data, which were obtained from the 1991 National Longitudinal Survey of Youth. All people in the sample are males age 26 to 34. For confidentiality reasons, I have included only a subset of the variables used by the authors. Data loads lazily.
#'
#' @section Notes: Because an ability measure is included in this data set, it can be used as another illustration of including proxy variables in regression models. See Chapter 9. Also, one can try the IV procedure with the ability measure included as an exogenous explanatory variable.
#'
#' Used in Text: pages 550, 628
#'
#' @docType data
#'
#' @usage data('htv')
#'
#' @format A data.frame with 1230 observations on 23 variables:
#' \itemize{
#' \item \strong{wage:} hourly wage, 1991
#' \item \strong{abil:} abil. measure, not standardized
#' \item \strong{educ:} highest grade completed by 1991
#' \item \strong{ne:} =1 if in northeast, 1991
#' \item \strong{nc:} =1 if in nrthcntrl, 1991
#' \item \strong{west:} =1 if in west, 1991
#' \item \strong{south:} =1 if in south, 1991
#' \item \strong{exper:} potential experience
#' \item \strong{motheduc:} highest grade, mother
#' \item \strong{fatheduc:} highest grade, father
#' \item \strong{brkhme14:} =1 if broken home, age 14
#' \item \strong{sibs:} number of siblings
#' \item \strong{urban:} =1 if in urban area, 1991
#' \item \strong{ne18:} =1 if in NE, age 18
#' \item \strong{nc18:} =1 if in NC, age 18
#' \item \strong{south18:} =1 if in south, age 18
#' \item \strong{west18:} =1 if in west, age 18
#' \item \strong{urban18:} =1 if in urban area, age 18
#' \item \strong{tuit17:} college tuition, age 17
#' \item \strong{tuit18:} college tuition, age 18
#' \item \strong{lwage:} log(wage)
#' \item \strong{expersq:} exper^2
#' \item \strong{ctuit:} tuit18 - tuit17
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(htv)
"htv"
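The note points to using the ability measure as a proxy variable in a wage equation (Chapter 9). A minimal sketch of the omitted-variable logic on simulated data (the DGP and all numbers are invented for illustration):

```r
set.seed(7)
n <- 1230
abil  <- rnorm(n)                               # observed ability proxy
educ  <- 12 + 0.8 * abil + rnorm(n)             # schooling rises with ability
lwage <- 1 + 0.06 * educ + 0.10 * abil + rnorm(n, sd = 0.3)

# omitting ability biases the return to education upward...
b_short <- unname(coef(lm(lwage ~ educ))["educ"])
# ...adding the proxy pulls the estimate back toward the true 0.06
b_proxy <- unname(coef(lm(lwage ~ educ + abil))["educ"])
c(short = b_short, with_proxy = b_proxy)
```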
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/htv.R
#' infmrt
#'
#' Wooldridge Source: Statistical Abstract of the United States, 1990 and 1994. (For example, the infant mortality rates come from Table 113 in 1990 and Table 123 in 1994.) Data loads lazily.
#'
#' @section Notes: An interesting exercise is to add the percentage of the population on AFDC (afdcper) to the infant mortality equation. Pooled OLS and first differencing can give very different estimates. Adding the years 1998 and 2002 and applying fixed effects seems natural. Intervening years can be added, too, although variation in the key variables from year to year might be minimal.
#'
#' Used in Text: pages 330-331, 339
#'
#' @docType data
#'
#' @usage data('infmrt')
#'
#' @format A data.frame with 102 observations on 12 variables:
#' \itemize{
#' \item \strong{year:} 1987 or 1990
#' \item \strong{infmort:} deaths per 1,000 live births
#' \item \strong{afdcprt:} afdc partic., 1000s
#' \item \strong{popul:} population, 1000s
#' \item \strong{pcinc:} per capita income
#' \item \strong{physic:} drs. per 100,000 civilian pop.
#' \item \strong{afdcper:} percent on AFDC
#' \item \strong{d90:} =1 if year == 1990
#' \item \strong{lpcinc:} log(pcinc)
#' \item \strong{lphysic:} log(physic)
#' \item \strong{DC:} =1 for Washington DC
#' \item \strong{lpopul:} log(popul)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(infmrt)
"infmrt"
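The note observes that pooled OLS and first differencing can give very different estimates here. A minimal simulated sketch of why: a state effect correlated with `afdcper` biases pooled OLS, while differencing 1990 minus 1987 removes it (all numbers invented for illustration):

```r
set.seed(5)
n <- 51                                   # states plus DC, observed in two years
a <- rnorm(n)                             # unobserved state effect
afdc87 <- 10 + 1.5 * a + rnorm(n)         # AFDC participation tracks the state effect
afdc90 <- 10 + 1.5 * a + rnorm(n)
inf87  <- 8 - 0.3 * afdc87 + 2 * a + rnorm(n, sd = 0.5)
inf90  <- 8 - 0.3 * afdc90 + 2 * a + rnorm(n, sd = 0.5)

# pooled OLS: the state effect loads onto afdcper and flips the sign
pooled <- lm(c(inf87, inf90) ~ c(afdc87, afdc90))

# first differencing: the state effect cancels in the 1990-minus-1987 equation
fd <- lm(I(inf90 - inf87) ~ I(afdc90 - afdc87))
c(pooled = unname(coef(pooled)[2]), fd = unname(coef(fd)[2]))
```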
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/infmrt.R
#' injury
#'
#' Wooldridge Source: B.D. Meyer, W.K. Viscusi, and D.L. Durbin (1995), “Workers’ Compensation and Injury Duration: Evidence from a Natural Experiment,” American Economic Review 85, 322-340. Professor Meyer kindly provided the data. Data loads lazily.
#'
#' @section Notes: This data set also can be used to illustrate the Chow test in Chapter 7. In particular, students can test whether the regression functions differ between Kentucky and Michigan. Or, allowing for different intercepts for the two states, do the slopes differ? A good lesson from this example is that a small R-squared is compatible with the ability to estimate the effects of a policy. Of course, for the Michigan data, which has a smaller sample size, the estimated effect is much less precise (but of virtually identical magnitude).
#'
#' Used in Text: pages 458-459, 475-476
#'
#' @docType data
#'
#' @usage data('injury')
#'
#' @format A data.frame with 7150 observations on 30 variables:
#' \itemize{
#' \item \strong{durat:} duration of benefits
#' \item \strong{afchnge:} =1 if after change in benefits
#' \item \strong{highearn:} =1 if high earner
#' \item \strong{male:} =1 if male
#' \item \strong{married:} =1 if married
#' \item \strong{hosp:} =1 if inj. required hosp. stay
#' \item \strong{indust:} industry
#' \item \strong{injtype:} type of injury
#' \item \strong{age:} age at time of injury
#' \item \strong{prewage:} previous weekly wage, 1982 $
#' \item \strong{totmed:} total med. costs, 1982 $
#' \item \strong{injdes:} 4 digit injury description
#' \item \strong{benefit:} real dollar value of benefit
#' \item \strong{ky:} =1 for kentucky
#' \item \strong{mi:} =1 for michigan
#' \item \strong{ldurat:} log(durat)
#' \item \strong{afhigh:} afchnge*highearn
#' \item \strong{lprewage:} log(prewage)
#' \item \strong{lage:} log(age)
#' \item \strong{ltotmed:} log(totmed); = 0 if totmed < 1
#' \item \strong{head:} =1 if head injury
#' \item \strong{neck:} =1 if neck injury
#' \item \strong{upextr:} =1 if upper extremities injury
#' \item \strong{trunk:} =1 if trunk injury
#' \item \strong{lowback:} =1 if lower back injury
#' \item \strong{lowextr:} =1 if lower extremities injury
#' \item \strong{occdis:} =1 if occupational disease
#' \item \strong{manuf:} =1 if manufacturing industry
#' \item \strong{construc:} =1 if construction industry
#' \item \strong{highlpre:} highearn*lprewage
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(injury)
"injury"
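The Chow test mentioned in the note — do the regression functions differ between Kentucky and Michigan? — can be run as an F test comparing a restricted model (common slopes, state-specific intercept) with a fully interacted one. A sketch on simulated data (variable names mirror the data set; the coefficients are invented):

```r
set.seed(3)
n <- 1000
ky       <- rbinom(n, 1, 0.8)             # most observations are from Kentucky
afchnge  <- rbinom(n, 1, 0.5)
highearn <- rbinom(n, 1, 0.5)
ldurat   <- 1 + 0.1 * highearn + 0.2 * afchnge * highearn +
            0.05 * ky + rnorm(n)

# restricted: same slopes in both states, different intercepts
restricted   <- lm(ldurat ~ afchnge * highearn + ky)
# unrestricted: every coefficient may differ by state
unrestricted <- lm(ldurat ~ afchnge * highearn * ky)

ft <- anova(restricted, unrestricted)     # Chow-style F test of equal slopes
ft
```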
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/injury.R
#' intdef
#'
#' Wooldridge Source: Economic Report of the President, 2004, Tables B-64, B-73, and B-79. Data loads lazily.
#'
#' @section
#'
#' Used in Text: pages 356, 377, 430, 547-548
#'
#' @docType data
#'
#' @usage data('intdef')
#'
#' @format A data.frame with 56 observations on 13 variables:
#' \itemize{
#' \item \strong{year:} 1948 to 2003
#' \item \strong{i3:} 3 month T-bill rate
#' \item \strong{inf:} CPI inflation rate
#' \item \strong{rec:} federal receipts, percent GDP
#' \item \strong{out:} federal outlays, percent GDP
#' \item \strong{def:} out - rec
#' \item \strong{i3_1:} i3[_n-1]
#' \item \strong{inf_1:} inf[_n-1]
#' \item \strong{def_1:} def[_n-1]
#' \item \strong{ci3:} i3 - i3_1
#' \item \strong{cinf:} inf - inf_1
#' \item \strong{cdef:} def - def_1
#' \item \strong{y77:} =1 if year >= 1977; change in FY
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(intdef)
"intdef"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/intdef.R
#' intqrt
#'
#' Wooldridge Source: From Salomon Brothers, Analytical Record of Yields and Yield Spreads, 1990. The folks at Salomon Brothers kindly provided the Record at no charge when I was an assistant professor at MIT. Data loads lazily.
#'
#' @section Notes: A nice feature of the Salomon Brothers data is that the interest rates are not averaged over a month or quarter – they are end-of-month or end-of-quarter rates. Asset pricing theories apply to such “point-sampled” data, and not to averages over a period. Most other sources report monthly or quarterly averages. This is a good data set to update and test whether current data are more or less supportive of basic asset pricing theories.
#'
#' Used in Text: pages 405-406, 641, 646-647, 650, 652, 672, 673
#'
#' @docType data
#'
#' @usage data('intqrt')
#'
#' @format A data.frame with 124 observations on 23 variables:
#' \itemize{
#' \item \strong{r3:} bond equiv. yield, 3 mo T-bill
#' \item \strong{r6:} bond equiv. yield, 6 mo T-bill
#' \item \strong{r12:} yield on 1 yr. bond
#' \item \strong{p3:} price of 3 mo. T-bill
#' \item \strong{p6:} price of 6 mo. T-bill
#' \item \strong{hy6:} 100*(p3 - p6[_n-1])/p6[_n-1]
#' \item \strong{hy3:} r3*(91/365)
#' \item \strong{spr63:} r6 - r3
#' \item \strong{hy3_1:} hy3[_n-1]
#' \item \strong{hy6_1:} hy6[_n-1]
#' \item \strong{spr63_1:} spr63[_n-1]
#' \item \strong{hy6hy3_1:} hy6 - hy3_1
#' \item \strong{cr3:} r3 - r3_1
#' \item \strong{r3_1:} r3[_n-1]
#' \item \strong{chy6:} hy6 - hy6_1
#' \item \strong{chy3:} hy3 - hy3_1
#' \item \strong{chy6_1:} chy6[_n-1]
#' \item \strong{chy3_1:} chy3[_n-1]
#' \item \strong{cr6:} r6 - r6_1
#' \item \strong{cr6_1:} cr6[_n-1]
#' \item \strong{cr3_1:} cr3[_n-1]
#' \item \strong{r6_1:} r6[_n-1]
#' \item \strong{cspr63:} spr63 - spr63_1
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(intqrt)
"intqrt"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/intqrt.R
#' inven
#'
#' Wooldridge Source: Economic Report of the President, 1997, Tables B-4, B-20, B-61, and B-71. Data loads lazily.
#'
#' @section
#'
#' Used in Text: pages 408, 444, 643, 830
#'
#' @docType data
#'
#' @usage data('inven')
#'
#' @format A data.frame with 37 observations on 13 variables:
#' \itemize{
#' \item \strong{year:} 1959-1995
#' \item \strong{i3:} 3 mo. T-bill rate
#' \item \strong{inf:} CPI inflation rate
#' \item \strong{inven:} inventories, billions '92 $
#' \item \strong{gdp:} GDP, billions '92 $
#' \item \strong{r3:} real interest: i3 - inf
#' \item \strong{cinven:} inven - inven[_n-1]
#' \item \strong{cgdp:} gdp - gdp[_n-1]
#' \item \strong{cr3:} r3 - r3[_n-1]
#' \item \strong{ci3:} i3 - i3[_n-1]
#' \item \strong{cinf:} inf - inf[_n-1]
#' \item \strong{ginven:} log(inven) - log(inven[_n-1])
#' \item \strong{ggdp:} log(gdp) - log(gdp[_n-1])
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(inven)
"inven"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/inven.R
#' jtrain
#'
#' Wooldridge Source: H. Holzer, R. Block, M. Cheatham, and J. Knott (1993), “Are Training Subsidies Effective? The Michigan Experience,” Industrial and Labor Relations Review 46, 625-636. The authors kindly provided the data. Data loads lazily.
#'
#' @section
#'
#' Used in Text: pages 137, 161, 233, 254, 339, 465-466, 479, 486-487, 492, 504, 541-542, 774-775, 786-787, 788, 819.
#'
#' @docType data
#'
#' @usage data('jtrain')
#'
#' @format A data.frame with 471 observations on 30 variables:
#' \itemize{
#' \item \strong{year:} 1987, 1988, or 1989
#' \item \strong{fcode:} firm code number
#' \item \strong{employ:} # employees at plant
#' \item \strong{sales:} annual sales, $
#' \item \strong{avgsal:} average employee salary
#' \item \strong{scrap:} scrap rate (per 100 items)
#' \item \strong{rework:} rework rate (per 100 items)
#' \item \strong{tothrs:} total hours training
#' \item \strong{union:} =1 if unionized
#' \item \strong{grant:} = 1 if received grant
#' \item \strong{d89:} = 1 if year = 1989
#' \item \strong{d88:} = 1 if year = 1988
#' \item \strong{totrain:} total employees trained
#' \item \strong{hrsemp:} tothrs/totrain
#' \item \strong{lscrap:} log(scrap)
#' \item \strong{lemploy:} log(employ)
#' \item \strong{lsales:} log(sales)
#' \item \strong{lrework:} log(rework)
#' \item \strong{lhrsemp:} log(1 + hrsemp)
#' \item \strong{lscrap_1:} lagged lscrap; missing 1987
#' \item \strong{grant_1:} lagged grant; assumed 0 in 1987
#' \item \strong{clscrap:} lscrap - lscrap_1; year > 1987
#' \item \strong{cgrant:} grant - grant_1
#' \item \strong{clemploy:} lemploy - lemploy[_n-1]
#' \item \strong{clsales:} lsales - lsales[_n-1]
#' \item \strong{lavgsal:} log(avgsal)
#' \item \strong{clavgsal:} lavgsal - lavgsal[_n-1]
#' \item \strong{cgrant_1:} cgrant[_n-1]
#' \item \strong{chrsemp:} hrsemp - hrsemp[_n-1]
#' \item \strong{clhrsemp:} lhrsemp - lhrsemp[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(jtrain)
"jtrain"
# /scratch/gouwar.j/cran-all/cranData/wooldridge/R/jtrain.R
#' jtrain2
#'
#' Wooldridge Source: R.J. Lalonde (1986), “Evaluating the Econometric Evaluations of Training Programs with Experimental Data,” American Economic Review 76, 604-620. Professor Jeff Biddle, at MSU, kindly passed the data set along to me. He obtained it from Professor Lalonde. Data loads lazily.
#'
#' @section Notes: Professor Lalonde obtained the data from the National Supported Work Demonstration job-training program conducted by the Manpower Demonstration Research Corporation in the mid 1970s. Training status was randomly assigned, so this is essentially experimental data. Computer Exercise C17.8 looks only at the effects of training on subsequent unemployment probabilities. For illustrating the more advanced methods in Chapter 17, a good exercise would be to have the students estimate a Tobit of re78 on train, and obtain estimates of the expected values for those with and without training. These can be compared with the sample averages.
#'
#' Used in Text: pages 18, 340-341, 626
#'
#' @docType data
#'
#' @usage data('jtrain2')
#'
#' @format A data.frame with 445 observations on 19 variables:
#' \itemize{
#' \item \strong{train:} =1 if assigned to job training
#' \item \strong{age:} age in 1977
#' \item \strong{educ:} years of education
#' \item \strong{black:} =1 if black
#' \item \strong{hisp:} =1 if Hispanic
#' \item \strong{married:} =1 if married
#' \item \strong{nodegree:} =1 if no high school degree
#' \item \strong{mosinex:} # mnths prior to 1/78 in expmnt
#' \item \strong{re74:} real earns., 1974, $1000s
#' \item \strong{re75:} real earns., 1975, $1000s
#' \item \strong{re78:} real earns., 1978, $1000s
#' \item \strong{unem74:} =1 if unem. all of 1974
#' \item \strong{unem75:} =1 if unem. all of 1975
#' \item \strong{unem78:} =1 if unem. all of 1978
#' \item \strong{lre74:} log(re74); zero if re74 == 0
#' \item \strong{lre75:} log(re75); zero if re75 == 0
#' \item \strong{lre78:} log(re78); zero if re78 == 0
#' \item \strong{agesq:} age^2
#' \item \strong{mostrn:} months in training
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(jtrain2)
"jtrain2"
#' jtrain3
#'
#' Wooldridge Source: R.H. Dehejia and S. Wahba (1999), “Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs,” Journal of the American Statistical Association 94, 1053-1062. Professor Sergio Firpo, at the University of British Columbia, has used this data set in his recent work, and he kindly provided it to me. This data set is a subset of that originally used by Lalonde in the study cited for JTRAIN2.RAW. Data loads lazily.
#'
#' Used in Text: pages 340-341, 480-481
#'
#' @docType data
#'
#' @usage data('jtrain3')
#'
#' @format A data.frame with 2675 observations on 20 variables:
#' \itemize{
#' \item \strong{train:} =1 if in job training
#' \item \strong{age:} in years, 1977
#' \item \strong{educ:} years of schooling
#' \item \strong{black:} =1 if black
#' \item \strong{hisp:} =1 if Hispanic
#' \item \strong{married:} =1 if married
#' \item \strong{re74:} '74 earnings, $1000s '82
#' \item \strong{re75:} '75 earnings, $1000s '82
#' \item \strong{unem75:} =1 if unem. all of '75
#' \item \strong{unem74:} =1 if unem. all of '74
#' \item \strong{re78:} '78 earnings, $1000s '82
#' \item \strong{agesq:} age^2
#' \item \strong{trre74:} train*re74
#' \item \strong{trre75:} train*re75
#' \item \strong{trun74:} train*unem74
#' \item \strong{trun75:} train*unem75
#' \item \strong{avgre:} (re74 + re75)/2
#' \item \strong{travgre:} train*avgre
#' \item \strong{unem78:} =1 if unem. all of '78
#' \item \strong{em78:} 1 - unem78
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(jtrain3)
"jtrain3"
#' jtrain98
#'
#' Wooldridge Source: This is a data set I created many years ago intended as an update to the files JTRAIN2 and JTRAIN3. While the data were partly generated by me, the data attributes are similar to data sets used to evaluate job training programs. Data loads lazily.
#'
#' @section Notes: The response variables, earn98 and unem98, both have discreteness: the former is a corner solution (it takes on the value zero and then a range of strictly positive values) and the latter is binary. One could use these in an exercise applying the methods in Chapter 17. unem98 can be used in a probit or logit model, and earn98 in a Tobit model or in Poisson regression (without assuming, of course, that the Poisson distribution is correct).
#'
#' Used in Text: pages 101-102, 248, 601
#'
#' @docType data
#'
#' @usage data('jtrain98')
#'
#' @format A data.frame with 1130 observations on 10 variables:
#' \itemize{
#' \item \strong{train:} =1 if in job training
#' \item \strong{age:} in years
#' \item \strong{educ:} years of schooling
#' \item \strong{black:} =1 if black
#' \item \strong{hisp:} =1 if Hispanic
#' \item \strong{married:} =1 if married
#' \item \strong{earn96:} earnings in 1996, $1000s
#' \item \strong{unem96:} =1 if unemployed all of 1996
#' \item \strong{earn98:} earnings in 1998, $1000s
#' \item \strong{unem98:} =1 if unemployed all of 1998
#' }
#' @source \url{http://www.cengage.com/c/introductory-econometrics-a-modern-approach-7e-wooldridge}
#' @examples str(jtrain98)
"jtrain98"
#' k401k
#'
#' Wooldridge Source: L.E. Papke (1995), “Participation in and Contributions to 401(k) Pension Plans:Evidence from Plan Data,” Journal of Human Resources 30, 311-325. Professor Papke kindly provided these data. She gathered them from the Internal Revenue Service’s Form 5500 tapes. Data loads lazily.
#'
#' @section Notes: This data set is used in a variety of ways in the text. One additional possibility is to investigate whether the coefficients from the regression of prate on mrate, log(totemp) differ by whether the plan is a sole plan. The Chow test (see Section 7.4), and the less restrictive version that allows different intercepts, can be used.
#'
#' Used in Text: pages 63, 79, 136, 174, 219, 692
#'
#' @docType data
#'
#' @usage data('k401k')
#'
#' @format A data.frame with 1534 observations on 8 variables:
#' \itemize{
#' \item \strong{prate:} participation rate, percent
#' \item \strong{mrate:} 401k plan match rate
#' \item \strong{totpart:} total 401k participants
#' \item \strong{totelg:} total eligible for 401k plan
#' \item \strong{age:} age of 401k plan
#' \item \strong{totemp:} total number of firm employees
#' \item \strong{sole:} = 1 if 401k is firm's sole plan
#' \item \strong{ltotemp:} log of totemp
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(k401k)
"k401k"
#' k401ksubs
#'
#' Wooldridge Source: A. Abadie (2003), “Semiparametric Instrumental Variable Estimation of Treatment Response Models,” Journal of Econometrics 113, 231-263. Professor Abadie kindly provided these data. He obtained them from the 1991 Survey of Income and Program Participation (SIPP). Data loads lazily.
#'
#' @section Notes: This data set can also be used to illustrate the binary response models, probit and logit, in Chapter 17, where, say, pira (an indicator for having an individual retirement account) is the dependent variable, and e401k [the 401(k) eligibility indicator] is the key explanatory variable.
#'
#' Used in Text: pages 166, 174, 223, 264, 283, 301-302, 340, 549
#'
#' @docType data
#'
#' @usage data('k401ksubs')
#'
#' @format A data.frame with 9275 observations on 11 variables:
#' \itemize{
#' \item \strong{e401k:} =1 if eligble for 401(k)
#' \item \strong{inc:} annual income, $1000s
#' \item \strong{marr:} =1 if married
#' \item \strong{male:} =1 if male respondent
#' \item \strong{age:} in years
#' \item \strong{fsize:} family size
#' \item \strong{nettfa:} net total fin. assets, $1000
#' \item \strong{p401k:} =1 if participate in 401(k)
#' \item \strong{pira:} =1 if have IRA
#' \item \strong{incsq:} inc^2
#' \item \strong{agesq:} age^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(k401ksubs)
"k401ksubs"
#' kielmc
#'
#' Wooldridge Source: K.A. Kiel and K.T. McClain (1995), “House Prices During Siting Decision Stages: The Case of an Incinerator from Rumor Through Operation,” Journal of Environmental Economics and Management 28, 241-255. Professor McClain kindly provided the data, of which I used only a subset. Data loads lazily.
#'
#' Used in Text: pages 220, 454-457, 475, 477
#'
#' @docType data
#'
#' @usage data('kielmc')
#'
#' @format A data.frame with 321 observations on 25 variables:
#' \itemize{
#' \item \strong{year:} 1978 or 1981
#' \item \strong{age:} age of house
#' \item \strong{agesq:} age^2
#' \item \strong{nbh:} neighborhood, 1-6
#' \item \strong{cbd:} dist. to cent. bus. dstrct, ft.
#' \item \strong{intst:} dist. to interstate, ft.
#' \item \strong{lintst:} log(intst)
#' \item \strong{price:} selling price
#' \item \strong{rooms:} # rooms in house
#' \item \strong{area:} square footage of house
#' \item \strong{land:} square footage lot
#' \item \strong{baths:} # bathrooms
#' \item \strong{dist:} dist. from house to incin., ft.
#' \item \strong{ldist:} log(dist)
#' \item \strong{wind:} prc. time wind incin. to house
#' \item \strong{lprice:} log(price)
#' \item \strong{y81:} =1 if year == 1981
#' \item \strong{larea:} log(area)
#' \item \strong{lland:} log(land)
#' \item \strong{y81ldist:} y81*ldist
#' \item \strong{lintstsq:} lintst^2
#' \item \strong{nearinc:} =1 if dist <= 15840
#' \item \strong{y81nrinc:} y81*nearinc
#' \item \strong{rprice:} price, 1978 dollars
#' \item \strong{lrprice:} log(rprice)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(kielmc)
"kielmc"
#' labsup
#'
#' Wooldridge Source: The subset of data for black or Hispanic women used in J.A. Angrist and W.E. Evans (1998), “Children and Their Parents' Labor Supply: Evidence from Exogenous Variation in Family Size,” American Economic Review 88, 450-477. Data loads lazily.
#'
#' @section Notes: This example can promote an interesting discussion of instrument validity, and in particular, how a variable that is beyond our control – for example, whether the first two children have the same gender – can, nevertheless, affect subsequent economic choices. Students are asked to think about such issues in Computer Exercise C13 in Chapter 15. A more egregious version of this mistake would be to treat a variable such as age as a suitable instrument because it is beyond our control: clearly age has a direct effect on many economic outcomes that would play the role of the dependent variable.
#'
#' Used in Text: pages 530-531
#'
#' @docType data
#'
#' @usage data('labsup')
#'
#' @format A data.frame with 31857 observations on 20 variables:
#' \itemize{
#' \item \strong{kids:} number of kids
#' \item \strong{morekids:} had more than 2 kids
#' \item \strong{boys2:} first two births boys
#' \item \strong{girls2:} first two births girls
#' \item \strong{boy1st:} first birth boy
#' \item \strong{boy2nd:} second birth boy
#' \item \strong{samesex:} first two kids are of same sex
#' \item \strong{multi2nd:} =1 if 2nd birth is twin
#' \item \strong{age:} age of mom
#' \item \strong{agefstm:} age of mom at first birth
#' \item \strong{black:} =1 if black
#' \item \strong{hispan:} =1 if hispanic
#' \item \strong{worked:} mom worked last year
#' \item \strong{weeks:} weeks worked, mom
#' \item \strong{hours:} hours of work per week, mom
#' \item \strong{labinc:} mom's labor income, $1000s
#' \item \strong{faminc:} family income, $1000s
#' \item \strong{nonmomi:} 'non-mom' income, $1000s
#' \item \strong{educ:} mom's years of education
#' \item \strong{agesq:} age^2
#' }
#' @source \url{http://www.cengage.com/c/introductory-econometrics-a-modern-approach-7e-wooldridge}
#' @examples str(labsup)
"labsup"
#' lawsch85
#'
#' Wooldridge Source: Collected by Kelly Barnett, an MSU economics student, for use in a term project. The data come from two sources: The Official Guide to U.S. Law Schools, 1986, Law School Admission Services, and The Gourman Report: A Ranking of Graduate and Professional Programs in American and International Universities, 1995, Washington, D.C. Data loads lazily.
#'
#' @section Notes: More recent versions of both cited documents are available. One could try a similar analysis for, say, MBA programs or Ph.D. programs in economics. Quality of placements may be a good dependent variable, and measures of business school or graduate program quality could be included among the explanatory variables. Of course, one would want to control for factors describing the incoming class so as to isolate the effect of the program itself.
#'
#' Used in Text: pages 107, 164-165, 239
#'
#' @docType data
#'
#' @usage data('lawsch85')
#'
#' @format A data.frame with 156 observations on 21 variables:
#' \itemize{
#' \item \strong{rank:} law school ranking
#' \item \strong{salary:} median starting salary
#' \item \strong{cost:} law school cost
#' \item \strong{LSAT:} median LSAT score
#' \item \strong{GPA:} median college GPA
#' \item \strong{libvol:} no. volumes in lib., 1000s
#' \item \strong{faculty:} no. of faculty
#' \item \strong{age:} age of law sch., years
#' \item \strong{clsize:} size of entering class
#' \item \strong{north:} =1 if law sch in north
#' \item \strong{south:} =1 if law sch in south
#' \item \strong{east:} =1 if law sch in east
#' \item \strong{west:} =1 if law sch in west
#' \item \strong{lsalary:} log(salary)
#' \item \strong{studfac:} student-faculty ratio
#' \item \strong{top10:} =1 if ranked in top 10
#' \item \strong{r11_25:} =1 if ranked 11-25
#' \item \strong{r26_40:} =1 if ranked 26-40
#' \item \strong{r41_60:} =1 if ranked 41-60
#' \item \strong{llibvol:} log(libvol)
#' \item \strong{lcost:} log(cost)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(lawsch85)
"lawsch85"
#' loanapp
#'
#' Wooldridge Source: W.C. Hunter and M.B. Walker (1996), “The Cultural Affinity Hypothesis and Mortgage Lending Decisions,” Journal of Real Estate Finance and Economics 13, 57-70. Professor Walker kindly provided the data. Data loads lazily.
#'
#' @section Notes: These data were originally used in a famous study by researchers at the Boston Federal Reserve Bank. See A. Munnell, G.M.B. Tootell, L.E. Browne, and J. McEneaney (1996), “Mortgage Lending in Boston: Interpreting HMDA Data,” American Economic Review 86, 25-53.
#'
#' Used in Text: pages 263-264, 300, 339-340, 624
#'
#' @docType data
#'
#' @usage data('loanapp')
#'
#' @format A data.frame with 1989 observations on 59 variables:
#' \itemize{
#' \item \strong{occ:} occupancy
#' \item \strong{loanamt:} loan amt in thousands
#' \item \strong{action:} type of action taken
#' \item \strong{msa:} msa number of property
#' \item \strong{suffolk:} =1 if property in suffolk co.
#' \item \strong{appinc:} applicant income, $1000s
#' \item \strong{typur:} type of purchaser of loan
#' \item \strong{unit:} number of units in property
#' \item \strong{married:} =1 if applicant married
#' \item \strong{dep:} number of dependents
#' \item \strong{emp:} years employed in line of work
#' \item \strong{yjob:} years at this job
#' \item \strong{self:} =1 if self employed
#' \item \strong{atotinc:} total monthly income
#' \item \strong{cototinc:} coapp total monthly income
#' \item \strong{hexp:} proposed housing expense
#' \item \strong{price:} purchase price
#' \item \strong{other:} other financing, $1000s
#' \item \strong{liq:} liquid assets
#' \item \strong{rep:} no. of credit reports
#' \item \strong{gdlin:} credit history meets guidelines
#' \item \strong{lines:} no. of credit lines on reports
#' \item \strong{mortg:} credit history on mortgage payments
#' \item \strong{cons:} credit history on consumer stuff
#' \item \strong{pubrec:} =1 if filed bankruptcy
#' \item \strong{hrat:} housing exp, percent total inc
#' \item \strong{obrat:} other oblgs, percent total inc
#' \item \strong{fixadj:} fixed or adjustable rate?
#' \item \strong{term:} term of loan in months
#' \item \strong{apr:} appraised value
#' \item \strong{prop:} type of property
#' \item \strong{inss:} PMI sought
#' \item \strong{inson:} PMI approved
#' \item \strong{gift:} gift as down payment
#' \item \strong{cosign:} is there a cosigner
#' \item \strong{unver:} unverifiable info
#' \item \strong{review:} number of times reviewed
#' \item \strong{netw:} net worth
#' \item \strong{unem:} unemployment rate by industry
#' \item \strong{min30:} =1 if minority pop. > 30 percent
#' \item \strong{bd:} =1 if boarded-up val > MSA med
#' \item \strong{mi:} =1 if tract inc > MSA median
#' \item \strong{old:} =1 if applic age > MSA median
#' \item \strong{vr:} =1 if tract vac rte > MSA med
#' \item \strong{sch:} =1 if > 12 years schooling
#' \item \strong{black:} =1 if applicant black
#' \item \strong{hispan:} =1 if applicant Hispanic
#' \item \strong{male:} =1 if applicant male
#' \item \strong{reject:} =1 if action == 3
#' \item \strong{approve:} =1 if action == 1 or 2
#' \item \strong{mortno:} no mortgage history
#' \item \strong{mortperf:} no late mort. payments
#' \item \strong{mortlat1:} one or two late payments
#' \item \strong{mortlat2:} > 2 late payments
#' \item \strong{chist:} =0 if accnts deliq. >= 60 days
#' \item \strong{multi:} =1 if two or more units
#' \item \strong{loanprc:} amt/price
#' \item \strong{thick:} =1 if rep > 2
#' \item \strong{white:} =1 if applicant white
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(loanapp)
"loanapp"
#' lowbrth
#'
#' Wooldridge Source: Statistical Abstract of the United States, 1990, 1993, and 1994. Data loads lazily.
#'
#' @section Notes: This data set can be used very much like INFMRT.RAW. It contains two years of state-level panel data. In fact, it is a superset of INFMRT.RAW. The key is that it contains information on low birth weights, as well as infant mortality. It also contains state identifiers, so that several years of more recent data could be added for a term project. Putting in the variable afdcprc and its square leads to some interesting findings for pooled OLS and fixed effects (first differencing). After differencing, you can even try using the change in the AFDC payments variable as an instrumental variable for the change in afdcprc.
#'
#' Used in Text: not used
#'
#' @docType data
#'
#' @usage data('lowbrth')
#'
#' @format A data.frame with 100 observations on 36 variables:
#' \itemize{
#' \item \strong{year:} 1987 or 1990
#' \item \strong{lowbrth:} perc births low weight
#' \item \strong{infmort:} infant mortality rate
#' \item \strong{afdcprt:} # participants in AFDC, 1000s
#' \item \strong{popul:} population, 1000s
#' \item \strong{pcinc:} per capita income
#' \item \strong{physic:} # physicians, 1000s
#' \item \strong{afdcprc:} percent of pop in AFDC
#' \item \strong{d90:} =1 if year == 1990
#' \item \strong{lpcinc:} log of pcinc
#' \item \strong{cafdcprc:} change in afdcprc
#' \item \strong{clpcinc:} change in lpcinc
#' \item \strong{lphysic:} log of physic
#' \item \strong{clphysic:} change in lphysic
#' \item \strong{clowbrth:} change in lowbrth
#' \item \strong{cinfmort:} change in infmort
#' \item \strong{afdcpay:} avg monthly AFDC payment
#' \item \strong{afdcinc:} afdcpay as percent pcinc
#' \item \strong{lafdcpay:} log of afdcpay
#' \item \strong{clafdcpy:} change in lafdcpay
#' \item \strong{cafdcinc:} change in afdcinc
#' \item \strong{stateabb:} state postal code
#' \item \strong{state:} name of state
#' \item \strong{beds:} # hospital beds, 1000s
#' \item \strong{bedspc:} beds per capita
#' \item \strong{lbedspc:} log(bedspc)
#' \item \strong{clbedspc:} change in lbedspc
#' \item \strong{povrate:} percent people below poverty line
#' \item \strong{cpovrate:} change in povrate
#' \item \strong{afdcpsq:} afdcper^2
#' \item \strong{cafdcpsq:} change in afdcpsq
#' \item \strong{physicpc:} physicians per capita
#' \item \strong{lphypc:} log(physicpc)
#' \item \strong{clphypc:} change in lphypc
#' \item \strong{lpopul:} log(popul)
#' \item \strong{clpopul:} change in lpopul
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(lowbrth)
"lowbrth"
#' mathpnl
#'
#' Wooldridge Source: Leslie Papke, an economics professor at MSU, collected these data from Michigan Department of Education web site, www.michigan.gov/mde. These are district-level data, which Professor Papke kindly provided. She has used building-level data in “The Effects of Spending on Test Pass Rates: Evidence from Michigan” (2005), Journal of Public Economics 89, 821-839. Data loads lazily.
#'
#' Used in Text: pages 479-480, 505-506
#'
#' @docType data
#'
#' @usage data('mathpnl')
#'
#' @format A data.frame with 3850 observations on 52 variables:
#' \itemize{
#' \item \strong{distid:} district identifier
#' \item \strong{intid:} intermediate school district
#' \item \strong{lunch:} percent eligible for free lunch
#' \item \strong{enrol:} school enrollment
#' \item \strong{ptr:} pupil/teacher: 1995-98
#' \item \strong{found:} foundation grant, $: 1995-98
#' \item \strong{expp:} expenditure per pupil
#' \item \strong{revpp:} revenue per pupil
#' \item \strong{avgsal:} average teacher salary
#' \item \strong{drop:} high school dropout rate, percent
#' \item \strong{grad:} high school grad. rate, percent
#' \item \strong{math4:} percent satisfactory, 4th grade math
#' \item \strong{math7:} percent satisfactory, 7th grade math
#' \item \strong{choice:} number choice students
#' \item \strong{psa:} # public school academy studs.
#' \item \strong{year:} 1992-1998
#' \item \strong{staff:} staff per 1000 students
#' \item \strong{avgben:} avg teacher fringe benefits
#' \item \strong{y92:} =1 if year == 1992
#' \item \strong{y93:} =1 if year == 1993
#' \item \strong{y94:} =1 if year == 1994
#' \item \strong{y95:} =1 if year == 1995
#' \item \strong{y96:} =1 if year == 1996
#' \item \strong{y97:} =1 if year == 1997
#' \item \strong{y98:} =1 if year == 1998
#' \item \strong{lexpp:} log(expp)
#' \item \strong{lfound:} log(found)
#' \item \strong{lexpp_1:} lexpp[_n-1]
#' \item \strong{lfnd_1:} lfnd[_n-1]
#' \item \strong{lenrol:} log(enrol)
#' \item \strong{lenrolsq:} lenrol^2
#' \item \strong{lunchsq:} lunch^2
#' \item \strong{lfndsq:} lfnd^2
#' \item \strong{math4_1:} math4[_n-1]
#' \item \strong{cmath4:} math4 - math4_1
#' \item \strong{gexpp:} lexpp - lexpp_1
#' \item \strong{gexpp_1:} gexpp[_n-1]
#' \item \strong{gfound:} lfound - lfnd_1
#' \item \strong{gfnd_1:} gfound[_n-1]
#' \item \strong{clunch:} lunch - lunch[_n-1]
#' \item \strong{clnchsq:} lunchsq - lunchsq[_n-1]
#' \item \strong{genrol:} lenrol - lenrol[_n-1]
#' \item \strong{genrolsq:} genrol^2
#' \item \strong{expp92:} expp in 1992
#' \item \strong{lexpp92:} log(expp92)
#' \item \strong{math4_92:} math4 in 1992
#' \item \strong{cpi:} consumer price index
#' \item \strong{rexpp:} real spending per pupil, 1997$
#' \item \strong{lrexpp:} log(rexpp)
#' \item \strong{lrexpp_1:} lrexpp[_n-1]
#' \item \strong{grexpp:} lrexpp - lrexpp_1
#' \item \strong{grexpp_1:} grexpp[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(mathpnl)
"mathpnl"
#' meap00_01
#'
#' Wooldridge Source: Michigan Department of Education, www.michigan.gov/mde Data loads lazily.
#'
#' Used in Text: pages 224, 302
#'
#' @docType data
#'
#' @usage data('meap00_01')
#'
#' @format A data.frame with 1692 observations on 9 variables:
#' \itemize{
#' \item \strong{dcode:} district code
#' \item \strong{bcode:} building code
#' \item \strong{math4:} percent students satisfactory, 4th grade math
#' \item \strong{read4:} percent students satisfactory, 4th grade reading
#' \item \strong{lunch:} percent students eligible for free or reduced lunch
#' \item \strong{enroll:} school enrollment
#' \item \strong{exppp:} expenditures per pupil: expend/enroll
#' \item \strong{lenroll:} log(enroll)
#' \item \strong{lexppp:} log(exppp)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(meap00_01)
"meap00_01"
#' meap01
#'
#' Wooldridge Source: Michigan Department of Education, www.michigan.gov/mde Data loads lazily.
#'
#' @section Notes: This is another good data set to compare simple and multiple regression estimates. The expenditure variable (in logs, say) and the poverty measure (lunch) are negatively correlated in this data set. A simple regression of math4 on lexppp gives a negative coefficient. Controlling for lunch makes the spending coefficient positive and significant.
#'
#' Used in Text: page 18
#'
#' @docType data
#'
#' @usage data('meap01')
#'
#' @format A data.frame with 1823 observations on 11 variables:
#' \itemize{
#' \item \strong{dcode:} district code
#' \item \strong{bcode:} building code
#' \item \strong{math4:} percent students satisfactory, 4th grade math
#' \item \strong{read4:} percent students satisfactory, 4th grade reading
#' \item \strong{lunch:} percent students eligible for free or reduced lunch
#' \item \strong{enroll:} school enrollment
#' \item \strong{expend:} total spending, $
#' \item \strong{exppp:} expenditures per pupil: expend/enroll
#' \item \strong{lenroll:} log(enroll)
#' \item \strong{lexpend:} log(expend)
#' \item \strong{lexppp:} log(exppp)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(meap01)
"meap01"
#' meap93
#'
#' Wooldridge Source: I collected these data from the old Michigan Department of Education web site. See MATHPNL.RAW for the current web site. I used data on most high schools in the state of Michigan for 1993. I dropped some high schools that had suspicious-looking data. Data loads lazily.
#'
#' @section Notes: Many states have data, at either the district or building level, on student performance and spending. A good exercise in data collection and cleaning is to have students find such data for a particular state, and to put it into a form that can be used for econometric analysis.
#'
#' Used in Text: pages 50, 65, 111-112, 127-128, 155-156, 219, 336, 339, 696-697
#'
#' @docType data
#'
#' @usage data('meap93')
#'
#' @format A data.frame with 408 observations on 17 variables:
#' \itemize{
#' \item \strong{lnchprg:} perc of studs in sch lnch prog
#' \item \strong{enroll:} school enrollment
#' \item \strong{staff:} staff per 1000 students
#' \item \strong{expend:} expend. per stud, $
#' \item \strong{salary:} avg. teacher salary, $
#' \item \strong{benefits:} avg. teacher benefits, $
#' \item \strong{droprate:} school dropout rate, perc
#' \item \strong{gradrate:} school graduation rate, perc
#' \item \strong{math10:} perc studs passing MEAP math
#' \item \strong{sci11:} perc studs passing MEAP science
#' \item \strong{totcomp:} salary + benefits
#' \item \strong{ltotcomp:} log(totcomp)
#' \item \strong{lexpend:} log of expend
#' \item \strong{lenroll:} log(enroll)
#' \item \strong{lstaff:} log(staff)
#' \item \strong{bensal:} benefits/salary
#' \item \strong{lsalary:} log(salary)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(meap93)
"meap93"
#' meapsingle
#'
#' Wooldridge Source: Collected by Professor Leslie Papke, an economics professor at MSU, from the Michigan Department of Education web site, www.michigan.gov/mde, and the U.S. Census Bureau. Professor Papke kindly provided the data. Data loads lazily.
#'
#' Used in Text: pages 100, 145-146, 198
#'
#' @docType data
#'
#' @usage data('meapsingle')
#'
#' @format A data.frame with 229 observations on 18 variables:
#' \itemize{
#' \item \strong{dcode:} district code
#' \item \strong{bcode:} building code
#' \item \strong{math4:} percent satisfactory, 4th grade math
#' \item \strong{read4:} percent satisfactory, 4th grade reading
#' \item \strong{enroll:} school enrollment
#' \item \strong{exppp:} expenditures per pupil, $
#' \item \strong{free:} percent eligible, free lunch
#' \item \strong{reduced:} percent eligible, reduced lunch
#' \item \strong{lunch:} free + reduced
#' \item \strong{medinc:} zipcode median family, $ (1999)
#' \item \strong{totchild:} # of children (in zipcode)
#' \item \strong{married:} # of children in married-couple families
#' \item \strong{single:} # of children not in married-couple families
#' \item \strong{pctsgle:} percent of children not in married-couple families
#' \item \strong{zipcode:} school zipcode
#' \item \strong{lenroll:} log(enroll)
#' \item \strong{lexppp:} log(exppp)
#' \item \strong{lmedinc:} log(medinc)
#' }
#' @source \url{http://www.cengage.com/c/introductory-econometrics-a-modern-approach-6e-wooldridge}
#' @examples str(meapsingle)
"meapsingle"
#' minwage
#'
#' Wooldridge Source: P. Wolfson and D. Belman (2004), “The Minimum Wage: Consequences for Prices and Quantities in Low-Wage Labor Markets,” Journal of Business & Economic Statistics 22, 296-311. Professor Belman kindly provided the data. Data loads lazily.
#'
#' @section Notes: The sectors corresponding to the different numbers in the data file are provided in the Wolfson and Belman article.
#'
#' Used in Text: pages 379, 410, 444-445, 674-675
#'
#' @docType data
#'
#' @usage data('minwage')
#'
#' @format A data.frame with 612 observations on 58 variables:
#' \itemize{
#' \item \strong{emp232:} employment, sector 232, 1000s
#' \item \strong{wage232:} hourly wage, sector 232, $
#' \item \strong{emp236:} employment, sector 236, 1000s
#' \item \strong{wage236:} hourly wage, sector 236, $
#' \item \strong{emp234:} employment, sector 234, 1000s
#' \item \strong{wage234:} hourly wage, sector 234, $
#' \item \strong{emp314:} employment, sector 314, 1000s
#' \item \strong{wage314:} hourly wage, sector 314, $
#' \item \strong{emp228:} employment, sector 228, 1000s
#' \item \strong{wage228:} hourly wage, sector 228, $
#' \item \strong{emp233:} employment, sector 233, 1000s
#' \item \strong{wage233:} hourly wage, sector 233, $
#' \item \strong{emp394:} employment, sector 394, 1000s
#' \item \strong{wage394:} hourly wage, sector 394, $
#' \item \strong{emp231:} employment, sector 231, 1000s
#' \item \strong{wage231:} hourly wage, sector 231, $
#' \item \strong{emp226:} employment, sector 226, 1000s
#' \item \strong{wage226:} hourly wage, sector 226, $
#' \item \strong{emp387:} employment, sector 387, 1000s
#' \item \strong{wage387:} hourly wage, sector 387, $
#' \item \strong{emp056:} employment, sector 056, 1000s
#' \item \strong{wage056:} hourly wage, sector 056, $
#' \item \strong{unem:} civilian unemployment rate, percent
#' \item \strong{cpi:} Consumer Price Index (urban), 1982-1984 = 100
#' \item \strong{minwage:} Federal minimum wage, $/hour
#' \item \strong{lemp232:} log(emp232)
#' \item \strong{lwage232:} log(wage232)
#' \item \strong{gemp232:} lemp232 - lemp232[_n-1]
#' \item \strong{gwage232:} lwage232 - lwage232[_n-1]
#' \item \strong{lminwage:} log(minwage)
#' \item \strong{gmwage:} lminwage - lminwage[_n-1]
#' \item \strong{gmwage_1:} gmwage[_n-1]
#' \item \strong{gmwage_2:} gmwage[_n-2]
#' \item \strong{gmwage_3:} gmwage[_n-3]
#' \item \strong{gmwage_4:} gmwage[_n-4]
#' \item \strong{gmwage_5:} gmwage[_n-5]
#' \item \strong{gmwage_6:} gmwage[_n-6]
#' \item \strong{gmwage_7:} gmwage[_n-7]
#' \item \strong{gmwage_8:} gmwage[_n-8]
#' \item \strong{gmwage_9:} gmwage[_n-9]
#' \item \strong{gmwage_10:} gmwage[_n-10]
#' \item \strong{gmwage_11:} gmwage[_n-11]
#' \item \strong{gmwage_12:} gmwage[_n-12]
#' \item \strong{lemp236:} log(emp236)
#' \item \strong{gcpi:} lcpi - lcpi[_n-1]
#' \item \strong{lcpi:} log(cpi)
#' \item \strong{lwage236:} log(wage236)
#' \item \strong{gemp236:} lemp236 - lemp236[_n-1]
#' \item \strong{gwage236:} lwage236 - lwage236[_n-1]
#' \item \strong{lemp234:} log(emp234)
#' \item \strong{lwage234:} log(wage234)
#' \item \strong{gemp234:} lemp234 - lemp234[_n-1]
#' \item \strong{gwage234:} lwage234 - lwage234[_n-1]
#' \item \strong{lemp314:} log(emp314)
#' \item \strong{lwage314:} log(wage314)
#' \item \strong{gemp314:} lemp314 - lemp314[_n-1]
#' \item \strong{gwage314:} lwage314 - lwage314[_n-1]
#' \item \strong{t:} linear time trend, 1 to 612
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(minwage)
"minwage"
#' mlb1
#'
#' Wooldridge Source: Collected by G. Mark Holmes, a former MSU undergraduate, for a term project. The salary data were obtained from the New York Times, April 11, 1993. The baseball statistics are from The Baseball Encyclopedia, 9th edition, and the city population figures are from the Statistical Abstract of the United States. Data loads lazily.
#'
#' @section Notes: The baseball statistics are career statistics through the 1992 season. Players whose race or ethnicity could not be easily determined were not included. It should not be too difficult to obtain the city population and racial composition numbers for Montreal and Toronto for 1993. Of course, the data can be pretty easily obtained for more recent players.
#'
#' Used in Text: pages 143-149, 165, 244-245, 262
#'
#' @docType data
#'
#' @usage data('mlb1')
#'
#' @format A data.frame with 353 observations on 47 variables:
#' \itemize{
#' \item \strong{salary:} 1993 season salary
#' \item \strong{teamsal:} team payroll
#' \item \strong{nl:} =1 if national league
#' \item \strong{years:} years in major leagues
#' \item \strong{games:} career games played
#' \item \strong{atbats:} career at bats
#' \item \strong{runs:} career runs scored
#' \item \strong{hits:} career hits
#' \item \strong{doubles:} career doubles
#' \item \strong{triples:} career triples
#' \item \strong{hruns:} career home runs
#' \item \strong{rbis:} career runs batted in
#' \item \strong{bavg:} career batting average
#' \item \strong{bb:} career walks
#' \item \strong{so:} career strike outs
#' \item \strong{sbases:} career stolen bases
#' \item \strong{fldperc:} career fielding perc
#' \item \strong{frstbase:} = 1 if first base
#' \item \strong{scndbase:} =1 if second base
#' \item \strong{shrtstop:} =1 if shortstop
#' \item \strong{thrdbase:} =1 if third base
#' \item \strong{outfield:} =1 if outfield
#' \item \strong{catcher:} =1 if catcher
#' \item \strong{yrsallst:} years as all-star
#' \item \strong{hispan:} =1 if hispanic
#' \item \strong{black:} =1 if black
#' \item \strong{whitepop:} white pop. in city
#' \item \strong{blackpop:} black pop. in city
#' \item \strong{hisppop:} hispanic pop. in city
#' \item \strong{pcinc:} city per capita income
#' \item \strong{gamesyr:} games per year in league
#' \item \strong{hrunsyr:} home runs per year
#' \item \strong{atbatsyr:} at bats per year
#' \item \strong{allstar:} perc. of years an all-star
#' \item \strong{slugavg:} career slugging average
#' \item \strong{rbisyr:} rbis per year
#' \item \strong{sbasesyr:} stolen bases per year
#' \item \strong{runsyr:} runs scored per year
#' \item \strong{percwhte:} percent white in city
#' \item \strong{percblck:} percent black in city
#' \item \strong{perchisp:} percent hispanic in city
#' \item \strong{blckpb:} black*percblck
#' \item \strong{hispph:} hispan*perchisp
#' \item \strong{whtepw:} white*percwhte
#' \item \strong{blckph:} black*perchisp
#' \item \strong{hisppb:} hispan*percblck
#' \item \strong{lsalary:} log(salary)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(mlb1)
"mlb1"
#' mroz
#'
#' Wooldridge Source: T.A. Mroz (1987), “The Sensitivity of an Empirical Model of Married Women’s Hours of Work to Economic and Statistical Assumptions,” Econometrica 55, 765-799. Professor Ernst R. Berndt, of MIT, kindly provided the data, which he obtained from Professor Mroz. Data loads lazily.
#'
#' Used in Text: pages 249-251, 260, 294, 519-520, 530, 535, 535-536, 565-566, 578-579, 593-595, 601-603, 619-620, 625
#'
#' @docType data
#'
#' @usage data('mroz')
#'
#' @format A data.frame with 753 observations on 22 variables:
#' \itemize{
#' \item \strong{inlf:} =1 if in lab frce, 1975
#' \item \strong{hours:} hours worked, 1975
#' \item \strong{kidslt6:} # kids < 6 years
#' \item \strong{kidsge6:} # kids 6-18
#' \item \strong{age:} woman's age in yrs
#' \item \strong{educ:} years of schooling
#' \item \strong{wage:} est. wage from earn, hrs
#' \item \strong{repwage:} rep. wage at interview in 1976
#' \item \strong{hushrs:} hours worked by husband, 1975
#' \item \strong{husage:} husband's age
#' \item \strong{huseduc:} husband's years of schooling
#' \item \strong{huswage:} husband's hourly wage, 1975
#' \item \strong{faminc:} family income, 1975
#' \item \strong{mtr:} fed. marg. tax rte facing woman
#' \item \strong{motheduc:} mother's years of schooling
#' \item \strong{fatheduc:} father's years of schooling
#' \item \strong{unem:} unem. rate in county of resid.
#' \item \strong{city:} =1 if live in SMSA
#' \item \strong{exper:} actual labor mkt exper
#' \item \strong{nwifeinc:} (faminc - wage*hours)/1000
#' \item \strong{lwage:} log(wage)
#' \item \strong{expersq:} exper^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(mroz)
"mroz"
#' murder
#'
#' Wooldridge Source: From the Statistical Abstract of the United States, 1995 (Tables 310 and 357), 1992 (Table 289). The execution data originally come from the U.S. Bureau of Justice Statistics, Capital Punishment Annual. Data loads lazily.
#'
#' @section Notes: Prosecutors in different counties might pursue the death penalty with different intensities, so it makes sense to collect murder and execution data at the county level. This could be combined with better demographic information at the county level, along with better economic data (say, on wages for various kinds of employment).
#'
#' Used in Text: pages 480, 505, 548
#'
#' @docType data
#'
#' @usage data('murder')
#'
#' @format A data.frame with 153 observations on 13 variables:
#' \itemize{
#' \item \strong{id:} state identifier
#' \item \strong{state:} postal code
#' \item \strong{year:} 87, 90, or 93
#' \item \strong{mrdrte:} murders per 100,000 people
#' \item \strong{exec:} total executions, past 3 years
#' \item \strong{unem:} annual unem. rate
#' \item \strong{d90:} =1 if year == 90
#' \item \strong{d93:} =1 if year == 93
#' \item \strong{cmrdrte:} mrdrte - mrdrte[_n-1]
#' \item \strong{cexec:} exec - exec[_n-1]
#' \item \strong{cunem:} unem - unem[_n-1]
#' \item \strong{cexec_1:} cexec[_n-1]
#' \item \strong{cunem_1:} cunem[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(murder)
"murder"
#' nbasal
#'
#' Wooldridge Source: Collected by Christopher Torrente, a former MSU undergraduate, for a term project. He obtained the salary data and the career statistics from The Complete Handbook of Pro Basketball, 1995, edited by Zander Hollander. New York: Signet. The demographic information (marital status, number of children, and so on) was obtained from the teams’ 1994-1995 media guides. Data loads lazily.
#'
#' @section Notes: A panel version of this data set could be useful for further isolating productivity effects of marital status. One would need to obtain information on enough different players in at least two years, where some players who were not married in the initial year are married in later years. Fixed effects (or first differencing, for two years) is the natural estimation method.
#'
#' Used in Text: pages 222-223, 264-265
#'
#' @docType data
#'
#' @usage data('nbasal')
#'
#' @format A data.frame with 269 observations on 22 variables:
#' \itemize{
#' \item \strong{marr:} =1 if married
#' \item \strong{wage:} annual salary, thousands $
#' \item \strong{exper:} years as professional player
#' \item \strong{age:} age in years
#' \item \strong{coll:} years played in college
#' \item \strong{games:} average games per year
#' \item \strong{minutes:} average minutes per year
#' \item \strong{guard:} =1 if guard
#' \item \strong{forward:} =1 if forward
#' \item \strong{center:} =1 if center
#' \item \strong{points:} points per game
#' \item \strong{rebounds:} rebounds per game
#' \item \strong{assists:} assists per game
#' \item \strong{draft:} draft number
#' \item \strong{allstar:} =1 if ever all star
#' \item \strong{avgmin:} minutes per game
#' \item \strong{lwage:} log(wage)
#' \item \strong{black:} =1 if black
#' \item \strong{children:} =1 if has children
#' \item \strong{expersq:} exper^2
#' \item \strong{agesq:} age^2
#' \item \strong{marrblck:} marr*black
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(nbasal)
"nbasal"
#' ncaa_rpi
#'
#' Wooldridge Source: Data on NCAA men’s basketball teams, collected by Weizhao Sun for a senior seminar project in sports economics at Michigan State University, Spring 2017. He used various sources, including www.espn.com and www.teamrankings.com/ncaa-basketball/rpi-ranking/rpi-rating-by-team. Data loads lazily.
#'
#' @section Notes: This is a nice example of how multiple regression analysis can be used to determine whether rankings compiled by experts – the so-called pre-season RPI in this case – provide additional information beyond what we can obtain from widely available databases. A simple and interesting question: once the previous year's post-season RPI is controlled for, does the pre-season RPI – which is supposed to add information on recruiting and player development – help to predict performance (such as win percentage or making it to the NCAA men's basketball tournament)? For the binary outcome that indicates making it to the NCAA tournament, a probit or logit model can be used for courses that introduce more advanced methods. There are some other interesting variables, such as coaching experience, that can be included, too.
#'
#' Used in Text: not used
#'
#' @docType data
#'
#' @usage data('ncaa_rpi')
#'
#' @format A data.frame with 336 observations on 14 variables:
#' \itemize{
#' \item \strong{team:} team name
#' \item \strong{year:} year
#' \item \strong{conference:} conference
#' \item \strong{postrpi:} post-season RPI rank
#' \item \strong{prerpi:} pre-season RPI rank
#' \item \strong{postrpi_1:} post-season RPI rank, 1 yr ago
#' \item \strong{postrpi_2:} post-season RPI rank, 2 yrs ago
#' \item \strong{recruitrank:} recruits rank
#' \item \strong{wins:} number of games won
#' \item \strong{losses:} number of games lost
#' \item \strong{winperc:} winning percentage
#' \item \strong{tourney:} =1 if in NCAA tournament
#' \item \strong{coachexper:} coach experience
#' \item \strong{power5:} =1 if Power Five conference
#' }
#' @source \url{http://www.cengage.com/c/introductory-econometrics-a-modern-approach-7e-wooldridge}
#' @examples str(ncaa_rpi)
"ncaa_rpi"
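The regressions suggested in the Notes above can be sketched in R. This is a hedged illustration, not an exercise from the text; the variable names follow the listing above, and the choice of controls is only one reasonable specification.

```r
library(wooldridge)
data("ncaa_rpi")

# Does the pre-season RPI help predict win percentage once the
# previous year's post-season RPI is controlled for?
ols <- lm(winperc ~ prerpi + postrpi_1, data = ncaa_rpi)
summary(ols)

# Probit for the binary outcome: making the NCAA tournament
probit <- glm(tourney ~ prerpi + postrpi_1 + power5,
              family = binomial(link = "probit"), data = ncaa_rpi)
summary(probit)
```

If prerpi remains significant after conditioning on postrpi_1, the pre-season ranking carries information beyond last season's performance.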
#' nyse
#'
#' Wooldridge Source: These are Wednesday closing prices of value-weighted NYSE average, available in many publications. I do not recall the particular source I used when I collected these data at MIT. Probably the easiest way to get similar data is to go to the NYSE web site, www.nyse.com. Data loads lazily.
#'
#' Used in Text: pages 388-389, 407, 436, 438, 440-441, 442, 663-664
#'
#' @docType data
#'
#' @usage data('nyse')
#'
#' @format A data.frame with 691 observations on 8 variables:
#' \itemize{
#' \item \strong{price:} NYSE stock price index
#' \item \strong{return:} 100*(p - p(-1))/p(-1)
#' \item \strong{return_1:} lagged return
#' \item \strong{t:} time trend
#' \item \strong{price_1:} price[_n-1]
#' \item \strong{price_2:} price[_n-2]
#' \item \strong{cprice:} price - price_1
#' \item \strong{cprice_1:} lagged cprice
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(nyse)
"nyse"
#' okun
#'
#' Wooldridge Source: Economic Report of the President, 2007, Tables B-4 and B-42. Data loads lazily.
#'
#' Used in Text: pages 410, 444
#'
#' @docType data
#'
#' @usage data('okun')
#'
#' @format A data.frame with 47 observations on 4 variables:
#' \itemize{
#' \item \strong{year:} 1959 through 2005
#' \item \strong{pcrgdp:} percentage change in real GDP
#' \item \strong{unem:} civilian unemployment rate
#' \item \strong{cunem:} unem - unem[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(okun)
"okun"
#' openness
#'
#' Wooldridge Source: D. Romer (1993), “Openness and Inflation: Theory and Evidence,” Quarterly Journal of Economics 108, 869-903. The data are included in the article. Data loads lazily.
#'
#' Used in Text: pages 566, 579
#'
#' @docType data
#'
#' @usage data('openness')
#'
#' @format A data.frame with 114 observations on 12 variables:
#' \itemize{
#' \item \strong{open:} imports as percent GDP, '73-
#' \item \strong{inf:} avg. annual inflation, '73-
#' \item \strong{pcinc:} 1980 per capita inc., U.S. $
#' \item \strong{land:} land area, square miles
#' \item \strong{oil:} =1 if major oil producer
#' \item \strong{good:} =1 if 'good' data
#' \item \strong{lpcinc:} log(pcinc)
#' \item \strong{lland:} log(land)
#' \item \strong{lopen:} log(open)
#' \item \strong{linf:} log(inf)
#' \item \strong{opendec:} open/100
#' \item \strong{linfdec:} log(inf/100)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(openness)
"openness"
#' pension
#'
#' Wooldridge Source: L.E. Papke (2004), “Individual Financial Decisions in Retirement Saving: The Role of Participant-Direction,” Journal of Public Economics 88, 39-61. Professor Papke kindly provided the data. She collected them from the National Longitudinal Survey of Mature Women, 1991. Data loads lazily.
#'
#' Used in Text: page 506
#'
#' @docType data
#'
#' @usage data('pension')
#'
#' @format A data.frame with 194 observations on 19 variables:
#' \itemize{
#' \item \strong{id:} family identifier
#' \item \strong{pyears:} years in pension plan
#' \item \strong{prftshr:} =1 if profit sharing plan
#' \item \strong{choice:} =1 if can choose method invest
#' \item \strong{female:} =1 if female
#' \item \strong{married:} =1 if married
#' \item \strong{age:} age in years
#' \item \strong{educ:} highest grade completed
#' \item \strong{finc25:} $15,000 < faminc92 <= $25,000
#' \item \strong{finc35:} $25,000 < faminc92 <= $35,000
#' \item \strong{finc50:} $35,000 < faminc92 <= $50,000
#' \item \strong{finc75:} $50,000 < faminc92 <= $75,000
#' \item \strong{finc100:} $75,000 < faminc92 <= $100,000
#' \item \strong{finc101:} $100,000 < faminc92
#' \item \strong{wealth89:} net worth, 1989, $1000
#' \item \strong{black:} =1 if black
#' \item \strong{stckin89:} =1 if owned stock in 1989
#' \item \strong{irain89:} =1 if had IRA in 1989
#' \item \strong{pctstck:} 0=mstbnds,50=mixed,100=mststcks
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(pension)
"pension"
#' phillips
#'
#' Wooldridge Source: Economic Report of the President, 2004, Tables B-42 and B-64. Data loads lazily.
#'
#' Used in Text: pages 355-356, 379, 390-391, 408, 409, 418, 428, 443, 548-549, 642, 656, 659, 662, 672, 817
#'
#' @docType data
#'
#' @usage data('phillips')
#'
#' @format A data.frame with 56 observations on 7 variables:
#' \itemize{
#' \item \strong{year:} 1948 through 2003
#' \item \strong{unem:} civilian unemployment rate, percent
#' \item \strong{inf:} percentage change in CPI
#' \item \strong{inf_1:} inf[_n-1]
#' \item \strong{unem_1:} unem[_n-1]
#' \item \strong{cinf:} inf - inf_1
#' \item \strong{cunem:} unem - unem_1
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(phillips)
"phillips"
#' pntsprd
#'
#' Wooldridge Source: Collected by Scott Resnick, a former MSU undergraduate, from various newspaper sources. Data loads lazily.
#'
#' @section Notes: The data are for the 1994-1995 men's college basketball season. The spread is for the day before the game was played. One might collect more recent data and determine whether the spread has become a less accurate predictor of the actual outcome in more recent years. In other words, in the simple regression of the actual score differential on the spread, is the variance larger in more recent years? (We should fully expect the slope coefficient not to be statistically different from one.)
#'
#' Used in Text: pages 300, 624, 697
#'
#' @docType data
#'
#' @usage data('pntsprd')
#'
#' @format A data.frame with 553 observations on 12 variables:
#' \itemize{
#' \item \strong{favscr:} favored team's score
#' \item \strong{undscr:} underdog's score
#' \item \strong{spread:} las vegas spread
#' \item \strong{favhome:} =1 if favored team at home
#' \item \strong{neutral:} =1 if neutral site
#' \item \strong{fav25:} =1 if favored team in top 25
#' \item \strong{und25:} =1 if underdog in top 25
#' \item \strong{fregion:} favorite's region of country
#' \item \strong{uregion:} underdog's region of country
#' \item \strong{scrdiff:} favscr - undscr
#' \item \strong{sprdcvr:} =1 if spread covered
#' \item \strong{favwin:} =1 if favored team wins
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(pntsprd)
"pntsprd"
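The simple regression described in the Notes above can be sketched in R; this is a suggestion rather than an exercise from the text. The manual t statistic tests the hypothesis mentioned there, that the slope on the spread equals one.

```r
library(wooldridge)
data("pntsprd")

# Regress the actual score differential on the Las Vegas spread
reg <- lm(scrdiff ~ spread, data = pntsprd)
summary(reg)

# t statistic for H0: slope on spread = 1
b <- coef(summary(reg))["spread", ]
(b["Estimate"] - 1) / b["Std. Error"]
```

A small t statistic is consistent with the spread being an unbiased predictor of the score differential.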
#' prison
#'
#' Wooldridge Source: S.D. Levitt (1996), “The Effect of Prison Population Size on Crime Rates: Evidence from Prison Overcrowding Legislation,” Quarterly Journal of Economics 111, 319-351. Professor Levitt kindly provided me with the data, of which I used a subset. Data loads lazily.
#'
#' Used in Text: pages 573-574
#'
#' @docType data
#'
#' @usage data('prison')
#'
#' @format A data.frame with 714 observations on 45 variables:
#' \itemize{
#' \item \strong{state:} alphabetical; DC = 9
#' \item \strong{year:} 80 to 93
#' \item \strong{govelec:} =1 if gubernatorial election
#' \item \strong{black:} proportion black
#' \item \strong{metro:} proportion in metro. areas
#' \item \strong{unem:} proportion unemployed
#' \item \strong{criv:} viol. crimes per 100,000
#' \item \strong{crip:} prop. crimes per 100,000
#' \item \strong{lcriv:} log(criv)
#' \item \strong{lcrip:} log(crip)
#' \item \strong{gcriv:} lcriv - lcriv_1
#' \item \strong{gcrip:} lcrip - lcrip_1
#' \item \strong{y81:} =1 if year == 81
#' \item \strong{y82:} =1 if year == 82
#' \item \strong{y83:} =1 if year == 83
#' \item \strong{y84:} =1 if year == 84
#' \item \strong{y85:} =1 if year == 85
#' \item \strong{y86:} =1 if year == 86
#' \item \strong{y87:} =1 if year == 87
#' \item \strong{y88:} =1 if year == 88
#' \item \strong{y89:} =1 if year == 89
#' \item \strong{y90:} =1 if year == 90
#' \item \strong{y91:} =1 if year == 91
#' \item \strong{y92:} =1 if year == 92
#' \item \strong{y93:} =1 if year == 93
#' \item \strong{ag0_14:} prop. pop. 0 to 14 yrs
#' \item \strong{ag15_17:} prop. pop. 15 to 17 yrs
#' \item \strong{ag18_24:} prop. pop. 18 to 24 yrs
#' \item \strong{ag25_34:} prop. pop. 25 to 34 yrs
#' \item \strong{incpc:} per capita income, nominal
#' \item \strong{polpc:} police per 100,000 residents
#' \item \strong{gincpc:} log(incpc) - log(incpc_1)
#' \item \strong{gpolpc:} lpolpc - lpolpc_1
#' \item \strong{cag0_14:} change in ag0_14
#' \item \strong{cag15_17:} change in ag15_17
#' \item \strong{cag18_24:} change in ag18_24
#' \item \strong{cag25_34:} change in ag25_34
#' \item \strong{cunem:} change in unem
#' \item \strong{cblack:} change in black
#' \item \strong{cmetro:} change in metro
#' \item \strong{pris:} prison pop. per 100,000
#' \item \strong{lpris:} log(pris)
#' \item \strong{gpris:} lpris - lpris[_n-1]
#' \item \strong{final1:} =1 if fnl dec on litig, curr yr
#' \item \strong{final2:} =1 if dec on litig, prev 2 yrs
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(prison)
"prison"
#' prminwge
#'
#' Wooldridge Source: A.J. Castillo-Freeman and R.B. Freeman (1992), “When the Minimum Wage Really Bites: The Effect of the U.S.-Level Minimum Wage on Puerto Rico,” in Immigration and the Work Force, edited by G.J. Borjas and R.B. Freeman, 177-211. Chicago: University of Chicago Press. The data are reported in the article. Data loads lazily.
#'
#' @section Notes: Given the ongoing debate on the employment effects of the minimum wage, this would be a great data set to try to update. The coverage rates are the most difficult variables to construct.
#'
#' Used in Text: pages 356-357, 369-370, 420-421, 434
#'
#' @docType data
#'
#' @usage data('prminwge')
#'
#' @format A data.frame with 38 observations on 25 variables:
#' \itemize{
#' \item \strong{year:} 1950-1987
#' \item \strong{avgmin:} weighted avg min wge, 44 indust
#' \item \strong{avgwage:} wghted avg hrly wge, 44 indust
#' \item \strong{kaitz:} Kaitz min wage index
#' \item \strong{avgcov:} wghted avg coverage, 8 indust
#' \item \strong{covt:} economy-wide coverage of min wg
#' \item \strong{mfgwage:} avg manuf. wage
#' \item \strong{prdef:} Puerto Rican price deflator
#' \item \strong{prepop:} PR employ/popul ratio
#' \item \strong{prepopf:} PR employ/popul ratio, alter.
#' \item \strong{prgnp:} PR GNP
#' \item \strong{prunemp:} PR unemployment rate
#' \item \strong{usgnp:} US GNP
#' \item \strong{t:} time trend: 1 to 38
#' \item \strong{post74:} time trend: starts in 1974
#' \item \strong{lprunemp:} log(prunemp)
#' \item \strong{lprgnp:} log(prgnp)
#' \item \strong{lusgnp:} log(usgnp)
#' \item \strong{lkaitz:} log(kaitz)
#' \item \strong{lprun_1:} lprunemp[_n-1]
#' \item \strong{lprepop:} log(prepop)
#' \item \strong{lprep_1:} lprepop[_n-1]
#' \item \strong{mincov:} (avgmin/avgwage)*avgcov
#' \item \strong{lmincov:} log(mincov)
#' \item \strong{lavgmin:} log(avgmin)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(prminwge)
"prminwge"
#' rdchem
#'
#' Wooldridge Source: From Businessweek R&D Scoreboard, October 25, 1991. Data loads lazily.
#'
#' @section Notes: It would be interesting to collect more recent data and see whether the R&D/firm size relationship has changed over time.
#'
#' Used in Text: pages 64, 139-140, 159-160, 204, 218, 327-329, 339
#'
#' @docType data
#'
#' @usage data('rdchem')
#'
#' @format A data.frame with 32 observations on 8 variables:
#' \itemize{
#' \item \strong{rd:} R&D spending, millions
#' \item \strong{sales:} firm sales, millions
#' \item \strong{profits:} profits, millions
#' \item \strong{rdintens:} rd as percent of sales
#' \item \strong{profmarg:} profits as percent of sales
#' \item \strong{salessq:} sales^2
#' \item \strong{lsales:} log(sales)
#' \item \strong{lrd:} log(rd)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(rdchem)
"rdchem"
#' rdtelec
#'
#' Wooldridge Source: See RDCHEM.RAW. Data loads lazily.
#'
#' @section Notes: According to these data, the R&D/firm size relationship is different in the telecommunications industry than in the chemical industry: there is pretty strong evidence that R&D intensity decreases with firm size in telecommunications. Of course, that was in 1991. The data could easily be updated, and a panel data set could be constructed.
#'
#' Used in Text: not used
#'
#' @docType data
#'
#' @usage data('rdtelec')
#'
#' @format A data.frame with 29 observations on 6 variables:
#' \itemize{
#' \item \strong{rd:} R&D spending, millions $
#' \item \strong{sales:} firm sales, millions $
#' \item \strong{rdintens:} rd as percent of sales
#' \item \strong{lrd:} log(rd)
#' \item \strong{lsales:} log(sales)
#' \item \strong{salessq:} sales^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(rdtelec)
"rdtelec"
#' recid
#'
#' Wooldridge Source: C.-F. Chung, P. Schmidt, and A.D. Witte (1991), “Survival Analysis: A Survey,” Journal of Quantitative Criminology 7, 59-98. Professor Chung kindly provided the data. Data loads lazily.
#'
#' Used in Text: pages 611-612, 625
#'
#' @docType data
#'
#' @usage data('recid')
#'
#' @format A data.frame with 1445 observations on 18 variables:
#' \itemize{
#' \item \strong{black:} =1 if black
#' \item \strong{alcohol:} =1 if alcohol problems
#' \item \strong{drugs:} =1 if drug history
#' \item \strong{super:} =1 if release supervised
#' \item \strong{married:} =1 if married when incarc.
#' \item \strong{felon:} =1 if felony sentence
#' \item \strong{workprg:} =1 if in N.C. pris. work prg.
#' \item \strong{property:} =1 if property crime
#' \item \strong{person:} =1 if crime against person
#' \item \strong{priors:} # prior convictions
#' \item \strong{educ:} years of schooling
#' \item \strong{rules:} # rules violations in prison
#' \item \strong{age:} in months
#' \item \strong{tserved:} time served, rounded to months
#' \item \strong{follow:} length follow period, months
#' \item \strong{durat:} min(time until return, follow)
#' \item \strong{cens:} =1 if duration right censored
#' \item \strong{ldurat:} log(durat)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(recid)
"recid"
#' rental
#'
#' Wooldridge Source: David Harvey, a former MSU undergraduate, collected the data for 64 “college towns” from the 1980 and 1990 United States censuses. Data loads lazily.
#'
#' @section Notes: These data can be used in a somewhat crude simultaneous equations analysis, either focusing on one year or pooling the two years. (In the latter case, in an advanced class, you might have students compute the standard errors robust to serial correlation across the two time periods.) The demand equation would have ltothsg as a function of lrent, lavginc, and lpop. The supply equation would have ltothsg as a function of lrent, pctstu, and lpop. Thus, in estimating the demand function, pctstu is used as an IV for lrent. Clearly one can quibble with excluding pctstu from the demand equation, but the estimated demand function gives a negative price effect. Getting information for 2000, and adding many more college towns, would make for a much better analysis. Information on number of spaces in on-campus dormitories would be a big improvement, too.
#'
#' Used in Text: pages 160, 477, 503-504
#'
#' @docType data
#'
#' @usage data('rental')
#'
#' @format A data.frame with 128 observations on 23 variables:
#' \itemize{
#' \item \strong{city:} city label, 1 to 64
#' \item \strong{year:} 80 or 90
#' \item \strong{pop:} city population
#' \item \strong{enroll:} # college students enrolled
#' \item \strong{rent:} average rent
#' \item \strong{rnthsg:} renter occupied units
#' \item \strong{tothsg:} occupied housing units
#' \item \strong{avginc:} per capita income
#' \item \strong{lenroll:} log(enroll)
#' \item \strong{lpop:} log(pop)
#' \item \strong{lrent:} log(rent)
#' \item \strong{ltothsg:} log(tothsg)
#' \item \strong{lrnthsg:} log(rnthsg)
#' \item \strong{lavginc:} log(avginc)
#' \item \strong{clenroll:} change in lenroll from 80 to 90
#' \item \strong{clpop:} change in lpop
#' \item \strong{clrent:} change in lrent
#' \item \strong{cltothsg:} change in ltothsg
#' \item \strong{clrnthsg:} change in lrnthsg
#' \item \strong{clavginc:} change in lavginc
#' \item \strong{pctstu:} percent of population students
#' \item \strong{cpctstu:} change in pctstu
#' \item \strong{y90:} =1 if year == 90
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(rental)
"rental"
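The IV strategy described in the Notes above can be sketched in R for a single year. This is a hedged illustration, not an exercise from the text; it assumes the AER package for its ivreg() function, though any 2SLS routine would do.

```r
library(wooldridge)
data("rental")
library(AER)  # assumed: provides ivreg() for 2SLS

# Demand equation for 1990, using pctstu as an instrument for lrent;
# the exogenous regressors lavginc and lpop appear in both parts
# of the formula, as 2SLS requires.
r90 <- subset(rental, year == 90)
demand <- ivreg(ltothsg ~ lrent + lavginc + lpop |
                  pctstu + lavginc + lpop, data = r90)
summary(demand)
```

As the Notes indicate, the estimated demand function should show a negative price effect.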
#' return
#'
#' Wooldridge Source: Collected by Stephanie Balys, a former MSU undergraduate, from the New York Stock Exchange and Compustat. Data loads lazily.
#'
#' @section Notes: More can be done with this data set. Recently, I discovered that lsp90 does appear to predict return (and the log of the 1990 stock price works better than sp90). I am a little suspicious, but you could use the negative coefficient on lsp90 to illustrate “reversion to the mean.”
#'
#' Used in Text: pages 162-163
#'
#' @docType data
#'
#' @usage data('return')
#'
#' @format A data.frame with 142 observations on 12 variables:
#' \itemize{
#' \item \strong{roe:} return on equity, 1990
#' \item \strong{rok:} return on capital, 1990
#' \item \strong{dkr:} debt/capital, 1990
#' \item \strong{eps:} earnings per share, 1990
#' \item \strong{netinc:} net income, 1990 (mills.)
#' \item \strong{sp90:} stock price, end 1990
#' \item \strong{sp94:} stock price, end 1994
#' \item \strong{salary:} CEO salary, 1990 (thous.)
#' \item \strong{return:} percent change s.p., 90-94
#' \item \strong{lsalary:} log(salary)
#' \item \strong{lsp90:} log(sp90)
#' \item \strong{lnetinc:} log(netinc)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(return)
"return"
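The regression mentioned in the Notes above can be sketched in R; a simple illustration, not an exercise from the text.

```r
library(wooldridge)
data("return")

# Does the log of the 1990 stock price predict the 1990-94 return?
# A negative coefficient on lsp90 would illustrate "reversion to
# the mean" as suggested in the Notes.
reg <- lm(return ~ lsp90, data = return)
summary(reg)
```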
#' saving
#'
#' Wooldridge Source: Unknown. Data loads lazily.
#'
#' @section Notes: I remember entering this data set in the late 1980s, and I am pretty sure it came directly from an introductory econometrics text. But so far my search has been fruitless. If anyone runs across this data set, I would appreciate knowing about it.
#'
#' Used in Text: not used
#'
#' @docType data
#'
#' @usage data('saving')
#'
#' @format A data.frame with 100 observations on 7 variables:
#' \itemize{
#' \item \strong{sav:} annual savings, $
#' \item \strong{inc:} annual income, $
#' \item \strong{size:} family size
#' \item \strong{educ:} years educ, household head
#' \item \strong{age:} age of household head
#' \item \strong{black:} =1 if household head is black
#' \item \strong{cons:} annual consumption, $
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(saving)
"saving"
#' school93_98
#'
#' Wooldridge Source: L.E. Papke (2005), “The Effects of Spending on Test Pass Rates: Evidence from Michigan,” Journal of Public Economics 89, 821-839. Data loads lazily.
#'
#' @section Notes: This is closer to the data actually used in the Papke paper, as it is at the school (building) level. It is unbalanced because data on scores and some of the spending and other variables are missing for some schools. While the usual RE and FE methods can be applied directly, obtaining the correlated random effects version of the Hausman test is more advanced. Computer Exercise 17 in Chapter 14 walks the reader through it.
#'
#' Used in Text: page 491
#'
#' @docType data
#'
#' @usage data('school93_98')
#'
#' @format A data.frame with 10668 observations on 18 variables:
#' \itemize{
#' \item \strong{distid:} district identifier
#' \item \strong{schid:} school identifier
#' \item \strong{lunch:} percent eligible for free lunch
#' \item \strong{enrol:} number of students
#' \item \strong{exppp:} expenditure per pupil
#' \item \strong{math4:} percent passing 4th grade math test
#' \item \strong{year:} 1993 = school year 1992-1993
#' \item \strong{y93:} =1 if year == 1993
#' \item \strong{y94:} =1 if year == 1994
#' \item \strong{y95:} =1 if year == 1995
#' \item \strong{y96:} =1 if year == 1996
#' \item \strong{y97:} =1 if year == 1997
#' \item \strong{y98:} =1 if year == 1998
#' \item \strong{rexpp:} (exppp/cpi)*1.605, 1997 $
#' \item \strong{found:}
#' \item \strong{lenrol:} log(enrol)
#' \item \strong{lrexpp:} log(rexpp)
#' \item \strong{lavgrexpp:} log((rexpp + rexpp[_n-1])/2)
#' }
#' @source \url{http://www.cengage.com/c/introductory-econometrics-a-modern-approach-7e-wooldridge}
#' @examples str(school93_98)
"school93_98"
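The fixed effects estimation mentioned in the Notes above can be sketched in R. This is a hedged illustration, not the text's Computer Exercise; it assumes the plm package, and the chosen regressors are only one plausible specification of the Papke-style pass-rate equation.

```r
library(wooldridge)
data("school93_98")
library(plm)  # assumed: provides panel estimators

# School-level fixed effects with year dummies (1993 is the base year);
# the outcome is the 4th grade math pass rate.
fe <- plm(math4 ~ lavgrexpp + lunch + lenrol +
            y94 + y95 + y96 + y97 + y98,
          data = school93_98, index = c("schid", "year"),
          model = "within")
summary(fe)
```

Swapping model = "within" for model = "random" gives the RE estimates, and comparing the two is the starting point for the Hausman test discussed in the Notes.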
#' sleep75
#'
#' Wooldridge Source: J.E. Biddle and D.S. Hamermesh (1990), “Sleep and the Allocation of Time,” Journal of Political Economy 98, 922-943. Professor Biddle kindly provided the data. Data loads lazily.
#'
#' @section Notes: In their article, Biddle and Hamermesh include an hourly wage measure in the sleep equation. An econometric problem that arises is that the hourly wage is missing for those who do not work. Plus, the wage offer may be endogenous (even if it were always observed). Biddle and Hamermesh employ extensions of the sample selection methods in Section 17.5. See their article for details.
#'
#' Used in Text: pages 64, 106-107, 162, 259, 263, 299
#'
#' @docType data
#'
#' @usage data('sleep75')
#'
#' @format A data.frame with 706 observations on 34 variables:
#' \itemize{
#' \item \strong{age:} in years
#' \item \strong{black:} =1 if black
#' \item \strong{case:} identifier
#' \item \strong{clerical:} =1 if clerical worker
#' \item \strong{construc:} =1 if construction worker
#' \item \strong{educ:} years of schooling
#' \item \strong{earns74:} total earnings, 1974
#' \item \strong{gdhlth:} =1 if in good or excel. health
#' \item \strong{inlf:} =1 if in labor force
#' \item \strong{leis1:} sleep - totwrk
#' \item \strong{leis2:} slpnaps - totwrk
#' \item \strong{leis3:} rlxall - totwrk
#' \item \strong{smsa:} =1 if live in smsa
#' \item \strong{lhrwage:} log hourly wage
#' \item \strong{lothinc:} log othinc, unless othinc < 0
#' \item \strong{male:} =1 if male
#' \item \strong{marr:} =1 if married
#' \item \strong{prot:} =1 if Protestant
#' \item \strong{rlxall:} slpnaps + personal activs
#' \item \strong{selfe:} =1 if self employed
#' \item \strong{sleep:} mins sleep at night, per wk
#' \item \strong{slpnaps:} minutes sleep, inc. naps
#' \item \strong{south:} =1 if live in south
#' \item \strong{spsepay:} spousal wage income
#' \item \strong{spwrk75:} =1 if spouse works
#' \item \strong{totwrk:} mins worked per week
#' \item \strong{union:} =1 if belong to union
#' \item \strong{worknrm:} mins work main job
#' \item \strong{workscnd:} mins work second job
#' \item \strong{exper:} age - educ - 6
#' \item \strong{yngkid:} =1 if children < 3 present
#' \item \strong{yrsmarr:} years married
#' \item \strong{hrwage:} hourly wage
#' \item \strong{agesq:} age^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(sleep75)
"sleep75"
#' slp75_81
#'
#' Wooldridge Source: See SLEEP75.RAW. Data loads lazily.
#'
#' Used in Text: pages 463-464
#'
#' @docType data
#'
#' @usage data('slp75_81')
#'
#' @format A data.frame with 239 observations on 20 variables:
#' \itemize{
#' \item \strong{age75:} age in 1975
#' \item \strong{educ75:} years educ in '75
#' \item \strong{educ81:} years educ in '81
#' \item \strong{gdhlth75:} = 1 if good hlth in '75
#' \item \strong{gdhlth81:} =1 if good hlth in '81
#' \item \strong{male:} =1 if male
#' \item \strong{marr75:} = 1 if married in '75
#' \item \strong{marr81:} =1 if married in '81
#' \item \strong{slpnap75:} mins slp wk, inc naps, '75
#' \item \strong{slpnap81:} mins slp wk, inc naps, '81
#' \item \strong{totwrk75:} minutes worked per week, '75
#' \item \strong{totwrk81:} minutes worked per week, '81
#' \item \strong{yngkid75:} = 1 if child < 3, '75
#' \item \strong{yngkid81:} =1 if child < 3, '81
#' \item \strong{ceduc:} change in educ
#' \item \strong{cgdhlth:} change in gdhlth
#' \item \strong{cmarr:} change in marr
#' \item \strong{cslpnap:} change in slpnap
#' \item \strong{ctotwrk:} change in totwrk
#' \item \strong{cyngkid:} change in yngkid
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(slp75_81)
"slp75_81"
#' smoke
#'
#' Wooldridge Source: J. Mullahy (1997), “Instrumental-Variable Estimation of Count Data Models: Applications to Models of Cigarette Smoking Behavior,” Review of Economics and Statistics 79, 586-593. Professor Mullahy kindly provided the data. Data loads lazily.
#'
#' @section Notes: If you want to do a “fancy” IV version of Computer Exercise C16.1, you could estimate a reduced form count model for cigs using the Poisson regression methods in Section 17.3, and then use the fitted values as an IV for cigs. Presumably, this would be for a fairly advanced class.
#'
#' Used in Text: pages 183, 288-289, 298, 301, 578, 627
#'
#' @docType data
#'
#' @usage data('smoke')
#'
#' @format A data.frame with 807 observations on 10 variables:
#' \itemize{
#' \item \strong{educ:} years of schooling
#' \item \strong{cigpric:} state cig. price, cents/pack
#' \item \strong{white:} =1 if white
#' \item \strong{age:} in years
#' \item \strong{income:} annual income, $
#' \item \strong{cigs:} cigs. smoked per day
#' \item \strong{restaurn:} =1 if rest. smk. restrictions
#' \item \strong{lincome:} log(income)
#' \item \strong{agesq:} age^2
#' \item \strong{lcigpric:} log(cigpric)
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(smoke)
"smoke"
#' traffic1
#'
#' Wooldridge Source: I collected these data from two sources, the 1992 Statistical Abstract of the United States (Tables 1009, 1012) and A Digest of State Alcohol-Highway Safety Related Legislation, 1985 and 1990, published by the U.S. National Highway Traffic Safety Administration. Data loads lazily.
#'
#' @section Notes: In addition to adding recent years, this data set could really use state-level tax rates on alcohol. Other important law changes include defining driving under the influence as having a blood alcohol level of .08 or more, which many states have adopted since the 1980s. The trend really picked up in the 1990s and continued through the 2000s.
#'
#' Used in Text: pages 467-468, 688?
#'
#' @docType data
#'
#' @usage data('traffic1')
#'
#' @format A data.frame with 51 observations on 13 variables:
#' \itemize{
#' \item \strong{state:}
#' \item \strong{admn90:} =1 if admin. revoc., '90
#' \item \strong{admn85:} =1 if admin. revoc., '85
#' \item \strong{open90:} =1 if open cont. law, '90
#' \item \strong{open85:} =1 if open cont. law, '85
#' \item \strong{dthrte90:} deaths per 100 mill. miles, '90
#' \item \strong{dthrte85:} deaths per 100 mill. miles, '85
#' \item \strong{speed90:} =1 if 65 mph, 1990
#' \item \strong{speed85:} =0 always
#' \item \strong{cdthrte:} dthrte90 - dthrte85
#' \item \strong{cadmn:} admn90 - admn85
#' \item \strong{copen:} open90 - open85
#' \item \strong{cspeed:} speed90 - speed85
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(traffic1)
"traffic1"
#' traffic2
#'
#' Wooldridge Source: P.S. McCarthy (1994), “Relaxed Speed Limits and Highway Safety: New Evidence from California,” Economics Letters 46, 173-179. Professor McCarthy kindly provided the data. Data loads lazily.
#'
#' @section Notes: Many states have changed maximum speed limits and imposed seat belt laws over the past 25 years. Data similar to those in TRAFFIC2.RAW should be fairly easy to obtain for a particular state. One should combine this information with changes in a state’s blood alcohol limit and the passage of per se and open container laws.
#'
#' Used in Text: pages 378-379, 409, 443, 674, 695-696
#'
#' @docType data
#'
#' @usage data('traffic2')
#'
#' @format A data.frame with 108 observations on 48 variables:
#' \itemize{
#' \item \strong{year:} 1981 to 1989
#' \item \strong{totacc:} statewide total accidents
#' \item \strong{fatacc:} statewide fatal accidents
#' \item \strong{injacc:} statewide injury accidents
#' \item \strong{pdoacc:} property damage only accidents
#' \item \strong{ntotacc:} noninterstate total acc.
#' \item \strong{nfatacc:} noninterstate fatal acc.
#' \item \strong{ninjacc:} noninterstate injur acc.
#' \item \strong{npdoacc:} noninterstate property acc.
#' \item \strong{rtotacc:} tot. acc. on rural 65 mph roads
#' \item \strong{rfatacc:} fat. acc. on rural 65 mph roads
#' \item \strong{rinjacc:} inj. acc. on rural 65 mph roads
#' \item \strong{rpdoacc:} prp. acc. on rural 65 mph roads
#' \item \strong{ushigh:} acc. on U.S. highways
#' \item \strong{cntyrds:} acc. on county roads
#' \item \strong{strtes:} acc. on state routes
#' \item \strong{t:} time trend
#' \item \strong{tsq:} t^2
#' \item \strong{unem:} state unemployment rate
#' \item \strong{spdlaw:} =1 after 65 mph in effect
#' \item \strong{beltlaw:} =1 after seatbelt law
#' \item \strong{wkends:} # weekends in month
#' \item \strong{feb:} =1 if month is Feb.
#' \item \strong{mar:} =1 if month is Mar.
#' \item \strong{apr:} =1 if month is Apr.
#' \item \strong{may:} =1 if month is May
#' \item \strong{jun:} =1 if month is Jun.
#' \item \strong{jul:} =1 if month is Jul.
#' \item \strong{aug:} =1 if month is Aug.
#' \item \strong{sep:} =1 if month is Sep.
#' \item \strong{oct:} =1 if month is Oct.
#' \item \strong{nov:} =1 if month is Nov.
#' \item \strong{dec:} =1 if month is Dec.
#' \item \strong{ltotacc:} log(totacc)
#' \item \strong{lfatacc:} log(fatacc)
#' \item \strong{prcfat:} 100*(fatacc/totacc)
#' \item \strong{prcrfat:} 100*(rfatacc/rtotacc)
#' \item \strong{lrtotacc:} log(rtotacc)
#' \item \strong{lrfatacc:} log(rfatacc)
#' \item \strong{lntotacc:} log(ntotacc)
#' \item \strong{lnfatacc:} log(nfatacc)
#' \item \strong{prcnfat:} 100*(nfatacc/ntotacc)
#' \item \strong{lushigh:} log(ushigh)
#' \item \strong{lcntyrds:} log(cntyrds)
#' \item \strong{lstrtes:} log(strtes)
#' \item \strong{spdt:} spdlaw*t
#' \item \strong{beltt:} beltlaw*t
#' \item \strong{prcfat_1:} prcfat[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(traffic2)
"traffic2"
#' twoyear
#'
#' Wooldridge Source: T.J. Kane and C.E. Rouse (1995), “Labor-Market Returns to Two- and Four-Year Colleges,” American Economic Review 85, 600-614. With Professor Rouse’s kind assistance, I obtained the data from her web site at Princeton University. Data loads lazily.
#'
#' @section Notes: As possible extensions, students can explore whether the returns to two-year or four-year colleges depend on race or gender. This is partly done in Problem 7.9, although there the two types of college credits are aggregated into a single variable. Also, should experience appear as a quadratic in the wage specification?
#'
#' Used in Text: pages 140-143, 165, 261, 340
#'
#' @docType data
#'
#' @usage data('twoyear')
#'
#' @format A data.frame with 6763 observations on 23 variables:
#' \itemize{
#' \item \strong{female:} =1 if female
#' \item \strong{phsrank:} percent high school rank; 100 = best
#' \item \strong{BA:} =1 if Bachelor's degree
#' \item \strong{AA:} =1 if Associate's degree
#' \item \strong{black:} =1 if African-American
#' \item \strong{hispanic:} =1 if Hispanic
#' \item \strong{id:} ID Number
#' \item \strong{exper:} total (actual) work experience
#' \item \strong{jc:} total 2-year credits
#' \item \strong{univ:} total 4-year credits
#' \item \strong{lwage:} log hourly wage
#' \item \strong{stotal:} total standardized test score
#' \item \strong{smcity:} =1 if small city, 1972
#' \item \strong{medcity:} =1 if med. city, 1972
#' \item \strong{submed:} =1 if suburb med. city, 1972
#' \item \strong{lgcity:} =1 if large city, 1972
#' \item \strong{sublg:} =1 if suburb large city, 1972
#' \item \strong{vlgcity:} =1 if very large city, 1972
#' \item \strong{subvlg:} =1 if sub. very lge. city, 1972
#' \item \strong{ne:} =1 if northeast
#' \item \strong{nc:} =1 if north central
#' \item \strong{south:} =1 if south
#' \item \strong{totcoll:} jc + univ
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(twoyear)
"twoyear"
#' volat
#'
#' Wooldridge Source: J.D. Hamilton and G. Lin (1996), “Stock Market Volatility and the Business Cycle,” Journal of Applied Econometrics 11, 573-593. I obtained these data from the Journal of Applied Econometrics data archive at http://qed.econ.queensu.ca/jae/. Data loads lazily.
#'
#' Used in Text: pages 378, 670, 671, 674
#'
#' @docType data
#'
#' @usage data('volat')
#'
#' @format A data.frame with 558 observations on 17 variables:
#' \itemize{
#' \item \strong{date:} 1947.01 to 1993.06
#' \item \strong{sp500:} S&P 500 index
#' \item \strong{divyld:} div. yield annualized rate
#' \item \strong{i3:} 3 mo. T-bill annualized rate
#' \item \strong{ip:} index of industrial production
#' \item \strong{pcsp:} pct chg, sp500, ann rate
#' \item \strong{rsp500:} return on sp500: pcsp + divyld
#' \item \strong{pcip:} pct chg, IP, ann rate
#' \item \strong{ci3:} i3 - i3[_n-1]
#' \item \strong{ci3_1:} ci3[_n-1]
#' \item \strong{ci3_2:} ci3[_n-2]
#' \item \strong{pcip_1:} pcip[_n-1]
#' \item \strong{pcip_2:} pcip[_n-2]
#' \item \strong{pcip_3:} pcip[_n-3]
#' \item \strong{pcsp_1:} pcsp[_n-1]
#' \item \strong{pcsp_2:} pcsp[_n-2]
#' \item \strong{pcsp_3:} pcsp[_n-3]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(volat)
"volat"
#' vote1
#'
#' Wooldridge Source: From M. Barone and G. Ujifusa, The Almanac of American Politics, 1992. Washington, DC: National Journal. Data loads lazily.
#'
#' Used in Text: pages 34, 39, 164, 221-222, 299, 699
#'
#' @docType data
#'
#' @usage data('vote1')
#'
#' @format A data.frame with 173 observations on 10 variables:
#' \itemize{
#' \item \strong{state:} state postal code
#' \item \strong{district:} congressional district
#' \item \strong{democA:} =1 if A is democrat
#' \item \strong{voteA:} percent vote for A
#' \item \strong{expendA:} camp. expends. by A, $1000s
#' \item \strong{expendB:} camp. expends. by B, $1000s
#' \item \strong{prtystrA:} percent vote for president
#' \item \strong{lexpendA:} log(expendA)
#' \item \strong{lexpendB:} log(expendB)
#' \item \strong{shareA:} 100*(expendA/(expendA+expendB))
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(vote1)
"vote1"
#' vote2
#'
#' Wooldridge Source: See VOTE1.RAW. Data loads lazily.
#'
#' @section Notes: These are panel data, at the Congressional district level, collected for the 1988 and 1990 U.S. House of Representative elections. Of course, much more recent data are available, possibly even in electronic form.
#'
#' Used in Text: pages 335-336, 478, 699
#'
#' @docType data
#'
#' @usage data('vote2')
#'
#' @format A data.frame with 186 observations on 26 variables:
#' \itemize{
#' \item \strong{state:} state postal code
#' \item \strong{district:} U.S. Congressional district
#' \item \strong{democ:} =1 if incumbent democrat
#' \item \strong{vote90:} inc. share two-party vote, 1990
#' \item \strong{vote88:} inc. share two-party vote, 1988
#' \item \strong{inexp90:} inc. camp. expends., 1990
#' \item \strong{chexp90:} chl. camp. expends., 1990
#' \item \strong{inexp88:} inc. camp. expends., 1988
#' \item \strong{chexp88:} chl. camp. expends., 1988
#' \item \strong{prtystr:} percent vote pres., same party, 1988
#' \item \strong{rptchall:} =1 if a repeat challenger
#' \item \strong{tenure:} years in H.R.
#' \item \strong{lawyer:} =1 if law degree
#' \item \strong{linexp90:} log(inexp90)
#' \item \strong{lchexp90:} log(chexp90)
#' \item \strong{linexp88:} log(inexp88)
#' \item \strong{lchexp88:} log(chexp88)
#' \item \strong{incshr90:} 100*(inexp90/(inexp90+chexp90))
#' \item \strong{incshr88:} 100*(inexp88/(inexp88+chexp88))
#' \item \strong{cvote:} vote90 - vote88
#' \item \strong{clinexp:} linexp90 - linexp88
#' \item \strong{clchexp:} lchexp90 - lchexp88
#' \item \strong{cincshr:} incshr90 - incshr88
#' \item \strong{win88:} =1 by definition
#' \item \strong{win90:} =1 if inc. wins, 1990
#' \item \strong{cwin:} win90 - win88
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(vote2)
"vote2"
#' voucher
#'
#' Wooldridge Source: Rouse, C.E. (1998), “Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program,” Quarterly Journal of Economics 113, 553-602. Professor Rouse kindly provided the original data set from her paper. Data loads lazily.
#'
#' @section Notes: This is a condensed version of the data set used by Professor Rouse. The original data set had missing information on many variables, including post-policy and pre-policy test scores. I did not impute any missing data and have dropped observations that were unusable without filling in missing data. There are 990 students in the current data set but pre-policy test scores are available for only 328 of them. This is a good example of where eligibility for a program is randomized but participation need not be. In addition, even if we look at just the effect of eligibility (captured in the variable selectyrs) on the math test score (mnce), we need to confront the fact that attrition (students leaving the district) can bias the results. Controlling for the pre-policy test score, mnce90, can help, but at the cost of losing two-thirds of the observations. A simple regression of mnce on selectyrs followed by a multiple regression that adds mnce90 as a control is informative. The selectyrs dummy variables can be used as instrumental variables for the choiceyrs variable to try to estimate the effect of actually participating in the program (rather than estimating the so-called intention-to-treat effect). Computer Exercise C15.11 steps through the details.
#'
#' Used in Text: pages 550-551
#'
#' @docType data
#'
#' @usage data('voucher')
#'
#' @format A data.frame with 990 observations on 19 variables:
#' \itemize{
#' \item \strong{studyid:} student identifier
#' \item \strong{black:} = 1 if African-American
#' \item \strong{hispanic:} = 1 if Hispanic
#' \item \strong{female:} = 1 if female
#' \item \strong{appyear:} year of first application: 90 to 93
#' \item \strong{mnce:} math NCE test score, 1994
#' \item \strong{select:} = 1 if ever selected to attend choice school
#' \item \strong{choice:} = 1 if attending choice school, 1994
#' \item \strong{selectyrs:} years selected to attend choice school
#' \item \strong{choiceyrs:} years attended choice school
#' \item \strong{mnce90:} mnce in 1990
#' \item \strong{selectyrs1:} = 1 if selectyrs == 1
#' \item \strong{selectyrs2:} = 1 if selectyrs == 2
#' \item \strong{selectyrs3:} = 1 if selectyrs == 3
#' \item \strong{selectyrs4:} = 1 if selectyrs == 4
#' \item \strong{choiceyrs1:} = 1 if choiceyrs == 1
#' \item \strong{choiceyrs2:} = 1 if choiceyrs == 2
#' \item \strong{choiceyrs3:} = 1 if choiceyrs == 3
#' \item \strong{choiceyrs4:} = 1 if choiceyrs == 4
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(voucher)
"voucher"
#' wage1
#'
#' Wooldridge Source: These are data from the 1976 Current Population Survey, collected by Henry Farber when he and I were colleagues at MIT in 1988. Data loads lazily.
#'
#' @section Notes: Barry Murphy, of the University of Portsmouth in the UK, has pointed out that for several observations the values for exper and tenure are in logical conflict. In particular, for some workers the number of years with current employer (tenure) is greater than overall work experience (exper). At least some of these conflicts are due to the definition of exper as “potential” work experience, but probably not all. Nevertheless, I am using the data set as it was supplied to me.
#'
#' Used in Text: pages 7, 17, 33-34, 37, 76, 91, 125, 183, 194-195, 220, 231, 234, 235-236, 240-241, 243-244, 263, 272, 326, 678
#'
#' @docType data
#'
#' @usage data('wage1')
#'
#' @format A data.frame with 526 observations on 24 variables:
#' \itemize{
#' \item \strong{wage:} average hourly earnings
#' \item \strong{educ:} years of education
#' \item \strong{exper:} years potential experience
#' \item \strong{tenure:} years with current employer
#' \item \strong{nonwhite:} =1 if nonwhite
#' \item \strong{female:} =1 if female
#' \item \strong{married:} =1 if married
#' \item \strong{numdep:} number of dependents
#' \item \strong{smsa:} =1 if live in SMSA
#' \item \strong{northcen:} =1 if live in north central U.S.
#' \item \strong{south:} =1 if live in southern region
#' \item \strong{west:} =1 if live in western region
#' \item \strong{construc:} =1 if work in construc. indus.
#' \item \strong{ndurman:} =1 if in nondur. manuf. indus.
#' \item \strong{trcommpu:} =1 if in trans, commun, pub ut
#' \item \strong{trade:} =1 if in wholesale or retail
#' \item \strong{services:} =1 if in services indus.
#' \item \strong{profserv:} =1 if in prof. serv. indus.
#' \item \strong{profocc:} =1 if in profess. occupation
#' \item \strong{clerocc:} =1 if in clerical occupation
#' \item \strong{servocc:} =1 if in service occupation
#' \item \strong{lwage:} log(wage)
#' \item \strong{expersq:} exper^2
#' \item \strong{tenursq:} tenure^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(wage1)
"wage1"
#' wage2
#'
#' Wooldridge Source: M. Blackburn and D. Neumark (1992), “Unobserved Ability, Efficiency Wages, and Interindustry Wage Differentials,” Quarterly Journal of Economics 107, 1421-1436. Professor Neumark kindly provided the data, of which I used just the data for 1980. Data loads lazily.
#'
#' @section Notes: As with WAGE1.RAW, there are some clear inconsistencies among the variables tenure, exper, and age. I have not been able to track down the causes, and so any changes would be effectively arbitrary. Instead, I am using the data as provided by the authors of the above QJE article.
#'
#' Used in Text: pages 64, 106, 111, 165, 218-219, 220-221, 262, 310-312, 338, 519-520, 534, 546-547, 549, 678
#'
#' @docType data
#'
#' @usage data('wage2')
#'
#' @format A data.frame with 935 observations on 17 variables:
#' \itemize{
#' \item \strong{wage:} monthly earnings
#' \item \strong{hours:} average weekly hours
#' \item \strong{IQ:} IQ score
#' \item \strong{KWW:} knowledge of world work score
#' \item \strong{educ:} years of education
#' \item \strong{exper:} years of work experience
#' \item \strong{tenure:} years with current employer
#' \item \strong{age:} age in years
#' \item \strong{married:} =1 if married
#' \item \strong{black:} =1 if black
#' \item \strong{south:} =1 if live in south
#' \item \strong{urban:} =1 if live in SMSA
#' \item \strong{sibs:} number of siblings
#' \item \strong{brthord:} birth order
#' \item \strong{meduc:} mother's education
#' \item \strong{feduc:} father's education
#' \item \strong{lwage:} natural log of wage
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(wage2)
"wage2"
#' wagepan
#'
#' Wooldridge Source: F. Vella and M. Verbeek (1998), “Whose Wages Do Unions Raise? A Dynamic Model of Unionism and Wage Rate Determination for Young Men,” Journal of Applied Econometrics 13, 163-183. I obtained the data from the Journal of Applied Econometrics data archive at http://qed.econ.queensu.ca/jae/. This is generally a nice resource for undergraduates looking to replicate or extend a published study. Data loads lazily.
#'
#' Used in Text: pages 480, 494-495, 505
#'
#' @docType data
#'
#' @usage data('wagepan')
#'
#' @format A data.frame with 4360 observations on 44 variables:
#' \itemize{
#' \item \strong{nr:} person identifier
#' \item \strong{year:} 1980 to 1987
#' \item \strong{agric:} =1 if in agriculture
#' \item \strong{black:} =1 if black
#' \item \strong{bus:}
#' \item \strong{construc:} =1 if in construction
#' \item \strong{ent:}
#' \item \strong{exper:} labor mkt experience
#' \item \strong{fin:}
#' \item \strong{hisp:} =1 if Hispanic
#' \item \strong{poorhlth:} =1 if in poor health
#' \item \strong{hours:} annual hours worked
#' \item \strong{manuf:} =1 if in manufacturing
#' \item \strong{married:} =1 if married
#' \item \strong{min:}
#' \item \strong{nrthcen:} =1 if north central
#' \item \strong{nrtheast:} =1 if north east
#' \item \strong{occ1:}
#' \item \strong{occ2:}
#' \item \strong{occ3:}
#' \item \strong{occ4:}
#' \item \strong{occ5:}
#' \item \strong{occ6:}
#' \item \strong{occ7:}
#' \item \strong{occ8:}
#' \item \strong{occ9:}
#' \item \strong{per:}
#' \item \strong{pro:}
#' \item \strong{pub:}
#' \item \strong{rur:}
#' \item \strong{south:} =1 if south
#' \item \strong{educ:} years of schooling
#' \item \strong{tra:}
#' \item \strong{trad:}
#' \item \strong{union:} =1 if in union
#' \item \strong{lwage:} log(wage)
#' \item \strong{d81:} =1 if year == 1981
#' \item \strong{d82:} =1 if year == 1982
#' \item \strong{d83:} =1 if year == 1983
#' \item \strong{d84:} =1 if year == 1984
#' \item \strong{d85:} =1 if year == 1985
#' \item \strong{d86:} =1 if year == 1986
#' \item \strong{d87:} =1 if year == 1987
#' \item \strong{expersq:} exper^2
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(wagepan)
"wagepan"
#' wageprc
#'
#' Wooldridge Source: Economic Report of the President, various years. Data loads lazily.
#'
#' @section Notes: These monthly data run from January 1964 through October 1987. The consumer price index averages to 100 in 1967.
#'
#' Used in Text: pages 405, 444-445, 671.
#'
#' @docType data
#'
#' @usage data('wageprc')
#'
#' @format A data.frame with 286 observations on 20 variables:
#' \itemize{
#' \item \strong{price:} consumer price index
#' \item \strong{wage:} nominal hourly wage
#' \item \strong{t:} time trend = 1, 2 , 3, ...
#' \item \strong{lprice:} log(price)
#' \item \strong{lwage:} log(wage)
#' \item \strong{gprice:} lprice - lprice[_n-1]
#' \item \strong{gwage:} lwage - lwage[_n-1]
#' \item \strong{gwage_1:} gwage[_n-1]
#' \item \strong{gwage_2:} gwage[_n-2]
#' \item \strong{gwage_3:} gwage[_n-3]
#' \item \strong{gwage_4:} gwage[_n-4]
#' \item \strong{gwage_5:} gwage[_n-5]
#' \item \strong{gwage_6:} gwage[_n-6]
#' \item \strong{gwage_7:} gwage[_n-7]
#' \item \strong{gwage_8:} gwage[_n-8]
#' \item \strong{gwage_9:} gwage[_n-9]
#' \item \strong{gwage_10:} gwage[_n-10]
#' \item \strong{gwage_11:} gwage[_n-11]
#' \item \strong{gwage_12:} gwage[_n-12]
#' \item \strong{gprice_1:} gprice[_n-1]
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(wageprc)
"wageprc"
#' wine
#'
#' Wooldridge Source: These data were reported in a New York Times article, December 28, 1994. Data loads lazily.
#'
#' @section Notes: The dependent variables deaths, heart, and liver can each be regressed on alcohol as nice simple regression examples. The conventional wisdom is that wine is good for the heart but not for the liver, something that is apparent in the regressions. Because the number of observations is small, this can be a good data set to illustrate calculation of the OLS estimates and statistics.
#'
#' Used in Text: not used
#'
#' @docType data
#'
#' @usage data('wine')
#'
#' @format A data.frame with 21 observations on 5 variables:
#' \itemize{
#' \item \strong{country:}
#' \item \strong{alcohol:} liters alcohol from wine, per capita
#' \item \strong{deaths:} deaths per 100,000
#' \item \strong{heart:} heart disease dths per 100,000
#' \item \strong{liver:} liver disease dths per 100,000
#' }
#' @source \url{https://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20b&product_isbn_issn=9781111531041}
#' @examples str(wine)
"wine"
## ---- echo = TRUE, eval = FALSE, warning=FALSE--------------------------------
# install.packages("wooldridge")
## ---- echo = TRUE, eval = TRUE, warning=FALSE, message=FALSE------------------
library(wooldridge)
## ---- echo=FALSE, eval=TRUE, warning=FALSE, message=FALSE---------------------
library(stargazer)
library(knitr)
## ---- message=FALSE, eval=FALSE-----------------------------------------------
# data("wage1")
#
# ?wage1
## ---- echo=FALSE--------------------------------------------------------------
plot(y = wage1$wage, x = wage1$educ, col = "darkgreen", pch = 21, bg = "lightgrey",
cex=1.25, xaxt="n", frame = FALSE, main = "Wages vs. Education, 1976",
xlab = "years of education", ylab = "Hourly wages")
axis(side = 1, at = c(0,6,12,18))
rug(wage1$wage, side=2, col="darkgreen")
## -----------------------------------------------------------------------------
log_wage_model <- lm(lwage ~ educ, data = wage1)
## ---- echo = TRUE, eval = FALSE, warning=FALSE--------------------------------
# summary(log_wage_model)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html", log_wage_model, single.row = TRUE, header = FALSE, digits = 5)
## ---- echo=FALSE--------------------------------------------------------------
plot(y = wage1$lwage, x = wage1$educ, main = "A Log Wage Equation",
col = "darkgreen", pch = 21, bg = "lightgrey", cex=1.25,
xlab = "years of education", ylab = "log of average hourly wages",
xaxt="n", frame = FALSE)
axis(side = 1, at = c(0,6,12,18))
abline(log_wage_model, col = "blue", lwd=2)
rug(wage1$lwage, side=2, col="darkgreen")
## ---- eval=FALSE--------------------------------------------------------------
# ?wage1
## ---- fig.height=3, echo=FALSE------------------------------------------------
par(mfrow=c(1,3))
plot(y = wage1$lwage, x = wage1$educ, col="darkgreen", xaxt="n", frame = FALSE, main = "years of education", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Hourly wages", cex=1.25)
axis(side = 1, at = c(0,6,12,18))
abline(lm(lwage ~ educ, data=wage1), col = "darkblue", lwd=2)
plot(y = wage1$lwage, x = wage1$exper, col="darkgreen", xaxt="n", frame = FALSE, main = "years of experience", xlab = "", ylab = "")
axis(side = 1, at = c(0,12.5,25,37.5,50))
abline(lm(lwage ~ exper, data=wage1), col = "darkblue", lwd=2)
plot(y = wage1$lwage, x = wage1$tenure, col="darkgreen", xaxt="n", frame = FALSE, main = "years with employer", xlab = "", ylab = "")
axis(side = 1, at = c(0,11,22,33,44))
abline(lm(lwage ~ tenure, data=wage1), col = "darkblue", lwd=2)
## -----------------------------------------------------------------------------
hourly_wage_model <- lm(lwage ~ educ + exper + tenure, data = wage1)
## ---- eval=FALSE--------------------------------------------------------------
# coefficients(hourly_wage_model)
## ---- echo=FALSE--------------------------------------------------------------
kable(coefficients(hourly_wage_model), digits=4, col.names = "Coefficients", align = 'l')
## ---- echo=FALSE--------------------------------------------------------------
barplot(sort(100*hourly_wage_model$coefficients[-1]), horiz=TRUE, las=1,
ylab = " ", main = "Coefficients of Hourly Wage Equation")
## ---- eval=FALSE--------------------------------------------------------------
# summary(hourly_wage_model)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html", hourly_wage_model, single.row = TRUE, header = FALSE, digits=5)
## ---- eval=FALSE--------------------------------------------------------------
# summary(hourly_wage_model)$coefficients
## ---- echo=FALSE--------------------------------------------------------------
kable(summary(hourly_wage_model)$coefficients, align="l", digits=5)
## ---- fig.height=8, eval=FALSE, echo=FALSE------------------------------------
# par(mfrow=c(2,2))
#
# plot(y = hourly_wage_model$residuals, x = hourly_wage_model$fitted.values , col="darkgreen", xaxt="n",
# frame = FALSE, main = "Fitted Values", xlab = "", ylab = "")
# mtext(side=2, line=2.5, "Model Residuals", cex=1.25)
# abline(0, 0, col = "darkblue", lty=2, lwd=2)
#
# plot(y = hourly_wage_model$residuals, x = wage1$educ, col="darkgreen", xaxt="n",
# frame = FALSE, main = "years of education", xlab = "", ylab = "")
# axis(side = 1, at = c(0,6,12,18))
# abline(0, 0, col = "darkblue", lty=2, lwd=2)
#
# plot(y = hourly_wage_model$residuals, x = wage1$exper, col="darkgreen", xaxt="n",
# frame = FALSE, main = "years of experience", xlab = "", ylab = "")
# mtext(side=2, line=2.5, "Model Residuals", cex=1.25)
# axis(side = 1, at = c(0,12.5,25,37.5,50))
# abline(0, 0, col = "darkblue", lty=2, lwd=2)
#
# plot(y = hourly_wage_model$residuals, x = wage1$tenure, col="darkgreen", xaxt="n",
# frame = FALSE, main = "years with employer", xlab = "", ylab = "")
# axis(side = 1, at = c(0,11,22,33,44))
# abline(0, 0, col = "darkblue", lty=2, lwd=2)
## ---- echo=FALSE--------------------------------------------------------------
barplot(sort(summary(hourly_wage_model)$coefficients[-1, "t value"]), horiz=TRUE, las=1,
ylab = " ", main = "t statistics of Hourly Wage Equation")
## ---- echo = TRUE, eval = TRUE, warning=FALSE, message=FALSE------------------
data("jtrain")
## ---- echo = TRUE, eval = FALSE, warning=FALSE--------------------------------
# ?jtrain
## -----------------------------------------------------------------------------
jtrain_subset <- subset(jtrain, subset = (year == 1987 & union == 0),
select = c(year, union, lscrap, hrsemp, lsales, lemploy))
## -----------------------------------------------------------------------------
sum(is.na(jtrain_subset))
## -----------------------------------------------------------------------------
jtrain_clean <- na.omit(jtrain_subset)
## ---- echo=FALSE, fig.height=3------------------------------------------------
par(mfrow=c(1,3))
point_size <- 1.75
plot(y = jtrain_clean$lscrap, x = jtrain_clean$hrsemp, frame = FALSE,
main = "Total (hours/employees) trained", ylab = "", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
mtext(side=2, line=2, "Log(scrap rate)", cex=1.25)
abline(lm(lscrap ~ hrsemp, data=jtrain_clean), col = "blue", lwd=2)
plot(y = jtrain_clean$lscrap, x = jtrain_clean$lsales, frame = FALSE, main = "Log(annual sales $)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lscrap ~ lsales, data=jtrain_clean), col = "blue", lwd=2)
plot(y = jtrain_clean$lscrap, x = jtrain_clean$lemploy, frame = FALSE, main = "Log(# employees at plant)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lscrap ~ lemploy, data=jtrain_clean), col = "blue", lwd=2)
## -----------------------------------------------------------------------------
linear_model <- lm(lscrap ~ hrsemp + lsales + lemploy, data = jtrain_clean)
## ---- eval=FALSE, warning=FALSE, message=FALSE--------------------------------
# summary(linear_model)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html", linear_model, single.row = TRUE, header = FALSE, digits=5)
## ---- echo=FALSE, eval=FALSE--------------------------------------------------
# #Plot the coefficients, representing the impact of each variable on $log($`scrap`$)$ for a quick comparison. As you can observe, for some variables, the confidence intervals are wider than others.
# coefficient <- coef(linear_model)[-1]
# confidence <- confint(linear_model, level = 0.95)[-1,]
#
# graph <- drop(barplot(coefficient, ylim = range(c(confidence)),
# main = "Coefficients & 95% C.I. of variables on Firm Scrap Rates"))
#
# arrows(graph, coefficient, graph, confidence[,1], angle=90, length=0.55, col="blue", lwd=2)
# arrows(graph, coefficient, graph, confidence[,2], angle=90, length=0.55, col="blue", lwd=2)
#
## -----------------------------------------------------------------------------
data("hprice3")
## ---- echo=FALSE, fig.align='center'------------------------------------------
plot(y = hprice3$price, x = hprice3$dist, main = " ", xlab = "Distance to Incinerator in feet", ylab = "Selling Price", frame = FALSE, pch = 21, bg = "lightgrey")
abline(lm(price ~ dist, data=hprice3), col = "blue", lwd=2)
## -----------------------------------------------------------------------------
price_dist_model <- lm(lprice ~ ldist, data = hprice3)
## -----------------------------------------------------------------------------
price_area_model <- lm(lprice ~ ldist + larea, data = hprice3)
## ---- eval=FALSE--------------------------------------------------------------
# summary(price_dist_model)
# summary(price_area_model)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",price_dist_model, price_area_model, single.row = TRUE, header = FALSE, digits=5)
## ---- echo=FALSE--------------------------------------------------------------
par(mfrow=c(1,2))
point_size <- 0.80
plot(y = hprice3$lprice, x = hprice3$ldist, frame = FALSE,
main = "Log(distance from incinerator)", ylab = "", xlab="",
pch = 21, bg = "lightgrey", cex=point_size)
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
abline(lm(lprice ~ ldist, data=hprice3), col = "blue", lwd=2)
plot(y = hprice3$lprice, x = hprice3$larea, frame = FALSE, main = "Log(square footage of house)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lprice ~ larea, data=hprice3), col = "blue", lwd=2)
## ---- message=FALSE, eval=FALSE-----------------------------------------------
# data("hprice2")
# ?hprice2
## -----------------------------------------------------------------------------
housing_level <- lm(price ~ nox + crime + rooms + dist + stratio, data = hprice2)
## -----------------------------------------------------------------------------
housing_standardized <- lm(scale(price) ~ 0 + scale(nox) + scale(crime) + scale(rooms) + scale(dist) + scale(stratio), data = hprice2)
## ---- eval=FALSE--------------------------------------------------------------
# summary(housing_level)
# summary(housing_standardized)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",housing_level, housing_standardized, single.row = TRUE, header = FALSE, digits=5)
## -----------------------------------------------------------------------------
housing_model_4.5 <- lm(lprice ~ lnox + log(dist) + rooms + stratio, data = hprice2)
housing_model_6.2 <- lm(lprice ~ lnox + log(dist) + rooms + I(rooms^2) + stratio,
data = hprice2)
## ---- eval=FALSE--------------------------------------------------------------
# summary(housing_model_4.5)
# summary(housing_model_6.2)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html", housing_model_4.5 , housing_model_6.2, single.row = TRUE, header = FALSE, digits=5)
## -----------------------------------------------------------------------------
beta_1 <- summary(housing_model_6.2)$coefficients["rooms",1]
beta_2 <- summary(housing_model_6.2)$coefficients["I(rooms^2)",1]
turning_point <- abs(beta_1 / (2*beta_2))
print(turning_point)
## -----------------------------------------------------------------------------
Rooms <- c(min(hprice2$rooms), 4, turning_point, 5, 5.5, 6.45, 7.5, max(hprice2$rooms))
Percent.Change <- 100*(beta_1 + 2*beta_2*Rooms)
kable(data.frame(Rooms, Percent.Change))
## ---- echo=FALSE--------------------------------------------------------------
from <- min(hprice2$rooms)
to <- max(hprice2$rooms)
rooms <- seq(from = from, to = to, by = ((to - from)/(NROW(hprice2)-1)))
quadratic <- abs(100*summary(housing_model_6.2)$coefficients["rooms",1] + 200*summary(housing_model_6.2)$coefficients["I(rooms^2)",1]*rooms)
housing_model_frame <- model.frame(housing_model_6.2)
housing_sq <- abs(beta_1*housing_model_frame[,"rooms"]) +
beta_2*housing_model_frame[,"I(rooms^2)"]
## ---- echo=FALSE--------------------------------------------------------------
rooms_interaction <- lm(lprice ~ rooms + I(rooms^2), data = hprice2)
par(mfrow=c(1,2))
plot(y = hprice2$lprice, x = hprice2$rooms, xaxt="n", pch = 21, bg = "lightgrey",
frame = FALSE, main = "lprice ~ rooms", xlab = "Rooms", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
axis(side = 1, at = c(min(hprice2$rooms), 4, 5, 6, 7, 8, max(hprice2$rooms)))
abline(lm(lprice ~ rooms, data = hprice2), col="red", lwd=2.5)
plot(y = hprice2$lprice, x = hprice2$rooms, xaxt="n", pch = 21, bg = "lightgrey",
frame = FALSE, main = "lprice ~ rooms + I(rooms^2)", xlab = "Rooms", ylab = " ")
axis(side = 1, at = c(min(hprice2$rooms), 4, 5, 6, 7, 8, max(hprice2$rooms)))
lines(sort(hprice2$rooms), sort(fitted(rooms_interaction)), col = "red", lwd=2.5)
## -----------------------------------------------------------------------------
data("hprice1")
## ---- eval=FALSE--------------------------------------------------------------
# ?hprice1
## ---- fig.height=8, eval=FALSE, echo=FALSE------------------------------------
# par(mfrow=c(2,2))
#
# palette(rainbow(6, alpha = 0.8))
# plot(y = hprice1$lprice, x = hprice1$llotsize, col=hprice1$bdrms, pch = 19,
# frame = FALSE, main = "Log(lot size)", xlab = "", ylab = "")
# mtext(side=2, line=2, "Log( selling price )", cex=1.25)
#
#
# plot(y = hprice1$lprice, x = hprice1$lsqrft, col=hprice1$bdrms, pch=19,
# frame = FALSE, main = "Log(home size)", xlab = "Rooms", ylab = " ")
# legend(8, 5.8, sort(unique(hprice1$bdrms)), col = 1:length(hprice1$bdrms),
# pch=19, title = "bdrms")
#
#
# hprice1$colonial <- as.factor(hprice1$colonial)
#
# palette(rainbow(2, alpha = 0.8))
# plot(y = hprice1$lprice, x = hprice1$llotsize, col=hprice1$colonial, pch = 19, bg = "lightgrey",
# frame = FALSE, main = "Log(lot size)", xlab = "", ylab = "")
# mtext(side=2, line=2, "Log( selling price )", cex=1.25)
#
#
# plot(y = hprice1$lprice, x = hprice1$lsqrft, col=hprice1$colonial, pch=19,
# frame = FALSE, main = "Log(home size)", xlab = "Rooms", ylab = " ")
# legend(8, 5.25, unique(hprice1$colonial), col=1:length(hprice1$colonial), pch=19, title = "colonial")
## -----------------------------------------------------------------------------
housing_qualitative <- lm(lprice ~ llotsize + lsqrft + bdrms + colonial, data = hprice1)
## ---- eval=FALSE--------------------------------------------------------------
# summary(housing_qualitative)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",housing_qualitative, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE-----------------------------------------------------------
data("gpa1")
gpa1$parcoll <- as.integer(gpa1$fathcoll==1 | gpa1$mothcoll==1)
GPA_OLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1)
## -----------------------------------------------------------------------------
weights <- GPA_OLS$fitted.values * (1-GPA_OLS$fitted.values)
GPA_WLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1, weights = 1/weights)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",GPA_OLS, GPA_WLS, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE-----------------------------------------------------------
data("rdchem")
all_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem)
## ---- echo=FALSE--------------------------------------------------------------
plot_title <- "FIGURE 9.1: Scatterplot of R&D intensity against firm sales"
x_axis <- "firm sales (in millions of dollars)"
y_axis <- "R&D as a percentage of sales"
plot(rdintens ~ sales, pch = 21, bg = "lightgrey", data = rdchem, main = plot_title, xlab = x_axis, ylab = y_axis)
## -----------------------------------------------------------------------------
smallest_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem,
subset = (sales < max(sales)))
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",all_rdchem, smallest_rdchem, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE-----------------------------------------------------------
data("intdef") # load data
# load eXtensible Time Series package.
# xts is excellent for time series plots and
# properly indexing time series.
library(xts)
# create xts object from data.frame
# First, index year as yearmon class of monthly data.
# Note: I add 11/12 to set the month to December, end of year.
index <- zoo::as.yearmon(intdef$year + 11/12)
# Next, create the xts object, ordering by the index above.
intdef.xts <- xts(intdef[ ,-1], order.by = index)
# extract 3-month Tbill, inflation, and deficit data
intdef.xts <- intdef.xts[ ,c("i3", "inf", "def")]
# rename with clearer names
colnames(intdef.xts) <- c("Tbill3mo", "cpi", "deficit")
# plot the object, add a title, and place legend at top left.
plot(x = intdef.xts,
main = "Inflation, Deficits, and Interest Rates",
legend.loc = "topleft")
# Run a Linear regression model
tbill_model <- lm(Tbill3mo ~ cpi + deficit, data = intdef.xts)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",tbill_model, single.row = TRUE, header = FALSE, digits=5)
## ----eval=FALSE, message=FALSE, warning=FALSE---------------------------------
# # DO NOT RUN
# library(quantmod)
#
# # Tbill, 3 month
# getSymbols("TB3MS", src = "FRED")
# # convert to annual observations and convert index to type `yearmon`.
# TB3MS <- to.yearly(TB3MS, OHLC=FALSE, drop.time = TRUE)
# index(TB3MS) <- zoo::as.yearmon(index(TB3MS))
#
# # Inflation
# getSymbols("FPCPITOTLZGUSA", src = "FRED")
# # Convert the index to yearmon and shift FRED's Jan 1st to Dec
# index(FPCPITOTLZGUSA) <- zoo::as.yearmon(index(FPCPITOTLZGUSA)) + 11/12
# # Rename and update column names
# inflation <- FPCPITOTLZGUSA
# colnames(inflation) <- "inflation"
#
# ## Deficit, percent of GDP: Federal outlays - federal receipts
# # Download outlays
# getSymbols("FYFRGDA188S", src = "FRED")
# # Lets move the index from Jan 1st to Dec 30th/31st
# index(FYFRGDA188S) <- zoo::as.yearmon(index(FYFRGDA188S)) + 11/12
# # Rename and update column names
# outlays <- FYFRGDA188S
# colnames(outlays) <- "outlays"
#
# # Download receipts
# getSymbols("FYONGDA188S", src = "FRED")
# # Lets move the index from Jan 1st to Dec 30th/31st
# index(FYONGDA188S) <- zoo::as.yearmon(index(FYONGDA188S)) + 11/12
# # Rename and update column names
# receipts <- FYONGDA188S
# colnames(receipts) <- "receipts"
## ----eval=FALSE, message=FALSE, warning=FALSE---------------------------------
# # DO NOT RUN
# # create deficits from outlays - receipts
# # xts objects align on their time index, so subtraction matches by date
# deficit <- outlays - receipts
# colnames(deficit) <- "deficit"
#
# # Merge and remove leading and trailing NAs for a balanced data matrix
# intdef_updated <- merge(TB3MS, inflation, deficit)
# intdef_updated <- zoo::na.trim(intdef_updated)
#
# #Plot all
# plot(intdef_updated,
# main = "T-bill (3mo rate), inflation, and deficit (% of GDP)",
#      legend.loc = "topright")
## ----eval=FALSE---------------------------------------------------------------
# # DO NOT RUN
# updated_model <- lm(TB3MS ~ inflation + deficit, data = intdef_updated)
## ---- eval=FALSE, results='asis', echo=FALSE, warning=FALSE, message=FALSE----
# #DO NOT RUN
# updated_model <- lm(TB3MS ~ inflation + deficit, data = intdef_updated)
#
# stargazer(type = "html", updated_model, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE-----------------------------------------------------------
data("earns")
wage_time <- lm(lhrwage ~ loutphr + t, data = earns)
## -----------------------------------------------------------------------------
wage_diff <- lm(diff(lhrwage) ~ diff(loutphr), data = earns)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",wage_time, wage_diff, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE, eval=FALSE-----------------------------------------------
# data("nyse")
# ?nyse
## -----------------------------------------------------------------------------
return_AR1 <- lm(return ~ return_1, data = nyse)
## -----------------------------------------------------------------------------
return_mu <- residuals(return_AR1)
mu2_hat_model <- lm(return_mu^2 ~ return_1, data = return_AR1$model)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",return_AR1, mu2_hat_model, single.row = TRUE, header = FALSE, digits=5)
## -----------------------------------------------------------------------------
mu2_hat <- return_mu[-1]^2
mu2_hat_1 <- return_mu[-NROW(return_mu)]^2
arch_model <- lm(mu2_hat ~ mu2_hat_1)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",arch_model, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE, eval=FALSE-----------------------------------------------
# data("traffic1")
# ?traffic1
## -----------------------------------------------------------------------------
DD_model <- lm(cdthrte ~ copen + cadmn, data = traffic1)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",DD_model, single.row = TRUE, header = FALSE, digits=5)
## ---- message=FALSE, eval=FALSE-----------------------------------------------
# data("phillips")
# ?phillips
## -----------------------------------------------------------------------------
phillips_train <- subset(phillips, year <= 1996)
unem_AR1 <- lm(unem ~ unem_1, data = phillips_train)
## -----------------------------------------------------------------------------
unem_inf_VAR1 <- lm(unem ~ unem_1 + inf_1, data = phillips_train)
## ---- results='asis', echo=FALSE, warning=FALSE, message=FALSE----------------
stargazer(type = "html",unem_AR1, unem_inf_VAR1, single.row = TRUE, header = FALSE, digits=5)
## ---- warning=FALSE, message=FALSE, echo=TRUE---------------------------------
phillips_test <- subset(phillips, year >= 1997)
AR1_forecast <- predict.lm(unem_AR1, newdata = phillips_test)
VAR1_forecast <- predict.lm(unem_inf_VAR1, newdata = phillips_test)
kable(cbind(phillips_test[ ,c("year", "unem")], AR1_forecast, VAR1_forecast))
---
title: "Introductory Econometrics Examples"
author: "Justin M Shea"
date: ' '
output:
rmarkdown::html_vignette:
toc: yes
vignette: >
%\VignetteIndexEntry{Introductory Econometrics Examples}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
\newpage
## Introduction
This vignette reproduces examples from various chapters of _Introductory Econometrics: A Modern Approach, 7e_ by Jeffrey M. Wooldridge. Each example illustrates how to load data, build econometric models, and compute estimates with **R**.
In addition, the **Appendix** cites a few sources using **R** for econometrics. Of note, in 2020 Florian Heiss published a 2nd edition of [_Using R for Introductory Econometrics_](http://www.urfie.net/); it is excellent. The Heiss text is a companion to Wooldridge for `R` users, offering an in-depth treatment with several worked examples from each chapter. Indeed, his examples use this `wooldridge` package as well.
Now, install and load the `wooldridge` package and let's get started!
```{r, echo = TRUE, eval = FALSE, warning=FALSE}
install.packages("wooldridge")
```
```{r, echo = TRUE, eval = TRUE, warning=FALSE, message=FALSE}
library(wooldridge)
```
```{r, echo=FALSE, eval=TRUE, warning=FALSE, message=FALSE}
library(stargazer)
library(knitr)
```
\newpage
## Chapter 2: The Simple Regression Model
### **`Example 2.10:` A Log Wage Equation**
Load the `wage1` data and check out the documentation.
```{r, message=FALSE, eval=FALSE}
data("wage1")
?wage1
```
The documentation indicates these are data from the 1976 Current Population Survey, collected by Henry Farber when he and Wooldridge were colleagues at MIT in 1988.
**$educ$:** years of education
**$wage$:** average hourly earnings
**$lwage$:** log of the average hourly earnings
First, make a scatter-plot of the two variables and look for possible patterns in the relationship between them.
```{r, echo=FALSE}
plot(y = wage1$wage, x = wage1$educ, col = "darkgreen", pch = 21, bg = "lightgrey",
cex=1.25, xaxt="n", frame = FALSE, main = "Wages vs. Education, 1976",
xlab = "years of education", ylab = "Hourly wages")
axis(side = 1, at = c(0,6,12,18))
rug(wage1$wage, side=2, col="darkgreen")
```
It appears that _**on average**_, more years of education leads to higher wages.
The example in the text is interested in the _return to another year of education_: the _**percentage**_ change in wages one can expect for each additional year of education. To estimate this, one models $log($`wage`$)$, which has already been computed in the data set and is defined as `lwage`.
The textbook provides excellent discussions around these topics, so please consult it.
Build a linear model to estimate the relationship between the _log of wage_ (`lwage`) and _education_ (`educ`).
$$\widehat{log(wage)} = \beta_0 + \beta_1educ$$
```{r}
log_wage_model <- lm(lwage ~ educ, data = wage1)
```
Print the `summary` of the results.
```{r, echo = TRUE, eval = FALSE, warning=FALSE}
summary(log_wage_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", log_wage_model, single.row = TRUE, header = FALSE, digits = 5)
```
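In a log-level model, multiplying the slope by 100 gives the approximate percentage change in wages for one more year of education. A quick sketch using the model object from above:

```{r}
# Approximate percent return to one more year of education:
# in a log-level model, 100 * beta_1 is the semi-elasticity
100 * coef(log_wage_model)["educ"]
```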
Plot the $log($`wage`$)$ vs `educ`. The blue line represents the least squares fit.
```{r, echo=FALSE}
plot(y = wage1$lwage, x = wage1$educ, main = "A Log Wage Equation",
col = "darkgreen", pch = 21, bg = "lightgrey", cex=1.25,
xlab = "years of education", ylab = "log of average hourly wages",
xaxt="n", frame = FALSE)
axis(side = 1, at = c(0,6,12,18))
abline(log_wage_model, col = "blue", lwd=2)
rug(wage1$lwage, side=2, col="darkgreen")
```
\newpage
## Chapter 3: Multiple Regression Analysis: Estimation
### **`Example 3.2:` Hourly Wage Equation**
Check the documentation for variable information
```{r, eval=FALSE}
?wage1
```
**$lwage$:** log of the average hourly earnings
**$educ$:** years of education
**$exper$:** years of potential experience
**$tenure$:** years with current employer
Plot the variables against `lwage` and compare their distributions
and slope ($\beta$) of the simple regression lines.
```{r, fig.height=3, echo=FALSE}
par(mfrow=c(1,3))
plot(y = wage1$lwage, x = wage1$educ, col="darkgreen", xaxt="n", frame = FALSE, main = "years of education", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Hourly wages", cex=1.25)
axis(side = 1, at = c(0,6,12,18))
abline(lm(lwage ~ educ, data=wage1), col = "darkblue", lwd=2)
plot(y = wage1$lwage, x = wage1$exper, col="darkgreen", xaxt="n", frame = FALSE, main = "years of experience", xlab = "", ylab = "")
axis(side = 1, at = c(0,12.5,25,37.5,50))
abline(lm(lwage ~ exper, data=wage1), col = "darkblue", lwd=2)
plot(y = wage1$lwage, x = wage1$tenure, col="darkgreen", xaxt="n", frame = FALSE, main = "years with employer", xlab = "", ylab = "")
axis(side = 1, at = c(0,11,22,33,44))
abline(lm(lwage ~ tenure, data=wage1), col = "darkblue", lwd=2)
```
Estimate the model regressing _log(wage)_ on _educ_, _exper_, and _tenure_.
$$\widehat{log(wage)} = \beta_0 + \beta_1educ + \beta_2exper + \beta_3tenure$$
```{r}
hourly_wage_model <- lm(lwage ~ educ + exper + tenure, data = wage1)
```
Print the estimated model coefficients:
```{r, eval=FALSE}
coefficients(hourly_wage_model)
```
```{r, echo=FALSE}
kable(coefficients(hourly_wage_model), digits=4, col.names = "Coefficients", align = 'l')
```
Plot the coefficients, representing percentage impact of each variable on $log($`wage`$)$ for a quick comparison.
```{r, echo=FALSE}
barplot(sort(100*hourly_wage_model$coefficients[-1]), horiz=TRUE, las=1,
ylab = " ", main = "Coefficients of Hourly Wage Equation")
```
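A fitted model can also be used for prediction. A minimal sketch for a hypothetical worker (the values below are illustrative, not from the text):

```{r}
# Predicted log(wage) for a hypothetical worker with 12 years of education,
# 5 years of experience, and 2 years of tenure
predict(hourly_wage_model, newdata = data.frame(educ = 12, exper = 5, tenure = 2))
```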
## Chapter 4: Multiple Regression Analysis: Inference
### **`Example 4.1` Hourly Wage Equation**
Using the same model estimated in **`example: 3.2`**, examine and compare the standard errors associated with each coefficient. As in the textbook, these appear in parentheses next to each associated coefficient.
```{r, eval=FALSE}
summary(hourly_wage_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", hourly_wage_model, single.row = TRUE, header = FALSE, digits=5)
```
For the years of experience variable, `exper`, use the coefficient and standard error
to compute the $t$ statistic:
$$t_{exper} = \frac{0.004121}{0.001723} = 2.391$$
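This ratio can be computed by hand from the stored coefficient matrix; a quick sketch:

```{r}
# t statistic = estimate / standard error, taken from the summary output
est <- summary(hourly_wage_model)$coefficients
est["exper", "Estimate"] / est["exper", "Std. Error"]
```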
Fortunately, `R` includes $t$ statistics in the `summary` of model diagnostics.
```{r, eval=FALSE}
summary(hourly_wage_model)$coefficients
```
```{r, echo=FALSE}
kable(summary(hourly_wage_model)$coefficients, align="l", digits=5)
```
```{r, fig.height=8, eval=FALSE, echo=FALSE}
par(mfrow=c(2,2))
plot(y = hourly_wage_model$residuals, x = hourly_wage_model$fitted.values , col="darkgreen", xaxt="n",
frame = FALSE, main = "Fitted Values", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Model Residuals", cex=1.25)
abline(0, 0, col = "darkblue", lty=2, lwd=2)
plot(y = hourly_wage_model$residuals, x = wage1$educ, col="darkgreen", xaxt="n",
frame = FALSE, main = "years of education", xlab = "", ylab = "")
axis(side = 1, at = c(0,6,12,18))
abline(0, 0, col = "darkblue", lty=2, lwd=2)
plot(y = hourly_wage_model$residuals, x = wage1$exper, col="darkgreen", xaxt="n",
frame = FALSE, main = "years of experience", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Model Residuals", cex=1.25)
axis(side = 1, at = c(0,12.5,25,37.5,50))
abline(0, 0, col = "darkblue", lty=2, lwd=2)
plot(y = hourly_wage_model$residuals, x = wage1$tenure, col="darkgreen", xaxt="n",
frame = FALSE, main = "years with employer", xlab = "", ylab = "")
axis(side = 1, at = c(0,11,22,33,44))
abline(0, 0, col = "darkblue", lty=2, lwd=2)
```
Plot the $t$ statistics for a visual comparison:
```{r, echo=FALSE}
barplot(sort(summary(hourly_wage_model)$coefficients[-1, "t value"]), horiz=TRUE, las=1,
ylab = " ", main = "t statistics of Hourly Wage Equation")
```
### **`Example 4.7` Effect of Job Training on Firm Scrap Rates**
Load the `jtrain` data set.
```{r, echo = TRUE, eval = TRUE, warning=FALSE, message=FALSE}
data("jtrain")
```
```{r, echo = TRUE, eval = FALSE, warning=FALSE}
?jtrain
```
From H. Holzer, R. Block, M. Cheatham, and J. Knott (1993), _Are Training Subsidies Effective? The Michigan Experience_, Industrial and Labor Relations Review 46, 625-636. The authors kindly provided the data.
**$year:$** 1987, 1988, or 1989
**$union:$** =1 if unionized
**$lscrap:$** Log(scrap rate per 100 items)
**$hrsemp:$** (total hours training) / (total employees trained)
**$lsales:$** Log(annual sales, $)
**$lemploy:$** Log(number of employees at plant)
First, use the `subset` function and its argument of the same name to return
observations from **1987** at non-union plants. At the same time, use
the `select` argument to return only the variables of interest for this problem.
```{r}
jtrain_subset <- subset(jtrain, subset = (year == 1987 & union == 0),
select = c(year, union, lscrap, hrsemp, lsales, lemploy))
```
Next, test for missing values. One can "eyeball" these with RStudio's `View`
function, but a more precise approach combines the `sum` and `is.na` functions
to return the total number of observations equal to `NA`.
```{r}
sum(is.na(jtrain_subset))
```
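When some variables have more missing values than others, a per-column count is more informative; a quick sketch:

```{r}
# Count NA values in each column separately
colSums(is.na(jtrain_subset))
```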
While `R`'s `lm` function automatically drops rows containing `NA` values, removing
them manually produces more clearly proportioned graphs for exploratory analysis.
Call the `na.omit` function to remove all missing values and assign the new
`data.frame` object the name **`jtrain_clean`**.
```{r}
jtrain_clean <- na.omit(jtrain_subset)
```
Use `jtrain_clean` to plot the variables of interest against `lscrap`. Visually
observe the respective distributions for each variable, and compare the slope
($\beta$) of the simple regression lines.
```{r, echo=FALSE, fig.height=3}
par(mfrow=c(1,3))
point_size <- 1.75
plot(y = jtrain_clean$lscrap, x = jtrain_clean$hrsemp, frame = FALSE,
main = "Total (hours/employees) trained", ylab = "", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
mtext(side=2, line=2, "Log(scrap rate)", cex=1.25)
abline(lm(lscrap ~ hrsemp, data=jtrain_clean), col = "blue", lwd=2)
plot(y = jtrain_clean$lscrap, x = jtrain_clean$lsales, frame = FALSE, main = "Log(annual sales $)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lscrap ~ lsales, data=jtrain_clean), col = "blue", lwd=2)
plot(y = jtrain_clean$lscrap, x = jtrain_clean$lemploy, frame = FALSE, main = "Log(# employees at plant)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lscrap ~ lemploy, data=jtrain_clean), col = "blue", lwd=2)
```
Now create the linear model regressing `hrsemp` (total hours training/total employees trained), `lsales` (log of annual sales), and `lemploy` (log of the number of employees), against `lscrap` (log of the scrap rate).
$$lscrap = \alpha + \beta_1 hrsemp + \beta_2 lsales + \beta_3 lemploy$$
```{r}
linear_model <- lm(lscrap ~ hrsemp + lsales + lemploy, data = jtrain_clean)
```
Finally, print the complete summary diagnostics of the model.
```{r, eval=FALSE, warning=FALSE, message=FALSE}
summary(linear_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", linear_model, single.row = TRUE, header = FALSE, digits=5)
```
```{r, echo=FALSE, eval=FALSE}
#Plot the coefficients, representing the impact of each variable on $log($`scrap`$)$ for a quick comparison. As you can observe, for some variables, the confidence intervals are wider than others.
coefficient <- coef(linear_model)[-1]
confidence <- confint(linear_model, level = 0.95)[-1,]
graph <- drop(barplot(coefficient, ylim = range(c(confidence)),
main = "Coefficients & 95% C.I. of variables on Firm Scrap Rates"))
arrows(graph, coefficient, graph, confidence[,1], angle=90, length=0.55, col="blue", lwd=2)
arrows(graph, coefficient, graph, confidence[,2], angle=90, length=0.55, col="blue", lwd=2)
```
## Chapter 5: Multiple Regression Analysis: OLS Asymptotics
### **`Example 5.1:` Housing Prices and Distance From an Incinerator**
Load the `hprice3` data set.
```{r}
data("hprice3")
```
**$lprice:$** Log(selling price)
**$ldist:$** Log(distance from house to incinerator, feet)
**$larea:$** Log(square footage of house)
Graph the prices of housing against distance from an incinerator:
```{r, echo=FALSE, fig.align='center'}
plot(y = hprice3$price, x = hprice3$dist, main = " ", xlab = "Distance to Incinerator in feet", ylab = "Selling Price", frame = FALSE, pch = 21, bg = "lightgrey")
abline(lm(price ~ dist, data=hprice3), col = "blue", lwd=2)
```
Next, model the $log($`price`$)$ against the $log($`dist`$)$ to estimate the percentage relationship between the two.
$$log(price) = \alpha + \beta_1 log(dist)$$
```{r}
price_dist_model <- lm(lprice ~ ldist, data = hprice3)
```
Create another model that controls for "quality" variables, such as square footage `area` per house.
$$log(price) = \alpha + \beta_1 log(dist) + \beta_2 log(area)$$
```{r}
price_area_model <- lm(lprice ~ ldist + larea, data = hprice3)
```
Compare the coefficients of both models. Notice that adding `area` improves the quality of the model, but also reduces the coefficient size of `dist`.
```{r, eval=FALSE}
summary(price_dist_model)
summary(price_area_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",price_dist_model, price_area_model, single.row = TRUE, header = FALSE, digits=5)
```
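The distance elasticities can also be pulled out directly for a side-by-side look; a quick sketch:

```{r}
# ldist coefficient (elasticity of price with respect to distance)
# in each specification
c(dist_only = coef(price_dist_model)["ldist"],
  with_area = coef(price_area_model)["ldist"])
```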
Graphing illustrates the larger coefficient for `area`.
```{r, echo=FALSE}
par(mfrow=c(1,2))
point_size <- 0.80
plot(y = hprice3$lprice, x = hprice3$ldist, frame = FALSE,
main = "Log(distance from incinerator)", ylab = "", xlab="",
pch = 21, bg = "lightgrey", cex=point_size)
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
abline(lm(lprice ~ ldist, data=hprice3), col = "blue", lwd=2)
plot(y = hprice3$lprice, x = hprice3$larea, frame = FALSE, main = "Log(square footage of house)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lprice ~ larea, data=hprice3), col = "blue", lwd=2)
```
\newpage
## Chapter 6: Multiple Regression: Further Issues
### **`Example 6.1:` Effects of Pollution on Housing Prices, Standardized**
Load the `hprice2` data and view the documentation.
```{r, message=FALSE, eval=FALSE}
data("hprice2")
?hprice2
```
Data from _Hedonic Housing Prices and the Demand for Clean Air_, by Harrison, D. and D.L. Rubinfeld, _Journal of Environmental Economics and Management_ 5, 81-102. Diego Garcia, a former Ph.D. student in economics at MIT, kindly provided these data, which he obtained from the book _Regression Diagnostics: Identifying Influential Data and Sources of Collinearity_, by D.A. Belsley, E. Kuh, and R. Welsch, 1990. New York: Wiley.
$price$: median housing price.
$nox$: nitrous oxide concentration; parts per 100 million.
$crime$: number of reported crimes per capita.
$rooms$: average number of rooms in houses in the community.
$dist$: weighted distance of the community to 5 employment centers.
$stratio$: average student-teacher ratio of schools in the community.
$$price = \beta_0 + \beta_1nox + \beta_2crime + \beta_3rooms + \beta_4dist + \beta_5stratio + \mu$$
Estimate the usual `lm` model.
```{r}
housing_level <- lm(price ~ nox + crime + rooms + dist + stratio, data = hprice2)
```
Estimate the same model, but standardize the coefficients by wrapping each variable
in R's `scale` function:
$$\widehat{zprice} = \beta_1znox + \beta_2zcrime + \beta_3zrooms + \beta_4zdist + \beta_5zstratio$$
```{r}
housing_standardized <- lm(scale(price) ~ 0 + scale(nox) + scale(crime) + scale(rooms) + scale(dist) + scale(stratio), data = hprice2)
```
Compare the results. The standardized coefficients are measured in standard deviations, so their magnitudes can be compared directly across variables.
```{r, eval=FALSE}
summary(housing_level)
summary(housing_standardized)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",housing_level, housing_standardized, single.row = TRUE, header = FALSE, digits=5)
```
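As a sanity check on the standardized specification, each standardized coefficient equals the corresponding level coefficient rescaled by $sd(x_j)/sd(y)$. A minimal self-contained sketch with simulated data (not the `hprice2` variables) verifies the identity:

```r
# Simulated data: the identity holds for any multiple regression
set.seed(1)
x1 <- rnorm(200); x2 <- rnorm(200)
y  <- 1 + 2*x1 - 3*x2 + rnorm(200)
level <- lm(y ~ x1 + x2)
std   <- lm(scale(y) ~ 0 + scale(x1) + scale(x2))
# standardized coefficient = level coefficient * sd(x)/sd(y)
rescaled <- coef(level)[-1] * c(sd(x1), sd(x2)) / sd(y)
all.equal(unname(coef(std)), unname(rescaled))  # TRUE
```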
\newpage
### **`Example 6.2:` Effects of Pollution on Housing Prices, Quadratic Term**
Modify the housing model from **`example 4.5`**, adding a quadratic term in _rooms_:
$$log(price) = \beta_0 + \beta_1log(nox) + \beta_2log(dist) + \beta_3rooms + \beta_4rooms^2 + \beta_5stratio + \mu$$
```{r}
housing_model_4.5 <- lm(lprice ~ lnox + log(dist) + rooms + stratio, data = hprice2)
housing_model_6.2 <- lm(lprice ~ lnox + log(dist) + rooms + I(rooms^2) + stratio,
data = hprice2)
```
Compare the results of the model with and without the quadratic term.
```{r, eval=FALSE}
summary(housing_model_4.5)
summary(housing_model_6.2)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", housing_model_4.5 , housing_model_6.2, single.row = TRUE, header = FALSE, digits=5)
```
Estimate the turning point at which the partial effect of `rooms` changes
from negative to positive,
$$rooms^* = \left|\frac{\hat{\beta_1}}{2\hat{\beta_2}}\right|$$
where $\hat{\beta_1}$ and $\hat{\beta_2}$ denote the estimated coefficients on $rooms$ and $rooms^2$.
```{r}
beta_1 <- summary(housing_model_6.2)$coefficients["rooms",1]
beta_2 <- summary(housing_model_6.2)$coefficients["I(rooms^2)",1]
turning_point <- abs(beta_1 / (2*beta_2))
print(turning_point)
```
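The turning point comes from setting the partial effect of `rooms` to zero. Writing $\hat{\beta}_{r}$ and $\hat{\beta}_{r^2}$ for the estimated coefficients on $rooms$ and $rooms^2$:

$$\frac{\partial\, log(price)}{\partial\, rooms} = \hat{\beta}_{r} + 2\hat{\beta}_{r^2}\, rooms = 0 \quad\Rightarrow\quad rooms^* = -\frac{\hat{\beta}_{r}}{2\hat{\beta}_{r^2}}$$

Since $\hat{\beta}_{r} < 0$ and $\hat{\beta}_{r^2} > 0$ here, the ratio is negative, and the code takes its absolute value.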
Compute the percent change in price across a range of `rooms` values, including the
smallest value, the turning point, and the largest value.
```{r}
Rooms <- c(min(hprice2$rooms), 4, turning_point, 5, 5.5, 6.45, 7.5, max(hprice2$rooms))
Percent.Change <- 100*(beta_1 + 2*beta_2*Rooms)
kable(data.frame(Rooms, Percent.Change))
```
```{r, echo=FALSE}
# Estimated percent change in price per additional room, evaluated
# across the observed range of rooms (kept for reference).
rooms <- seq(from = min(hprice2$rooms), to = max(hprice2$rooms),
             length.out = NROW(hprice2))
percent_change_curve <- 100*(beta_1 + 2*beta_2*rooms)
```
Graph the log of the selling price against the number of rooms. Superimpose a
simple model as well as a quadratic model and examine the difference.
```{r, echo=FALSE}
rooms_interaction <- lm(lprice ~ rooms + I(rooms^2), data = hprice2)
par(mfrow=c(1,2))
plot(y = hprice2$lprice, x = hprice2$rooms, xaxt="n", pch = 21, bg = "lightgrey",
frame = FALSE, main = "lprice ~ rooms", xlab = "Rooms", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
axis(side = 1, at = c(min(hprice2$rooms), 4, 5, 6, 7, 8, max(hprice2$rooms)))
abline(lm(lprice ~ rooms, data = hprice2), col="red", lwd=2.5)
plot(y = hprice2$lprice, x = hprice2$rooms, xaxt="n", pch = 21, bg = "lightgrey",
frame = FALSE, main = "lprice ~ rooms + I(rooms^2)", xlab = "Rooms", ylab = " ")
axis(side = 1, at = c(min(hprice2$rooms), 4, 5, 6, 7, 8, max(hprice2$rooms)))
lines(sort(hprice2$rooms), sort(fitted(rooms_interaction)), col = "red", lwd=2.5)
```
\newpage
## Chapter 7: Multiple Regression Analysis with Qualitative Information
### **`Example 7.4:` Housing Price Regression, Qualitative Binary variable**
This time, use the `hprice1` data.
```{r}
data("hprice1")
```
```{r, eval=FALSE}
?hprice1
```
Data collected from the real estate pages of the Boston Globe during 1990.
These are homes that sold in the Boston, MA area.
**$lprice:$** Log(house price, $1000s)
**$llotsize:$** Log(size of lot in square feet)
**$lsqrft:$** Log(size of house in square feet)
**$bdrms:$** number of bdrms
**$colonial:$** =1 if home is colonial style
```{r, fig.height=8, eval=FALSE, echo=FALSE}
par(mfrow=c(2,2))
palette(rainbow(6, alpha = 0.8))
plot(y = hprice1$lprice, x = hprice1$llotsize, col=hprice1$bdrms, pch = 19,
frame = FALSE, main = "Log(lot size)", xlab = "", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
plot(y = hprice1$lprice, x = hprice1$lsqrft, col=hprice1$bdrms, pch=19,
frame = FALSE, main = "Log(home size)", xlab = "Rooms", ylab = " ")
legend(8, 5.8, sort(unique(hprice1$bdrms)), col = 1:length(hprice1$bdrms),
pch=19, title = "bdrms")
hprice1$colonial <- as.factor(hprice1$colonial)
palette(rainbow(2, alpha = 0.8))
plot(y = hprice1$lprice, x = hprice1$llotsize, col=hprice1$colonial, pch = 19, bg = "lightgrey",
frame = FALSE, main = "Log(lot size)", xlab = "", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
plot(y = hprice1$lprice, x = hprice1$lsqrft, col=hprice1$colonial, pch=19,
frame = FALSE, main = "Log(home size)", xlab = "Rooms", ylab = " ")
legend(8, 5.25, unique(hprice1$colonial), col=1:length(hprice1$colonial), pch=19, title = "colonial")
```
$$\widehat{log(price)} = \beta_0 + \beta_1log(lotsize) + \beta_2log(sqrft) + \beta_3bdrms + \beta_4colonial $$
Estimate the coefficients of the above linear model on the `hprice1` data set.
```{r}
housing_qualitative <- lm(lprice ~ llotsize + lsqrft + bdrms + colonial, data = hprice1)
```
```{r, eval=FALSE}
summary(housing_qualitative)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",housing_qualitative, single.row = TRUE, header = FALSE, digits=5)
```
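For the binary `colonial` variable in this log-level model, the exact percentage effect on price is obtained by exponentiating the coefficient:

$$\%\Delta price = 100\cdot\left(e^{\hat{\beta}_4} - 1\right)$$

which differs noticeably from the approximation $100\cdot\hat{\beta}_4$ only when the coefficient is large.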
\newpage
## Chapter 8: Heteroskedasticity
### **`Example 8.9:` Determinants of Personal Computer Ownership**
$$\widehat{PC} = \beta_0 + \beta_1hsGPA + \beta_2ACT + \beta_3parcoll $$
Christopher Lemmon, a former MSU undergraduate, collected these data from a survey he took of MSU students in Fall 1994. Load `gpa1` and create a new variable, `parcoll`, from `fathcoll` and `mothcoll`. This new column indicates whether either parent went to college.
```{r, message=FALSE}
data("gpa1")
gpa1$parcoll <- as.integer(gpa1$fathcoll==1 | gpa1$mothcoll==1)
GPA_OLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1)
```
Calculate the weights $\hat{y}_i(1-\hat{y}_i)$ from the fitted values, then pass their inverse to the `weights` argument.
```{r}
weights <- GPA_OLS$fitted.values * (1-GPA_OLS$fitted.values)
GPA_WLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1, weights = 1/weights)
```
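The form of the weights comes from the linear probability model: for a binary response $y$, the conditional variance is

$$Var(y\mid \boldsymbol{x}) = p(\boldsymbol{x})\left[1 - p(\boldsymbol{x})\right]$$

so feasible WLS uses $\hat{h}_i = \hat{y}_i(1-\hat{y}_i)$ and weights $1/\hat{h}_i$, which is valid provided all fitted values lie strictly between 0 and 1.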
Compare the OLS and WLS model in the table below:
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",GPA_OLS, GPA_WLS, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 9: More on Specification and Data Issues
### **`Example 9.8:` R&D Intensity and Firm Size**
$$rdintens = \beta_0 + \beta_1sales + \beta_2profmarg + \mu$$
From _Businessweek R&D Scoreboard_, October 25, 1991. Load the data and estimate the model.
```{r, message=FALSE}
data("rdchem")
all_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem)
```
Plotting the data reveals the outlier on the far right of the plot, which will skew the results of our model.
```{r, echo=FALSE}
plot_title <- "FIGURE 9.1: Scatterplot of R&D intensity against firm sales"
x_axis <- "firm sales (in millions of dollars)"
y_axis <- "R&D as a percentage of sales"
plot(rdintens ~ sales, pch = 21, bg = "lightgrey", data = rdchem, main = plot_title, xlab = x_axis, ylab = y_axis)
```
So, we can estimate the model without that data point to gain a better understanding of how `sales` and `profmarg` describe `rdintens` for most firms. We can use the `subset` argument of the linear model function to estimate the model using only those firms with sales below the maximum.
```{r}
smallest_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem,
subset = (sales < max(sales)))
```
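The `subset` expression is evaluated inside `data`. A toy sketch with a made-up data frame (hypothetical numbers, not the `rdchem` data) shows how dropping a single extreme point changes the slope:

```r
# One extreme y-value drags the slope far from the pattern of the rest
df <- data.frame(x = 1:5, y = c(2, 4, 6, 8, 100))
fit_all  <- lm(y ~ x, data = df)
fit_trim <- lm(y ~ x, data = df, subset = (y < max(y)))
coef(fit_all)["x"]   # slope pulled up by the outlier: 20
coef(fit_trim)["x"]  # slope of the remaining points: 2
```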
The table below compares the results of both models side by side. By removing the outlier firm, $sales$ becomes a more significant determinant of R&D expenditures.
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",all_rdchem, smallest_rdchem, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 10: Basic Regression Analysis with Time Series Data
### **`Example 10.2:` Effects of Inflation and Deficits on Interest Rates**
$$\widehat{i3} = \beta_0 + \beta_1inf_t + \beta_2def_t$$
Data from the _Economic Report of the President, 2004_, Tables B-64, B-73, and B-79.
```{r, message=FALSE}
data("intdef") # load data
# load eXtensible Time Series package.
# xts is excellent for time series plots and
# properly indexing time series.
library(xts)
# create xts object from data.frame
# First, index year as yearmon class of monthly data.
# Note: I add 11/12 to set the month to December, end of year.
index <- zoo::as.yearmon(intdef$year + 11/12)
# Next, create the xts object, ordering by the index above.
intdef.xts <- xts(intdef[ ,-1], order.by = index)
# extract 3-month Tbill, inflation, and deficit data
intdef.xts <- intdef.xts[ ,c("i3", "inf", "def")]
# rename with clearer names
colnames(intdef.xts) <- c("Tbill3mo", "cpi", "deficit")
# plot the object, add a title, and place legend at top left.
plot(x = intdef.xts,
main = "Inflation, Deficits, and Interest Rates",
legend.loc = "topleft")
# Run a Linear regression model
tbill_model <- lm(Tbill3mo ~ cpi + deficit, data = intdef.xts)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",tbill_model, single.row = TRUE, header = FALSE, digits=5)
```
Now let's update the example with current data, pulled from Federal Reserve Economic Data (FRED) using the [quantmod package](https://CRAN.R-project.org/package=quantmod). Beyond the convenient API, the package formats time series data into [xts: eXtensible Time Series](https://CRAN.R-project.org/package=xts) objects, which add many features and benefits when working with time series.
```{r, eval=FALSE, message=FALSE, warning=FALSE}
# DO NOT RUN
library(quantmod)
# Tbill, 3 month
getSymbols("TB3MS", src = "FRED")
# convert to annual observations and convert index to type `yearmon`.
TB3MS <- to.yearly(TB3MS, OHLC=FALSE, drop.time = TRUE)
index(TB3MS) <- zoo::as.yearmon(index(TB3MS))
# Inflation
getSymbols("FPCPITOTLZGUSA", src = "FRED")
# Convert the index to yearmon and shift FRED's Jan 1st to Dec
index(FPCPITOTLZGUSA) <- zoo::as.yearmon(index(FPCPITOTLZGUSA)) + 11/12
# Rename and update column names
inflation <- FPCPITOTLZGUSA
colnames(inflation) <- "inflation"
## Deficit, percent of GDP: federal outlays - federal receipts
# Download outlays (FYONGDA188S: Federal Net Outlays as Percent of GDP)
getSymbols("FYONGDA188S", src = "FRED")
# Move the index from Jan 1st to Dec 30th/31st
index(FYONGDA188S) <- zoo::as.yearmon(index(FYONGDA188S)) + 11/12
# Rename and update column names
outlays <- FYONGDA188S
colnames(outlays) <- "outlays"
# Download receipts (FYFRGDA188S: Federal Receipts as Percent of GDP)
getSymbols("FYFRGDA188S", src = "FRED")
# Move the index from Jan 1st to Dec 30th/31st
index(FYFRGDA188S) <- zoo::as.yearmon(index(FYFRGDA188S)) + 11/12
# Rename and update column names
receipts <- FYFRGDA188S
colnames(receipts) <- "receipts"
```
Now that all data has been downloaded, we can calculate the deficit from the federal `outlays` and `receipts` data. Next, we will merge our new `deficit` variable with the `inflation` and `TB3MS` variables. As these are all `xts` time series objects, the `merge` function will automatically key off each series' date index, ensuring integrity and alignment between each observation and its respective date. Additionally, xts provides easy chart construction with its plot method.
```{r eval=FALSE, message=FALSE, warning=FALSE}
# DO NOT RUN
# create deficits from outlays - receipts
# xts objects align automatically on their date index
deficit <- outlays - receipts
colnames(deficit) <- "deficit"
# Merge and remove leading and trailing NAs for a balanced data matrix
intdef_updated <- merge(TB3MS, inflation, deficit)
intdef_updated <- zoo::na.trim(intdef_updated)
#Plot all
plot(intdef_updated,
main = "T-bill (3mo rate), inflation, and deficit (% of GDP)",
     legend.loc = "topright")
```
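A base-R analogue of that index-keyed merge, with hypothetical yearly values and plain data frames standing in for the `xts` objects:

```r
a <- data.frame(year = 2000:2004, tbill     = c(5.8, 3.4, 1.6, 1.0, 1.4))
b <- data.frame(year = 2002:2006, inflation = c(1.6, 2.3, 2.7, 3.4, 3.2))
# merge keys on `year`, keeping only the overlapping observations,
# much as merge.xts aligns series on their shared time index
merged <- merge(a, b, by = "year")
merged$year  # 2002 2003 2004
```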
Now let's run the model again. Inflation plays a much more prominent role in the 3-month T-bill rate than the deficit does.
```{r eval=FALSE}
# DO NOT RUN
updated_model <- lm(TB3MS ~ inflation + deficit, data = intdef_updated)
```
```{r, eval=FALSE, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
#DO NOT RUN
updated_model <- lm(TB3MS ~ inflation + deficit, data = intdef_updated)
stargazer(type = "html", updated_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 11: Further Issues in Using OLS with Time Series Data
### **`Example 11.7:` Wages and Productivity**
$$log(hrwage_t) = \beta_0 + \beta_1log(outphr_t) + \beta_2t + \mu_t$$
Data from the _Economic Report of the President, 1989_, Table B-47. The data are for the non-farm business sector.
```{r, message=FALSE}
data("earns")
wage_time <- lm(lhrwage ~ loutphr + t, data = earns)
```
Both series trend upward, so also estimate the relationship in first differences:
```{r}
wage_diff <- lm(diff(lhrwage) ~ diff(loutphr), data = earns)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",wage_time, wage_diff, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 12: Serial Correlation and Heteroskedasticiy in Time Series Regressions
### **`Example 12.8:` Heteroskedasticity and the Efficient Markets Hypothesis**
These are Wednesday closing prices of the value-weighted NYSE average, available in many publications. Wooldridge does not recall the particular source used when he collected these data at MIT, but notes that probably the easiest way to get similar data is to go to the NYSE web site, [www.nyse.com](https://www.nyse.com/data-and-tech).
$$return_t = \beta_0 + \beta_1return_{t-1} + \mu_t$$
```{r, message=FALSE, eval=FALSE}
data("nyse")
?nyse
```
```{r}
return_AR1 <-lm(return ~ return_1, data = nyse)
```
$$\hat{\mu}^2_t = \beta_0 + \beta_1return_{t-1} + residual_t$$
```{r}
return_mu <- residuals(return_AR1)
mu2_hat_model <- lm(return_mu^2 ~ return_1, data = return_AR1$model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",return_AR1, mu2_hat_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
### **`Example 12.9:` ARCH in Stock Returns**
$$\hat{\mu}^2_t = \beta_0 + \beta_1\hat{\mu}^2_{t-1} + residual_t$$
We still have `return_mu` in the working environment, so we can use it to create $\hat{\mu}^2_t$ (`mu2_hat`) and $\hat{\mu}^2_{t-1}$ (`mu2_hat_1`). Notice the use of `R`'s negative-index subsetting to perform the lag operation: drop the first observation of `return_mu` and square the result to form `mu2_hat`, then drop the last observation (indexed with `NROW`) and square it to form `mu2_hat_1`. Both now contain $688$ observations, and we can estimate a standard linear model.
```{r}
mu2_hat <- return_mu[-1]^2
mu2_hat_1 <- return_mu[-NROW(return_mu)]^2
arch_model <- lm(mu2_hat ~ mu2_hat_1)
```
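The negative-indexing lag trick in isolation, on a toy vector rather than the NYSE residuals:

```r
x <- c(10, 20, 30, 40)
current <- x[-1]          # drop the first element: 20 30 40
lagged  <- x[-length(x)]  # drop the last element:  10 20 30
# element i of `lagged` is the value one period before element i of `current`
cbind(current, lagged)
```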
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",arch_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 13: Pooling Cross Sections across Time: Simple Panel Data Methods
### **`Example 13.7:` Effect of Drunk Driving Laws on Traffic Fatalities**
Wooldridge collected these data from two sources, the 1992 _Statistical Abstract of the United States_ (Tables 1009, 1012) and _A Digest of State Alcohol-Highway Safety Related Legislation_, 1985 and 1990, published by the U.S. National Highway Traffic Safety Administration.
$$\widehat{\Delta{dthrte}} = \beta_0 + \beta_1\Delta{open} + \beta_2\Delta{admn}$$
```{r, message=FALSE, eval=FALSE}
data("traffic1")
?traffic1
```
```{r}
DD_model <- lm(cdthrte ~ copen + cadmn, data = traffic1)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",DD_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 18: Advanced Time Series Topics
### **`Example 18.8:` Forecasting the U.S. Unemployment Rate**
Data from _Economic Report of the President, 2004_, Tables B-42 and B-64.
```{r, message=FALSE, eval=FALSE}
data("phillips")
?phillips
```
$$\widehat{unem_t} = \beta_0 + \beta_1unem_{t-1}$$
Estimate the linear model in the usual way and note the use of the `subset` argument to define data equal to and before the year 1996.
```{r}
phillips_train <- subset(phillips, year <= 1996)
unem_AR1 <- lm(unem ~ unem_1, data = phillips_train)
```
$$\widehat{unem_t} = \beta_0 + \beta_1unem_{t-1} + \beta_2inf_{t-1}$$
```{r}
unem_inf_VAR1 <- lm(unem ~ unem_1 + inf_1, data = phillips_train)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",unem_AR1, unem_inf_VAR1, single.row = TRUE, header = FALSE, digits=5)
```
Now, use the `subset` argument to create our testing data set containing observations after 1996.
Next, pass both model objects and the test set to the `predict` function.
Finally, `cbind` or "column bind" both forecasts as well as the year and unemployment rate of the test set.
```{r, warning=FALSE, message=FALSE, echo=TRUE}
phillips_test <- subset(phillips, year >= 1997)
AR1_forecast <- predict.lm(unem_AR1, newdata = phillips_test)
VAR1_forecast <- predict.lm(unem_inf_VAR1, newdata = phillips_test)
kable(cbind(phillips_test[ ,c("year", "unem")], AR1_forecast, VAR1_forecast))
```
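One standard way to compare the two forecasts is by root mean squared error and mean absolute error, as the text does. A minimal sketch with made-up actual and forecast values (not the `phillips` results):

```r
# helper functions for two common forecast-accuracy measures
rmse <- function(actual, forecast) sqrt(mean((actual - forecast)^2))
mae  <- function(actual, forecast) mean(abs(actual - forecast))
actual <- c(4.9, 4.5, 4.2, 4.0)   # hypothetical unemployment rates
f_ar1  <- c(5.1, 4.8, 4.4, 4.1)   # hypothetical AR(1) forecasts
f_var1 <- c(5.0, 4.6, 4.3, 4.0)   # hypothetical VAR(1) forecasts
c(RMSE_AR1 = rmse(actual, f_ar1), RMSE_VAR1 = rmse(actual, f_var1))
```

The model with the smaller RMSE and MAE forecasts better over the test window.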
\newpage
# Appendix
### Using R for Introductory Econometrics
This is an excellent open-source complementary text to "Introductory Econometrics" by Jeffrey M. Wooldridge and should be your number one resource. This excerpt is from the book's website:
> This book introduces the popular, powerful and free programming language and software package R with a focus on the implementation of standard tools and methods used in econometrics. Unlike other books on similar topics, it does not attempt to provide a self-contained discussion of econometric models and methods. Instead, it builds on the excellent and popular textbook "Introductory Econometrics" by Jeffrey M. Wooldridge.
Heiss, Florian. _Using R for Introductory Econometrics_. ISBN: 979-8648424364, CreateSpace Independent Publishing Platform, 2020, Dusseldorf, Germany.
[url: http://www.urfie.net/](http://www.urfie.net/).
### Applied Econometrics with R
From the publisher's website:
> This is the first book on applied econometrics using the R system for statistical computing and graphics. It presents hands-on examples for a wide range of econometric models, from classical linear regression models for cross-section, time series or panel data and the common non-linear models of microeconometrics such as logit, probit and tobit models, to recent semiparametric extensions. In addition, it provides a chapter on programming, including simulations, optimization, and an introduction to R tools enabling reproducible econometric research. An R package accompanying this book, AER, is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=AER.
Kleiber, Christian and Achim Zeileis. _Applied Econometrics with R_. ISBN 978-0-387-77316-2,
Springer-Verlag, 2008, New York. [https://link.springer.com/book/10.1007/978-0-387-77318-6](https://link.springer.com/book/10.1007/978-0-387-77318-6)
\newpage
## Bibliography
Jeffrey M. Wooldridge (2020). _Introductory Econometrics: A Modern Approach, 7th edition_. ISBN-13: 978-1-337-55886-0. Mason, Ohio :South-Western Cengage Learning.
Jeffrey A. Ryan and Joshua M. Ulrich (2020). quantmod:
Quantitative Financial Modelling Framework. R package version
0.4.18. https://CRAN.R-project.org/package=quantmod
R Core Team (2021). R: A language and environment for
statistical computing. R Foundation for Statistical Computing,
Vienna, Austria. URL https://www.R-project.org/.
Marek Hlavac (2018). _stargazer: Well-Formatted Regression and Summary Statistics Tables_. R package version 5.2.2. URL: https://CRAN.R-project.org/package=stargazer
Mark van der Loo (2020). _tinytest: A method for deriving information from running R code_. The R Journal, accepted for publication. URL: https://arxiv.org/abs/2002.07472
Yihui Xie (2021). _knitr: A General-Purpose Package for Dynamic
Report Generation in R_. R package version 1.33. https://CRAN.R-project.org/package=knitr
data_folder <- paste0(getwd(), "/data")
# Are there 115 data sets with .Rdata files?
expect_equal(length(data(list = c(data(package = "wooldridge")$results[,3]))), 115,
info = "Are there 115 data sets in the package?")
---
title: "Introductory Econometrics Examples"
author: "Justin M Shea"
date: ' '
output:
rmarkdown::html_vignette:
toc: yes
vignette: >
%\VignetteIndexEntry{Introductory Econometrics Examples}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
\newpage
## Introduction
This vignette reproduces examples from various chapters of _Introductory Econometrics: A Modern Approach, 7e_ by Jeffrey M. Wooldridge. Each example illustrates how to load data, build econometric models, and compute estimates with **R**.
In addition, the **Appendix** cites a few sources using **R** for econometrics. Of note, in 2020 Florian Heiss published a 2nd edition of [_Using R for Introductory Econometrics_](http://www.urfie.net/); it is excellent. The Heiss text is a companion to Wooldridge for `R` users, offering an in-depth treatment with several worked examples from each chapter. Indeed, his examples use this `wooldridge` package as well.
Now, install and load the `wooldridge` package and let's get started!
```{r, echo = TRUE, eval = FALSE, warning=FALSE}
install.packages("wooldridge")
```
```{r, echo = TRUE, eval = TRUE, warning=FALSE, message=FALSE}
library(wooldridge)
```
```{r, echo=FALSE, eval=TRUE, warning=FALSE, message=FALSE}
library(stargazer)
library(knitr)
```
\newpage
## Chapter 2: The Simple Regression Model
### **`Example 2.10:` A Log Wage Equation**
Load the `wage1` data and check out the documentation.
```{r, message=FALSE, eval=FALSE}
data("wage1")
?wage1
```
The documentation indicates these are data from the 1976 Current Population Survey, collected by Henry Farber when he and Wooldridge were colleagues at MIT in 1988.
**$educ$:** years of education
**$wage$:** average hourly earnings
**$lwage$:** log of the average hourly earnings
First, make a scatter-plot of the two variables and look for possible patterns in the relationship between them.
```{r, echo=FALSE}
plot(y = wage1$wage, x = wage1$educ, col = "darkgreen", pch = 21, bg = "lightgrey",
cex=1.25, xaxt="n", frame = FALSE, main = "Wages vs. Education, 1976",
xlab = "years of education", ylab = "Hourly wages")
axis(side = 1, at = c(0,6,12,18))
rug(wage1$wage, side=2, col="darkgreen")
```
It appears that _**on average**_, more years of education leads to higher wages.
The example in the text is interested in the _return to another year of education_, or what the _**percentage**_ change in wages one might expect for each additional year of education. To do so, one must use the $log($`wage`$)$. This has already been computed in the data set and is defined as `lwage`.
The textbook provides excellent discussions around these topics, so please consult it.
Build a linear model to estimate the relationship between the _log of wage_ (`lwage`) and _education_ (`educ`).
$$\widehat{log(wage)} = \beta_0 + \beta_1educ$$
```{r}
log_wage_model <- lm(lwage ~ educ, data = wage1)
```
Print the `summary` of the results.
```{r, echo = TRUE, eval = FALSE, warning=FALSE}
summary(log_wage_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", log_wage_model, single.row = TRUE, header = FALSE, digits = 5)
```
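Because the dependent variable is in logs, the slope has a percentage interpretation. For a small change in education,

$$\%\Delta wage \approx 100 \cdot \hat{\beta}_1 \cdot \Delta educ$$

so with $\hat{\beta}_1 \approx 0.083$, each additional year of education is predicted to raise the hourly wage by roughly $8.3\%$.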
Plot the $log($`wage`$)$ vs `educ`. The blue line represents the least squares fit.
```{r, echo=FALSE}
plot(y = wage1$lwage, x = wage1$educ, main = "A Log Wage Equation",
col = "darkgreen", pch = 21, bg = "lightgrey", cex=1.25,
xlab = "years of education", ylab = "log of average hourly wages",
xaxt="n", frame = FALSE)
axis(side = 1, at = c(0,6,12,18))
abline(log_wage_model, col = "blue", lwd=2)
rug(wage1$lwage, side=2, col="darkgreen")
```
\newpage
## Chapter 3: Multiple Regression Analysis: Estimation
### **`Example 3.2:` Hourly Wage Equation**
Check the documentation for variable information
```{r, eval=FALSE}
?wage1
```
**$lwage$:** log of the average hourly earnings
**$educ$:** years of education
**$exper$:** years of potential experience
**$tenure$:** years with current employer
Plot the variables against `lwage` and compare their distributions
and slope ($\beta$) of the simple regression lines.
```{r, fig.height=3, echo=FALSE}
par(mfrow=c(1,3))
plot(y = wage1$lwage, x = wage1$educ, col="darkgreen", xaxt="n", frame = FALSE, main = "years of education", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Hourly wages", cex=1.25)
axis(side = 1, at = c(0,6,12,18))
abline(lm(lwage ~ educ, data=wage1), col = "darkblue", lwd=2)
plot(y = wage1$lwage, x = wage1$exper, col="darkgreen", xaxt="n", frame = FALSE, main = "years of experience", xlab = "", ylab = "")
axis(side = 1, at = c(0,12.5,25,37.5,50))
abline(lm(lwage ~ exper, data=wage1), col = "darkblue", lwd=2)
plot(y = wage1$lwage, x = wage1$tenure, col="darkgreen", xaxt="n", frame = FALSE, main = "years with employer", xlab = "", ylab = "")
axis(side = 1, at = c(0,11,22,33,44))
abline(lm(lwage ~ tenure, data=wage1), col = "darkblue", lwd=2)
```
Estimate the model regressing _log(wage)_ on _educ_, _exper_, and _tenure_.
$$\widehat{log(wage)} = \beta_0 + \beta_1educ + \beta_2exper + \beta_3tenure$$
```{r}
hourly_wage_model <- lm(lwage ~ educ + exper + tenure, data = wage1)
```
Print the estimated model coefficients:
```{r, eval=FALSE}
coefficients(hourly_wage_model)
```
```{r, echo=FALSE}
kable(coefficients(hourly_wage_model), digits=4, col.names = "Coefficients", align = 'l')
```
Plot the coefficients, representing percentage impact of each variable on $log($`wage`$)$ for a quick comparison.
```{r, echo=FALSE}
barplot(sort(100*hourly_wage_model$coefficients[-1]), horiz=TRUE, las=1,
ylab = " ", main = "Coefficients of Hourly Wage Equation")
```
## Chapter 4: Multiple Regression Analysis: Inference
### **`Example 4.1` Hourly Wage Equation**
Using the same model estimated in **`example: 3.2`**, examine and compare the standard errors associated with each coefficient. As in the textbook, these are contained in parentheses next to each associated coefficient.
```{r, eval=FALSE}
summary(hourly_wage_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", hourly_wage_model, single.row = TRUE, header = FALSE, digits=5)
```
For the years of experience variable, or `exper`, use coefficient and Standard Error
to compute the $t$ statistic:
$$t_{exper} = \frac{0.004121}{0.001723} = 2.391$$
Fortunately, `R` includes $t$ statistics in the `summary` of model diagnostics.
```{r, eval=FALSE}
summary(hourly_wage_model)$coefficients
```
```{r, echo=FALSE}
kable(summary(hourly_wage_model)$coefficients, align="l", digits=5)
```
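The relationship between estimate, standard error, and $t$ statistic holds for any fitted model and can be reproduced by hand. A self-contained sketch with simulated data (not the `wage1` variables):

```r
set.seed(42)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)
fit <- lm(y ~ x)
est <- summary(fit)$coefficients
# t statistic = estimate / standard error
t_by_hand <- est["x", "Estimate"] / est["x", "Std. Error"]
all.equal(t_by_hand, est["x", "t value"])  # TRUE
```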
```{r, fig.height=8, eval=FALSE, echo=FALSE}
par(mfrow=c(2,2))
plot(y = hourly_wage_model$residuals, x = hourly_wage_model$fitted.values , col="darkgreen", xaxt="n",
frame = FALSE, main = "Fitted Values", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Model Residuals", cex=1.25)
abline(0, 0, col = "darkblue", lty=2, lwd=2)
plot(y = hourly_wage_model$residuals, x = wage1$educ, col="darkgreen", xaxt="n",
frame = FALSE, main = "years of education", xlab = "", ylab = "")
axis(side = 1, at = c(0,6,12,18))
abline(0, 0, col = "darkblue", lty=2, lwd=2)
plot(y = hourly_wage_model$residuals, x = wage1$exper, col="darkgreen", xaxt="n",
frame = FALSE, main = "years of experience", xlab = "", ylab = "")
mtext(side=2, line=2.5, "Model Residuals", cex=1.25)
axis(side = 1, at = c(0,12.5,25,37.5,50))
abline(0, 0, col = "darkblue", lty=2, lwd=2)
plot(y = hourly_wage_model$residuals, x = wage1$tenure, col="darkgreen", xaxt="n",
frame = FALSE, main = "years with employer", xlab = "", ylab = "")
axis(side = 1, at = c(0,11,22,33,44))
abline(0, 0, col = "darkblue", lty=2, lwd=2)
```
Plot the $t$ statistics for a visual comparison:
```{r, echo=FALSE}
barplot(sort(summary(hourly_wage_model)$coefficients[-1, "t value"]), horiz=TRUE, las=1,
ylab = " ", main = "t statistics of Hourly Wage Equation")
```
### **`Example 4.7` Effect of Job Training on Firm Scrap Rates**
Load the `jtrain` data set.
```{r, echo = TRUE, eval = TRUE, warning=FALSE, message=FALSE}
data("jtrain")
```
```{r, echo = TRUE, eval = FALSE, warning=FALSE}
?jtrain
```
From H. Holzer, R. Block, M. Cheatham, and J. Knott (1993), _Are Training Subsidies Effective? The Michigan Experience_, Industrial and Labor Relations Review 46, 625-636. The authors kindly provided the data.
**$year:$** 1987, 1988, or 1989
**$union:$** =1 if unionized
**$lscrap:$** Log(scrap rate per 100 items)
**$hrsemp:$** (total hours training) / (total employees trained)
**$lsales:$** Log(annual sales, $)
**$lemploy:$** Log(number of employees at plant)
First, use the `subset` function and its argument of the same name to return
observations which occurred in **1987** and are not **union**. At the same time, use
the `select` argument to return only the variables of interest for this problem.
```{r}
jtrain_subset <- subset(jtrain, subset = (year == 1987 & union == 0),
select = c(year, union, lscrap, hrsemp, lsales, lemploy))
```
Next, test for missing values. One can "eyeball" these with R Studio's `View`
function, but a more precise approach combines the `sum` and `is.na` functions
to return the total number of observations equal to `NA`.
```{r}
sum(is.na(jtrain_subset))
```
While `R`'s `lm` function will automatically remove missing `NA` values, eliminating
these manually will produce more clearly proportioned graphs for exploratory analysis.
Call the `na.omit` function to remove all missing values and assign the new
`data.frame` object the name **`jtrain_clean`**.
```{r}
jtrain_clean <- na.omit(jtrain_subset)
```
Use `jtrain_clean` to plot the variables of interest against `lscrap`. Visually
observe the respective distributions for each variable, and compare the slope
($\beta$) of the simple regression lines.
```{r, echo=FALSE, fig.height=3}
par(mfrow=c(1,3))
point_size <- 1.75
plot(y = jtrain_clean$lscrap, x = jtrain_clean$hrsemp, frame = FALSE,
main = "Total (hours/employees) trained", ylab = "", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
mtext(side=2, line=2, "Log(scrap rate)", cex=1.25)
abline(lm(lscrap ~ hrsemp, data=jtrain_clean), col = "blue", lwd=2)
plot(y = jtrain_clean$lscrap, x = jtrain_clean$lsales, frame = FALSE, main = "Log(annual sales $)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lscrap ~ lsales, data=jtrain_clean), col = "blue", lwd=2)
plot(y = jtrain_clean$lscrap, x = jtrain_clean$lemploy, frame = FALSE, main = "Log(# employees at plant)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lscrap ~ lemploy, data=jtrain_clean), col = "blue", lwd=2)
```
Now create the linear model regressing `lscrap` (the log of the scrap rate) on `hrsemp` (total hours training/total employees trained), `lsales` (log of annual sales), and `lemploy` (log of the number of employees).
$$lscrap = \alpha + \beta_1 hrsemp + \beta_2 lsales + \beta_3 lemploy$$
```{r}
linear_model <- lm(lscrap ~ hrsemp + lsales + lemploy, data = jtrain_clean)
```
Finally, print the complete summary diagnostics of the model.
```{r, eval=FALSE, warning=FALSE, message=FALSE}
summary(linear_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", linear_model, single.row = TRUE, header = FALSE, digits=5)
```
```{r, echo=FALSE, eval=FALSE}
#Plot the coefficients, representing the impact of each variable on $log($`scrap`$)$ for a quick comparison. As you can observe, for some variables, the confidence intervals are wider than others.
coefficient <- coef(linear_model)[-1]
confidence <- confint(linear_model, level = 0.95)[-1,]
graph <- drop(barplot(coefficient, ylim = range(c(confidence)),
main = "Coefficients & 95% C.I. of variables on Firm Scrap Rates"))
arrows(graph, coefficient, graph, confidence[,1], angle=90, length=0.55, col="blue", lwd=2)
arrows(graph, coefficient, graph, confidence[,2], angle=90, length=0.55, col="blue", lwd=2)
```
## Chapter 5: Multiple Regression Analysis: OLS Asymptotics
### **`Example 5.1:` Housing Prices and Distance From an Incinerator**
Load the `hprice3` data set.
```{r}
data("hprice3")
```
**$lprice:$** Log(selling price)
**$ldist:$** Log(distance from house to incinerator, feet)
**$larea:$** Log(square footage of house)
Graph the prices of housing against distance from an incinerator:
```{r, echo=FALSE, fig.align='center'}
plot(y = hprice3$price, x = hprice3$dist, main = " ", xlab = "Distance to Incinerator in feet", ylab = "Selling Price", frame = FALSE, pch = 21, bg = "lightgrey")
abline(lm(price ~ dist, data=hprice3), col = "blue", lwd=2)
```
Next, model the $log($`price`$)$ against the $log($`dist`$)$ to estimate the percentage relationship between the two.
$$log(price) = \beta_0 + \beta_1 log(dist)$$
```{r}
price_dist_model <- lm(lprice ~ ldist, data = hprice3)
```
Create another model that controls for "quality" variables, such as the square footage `area` of each house.
$$log(price) = \beta_0 + \beta_1 log(dist) + \beta_2 log(area)$$
```{r}
price_area_model <- lm(lprice ~ ldist + larea, data = hprice3)
```
Compare the coefficients of both models. Notice that adding `area` improves the quality of the model, but also reduces the coefficient size of `dist`.
```{r, eval=FALSE}
summary(price_dist_model)
summary(price_area_model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",price_dist_model, price_area_model, single.row = TRUE, header = FALSE, digits=5)
```
Graphing illustrates the larger coefficient for `area`.
```{r, echo=FALSE}
par(mfrow=c(1,2))
point_size <- 0.80
plot(y = hprice3$lprice, x = hprice3$ldist, frame = FALSE,
main = "Log(distance from incinerator)", ylab = "", xlab="",
pch = 21, bg = "lightgrey", cex=point_size)
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
abline(lm(lprice ~ ldist, data=hprice3), col = "blue", lwd=2)
plot(y = hprice3$lprice, x = hprice3$larea, frame = FALSE, main = "Log(square footage of house)", ylab = " ", xlab="", pch = 21, bg = "lightgrey", cex=point_size)
abline(lm(lprice ~ larea, data=hprice3), col = "blue", lwd=2)
```
\newpage
## Chapter 6: Multiple Regression: Further Issues
### **`Example 6.1:` Effects of Pollution on Housing Prices, standardized.**
Load the `hprice2` data and view the documentation.
```{r, message=FALSE, eval=FALSE}
data("hprice2")
?hprice2
```
Data from _Hedonic Housing Prices and the Demand for Clean Air_, by Harrison, D. and D.L. Rubinfeld, Journal of Environmental Economics and Management 5, 81-102. Diego Garcia, a former Ph.D. student in economics at MIT, kindly provided these data, which he obtained from the book Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, by D.A. Belsley, E. Kuh, and R. Welsch, 1990. New York: Wiley.
$price$: median housing price.
$nox$: nitrogen oxide concentration; parts per 100 million.
$crime$: number of reported crimes per capita.
$rooms$: average number of rooms in houses in the community.
$dist$: weighted distance of the community to 5 employment centers.
$stratio$: average student-teacher ratio of schools in the community.
$$price = \beta_0 + \beta_1nox + \beta_2crime + \beta_3rooms + \beta_4dist + \beta_5stratio + \mu$$
Estimate the usual `lm` model.
```{r}
housing_level <- lm(price ~ nox + crime + rooms + dist + stratio, data = hprice2)
```
Estimate the same model, but standardize the coefficients by wrapping each variable
in `R`'s `scale` function:
$$\widehat{zprice} = \beta_1znox + \beta_2zcrime + \beta_3zrooms + \beta_4zdist + \beta_5zstratio$$
```{r}
housing_standardized <- lm(scale(price) ~ 0 + scale(nox) + scale(crime) + scale(rooms) + scale(dist) + scale(stratio), data = hprice2)
```
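The standardized slopes are just the level-model slopes rescaled by $sd(x)/sd(y)$; a self-contained check on the built-in `mtcars` data (not the `hprice2` data) illustrates the equivalence:

```r
# Standardized ("beta") coefficients equal level-model slopes times sd(x)/sd(y)
m_level <- lm(mpg ~ wt + hp, data = mtcars)
m_std   <- lm(scale(mpg) ~ 0 + scale(wt) + scale(hp), data = mtcars)
beta_wt <- coef(m_level)["wt"] * sd(mtcars$wt) / sd(mtcars$mpg)
all.equal(unname(beta_wt), unname(coef(m_std)["scale(wt)"]))  # TRUE
```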
Compare the results. The standardized coefficients are measured in standard-deviation units, so their relative magnitudes can be compared directly.
```{r, eval=FALSE}
summary(housing_level)
summary(housing_standardized)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",housing_level, housing_standardized, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
### **`Example 6.2:` Effects of Pollution on Housing Prices, Quadratic Term**
Modify the housing model from **`example 4.5`**, adding a quadratic term in _rooms_:
$$log(price) = \beta_0 + \beta_1log(nox) + \beta_2log(dist) + \beta_3rooms + \beta_4rooms^2 + \beta_5stratio + \mu$$
```{r}
housing_model_4.5 <- lm(lprice ~ lnox + log(dist) + rooms + stratio, data = hprice2)
housing_model_6.2 <- lm(lprice ~ lnox + log(dist) + rooms + I(rooms^2) + stratio,
data = hprice2)
```
Compare the results with the model from `example 6.1`.
```{r, eval=FALSE}
summary(housing_model_4.5)
summary(housing_model_6.2)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html", housing_model_4.5 , housing_model_6.2, single.row = TRUE, header = FALSE, digits=5)
```
Estimate the turning point at which the effect of `rooms` changes
from negative to positive.
$$x^* = \left|\frac{\hat{\beta_1}}{2\hat{\beta_2}}\right|$$
```{r}
beta_1 <- summary(housing_model_6.2)$coefficients["rooms",1]
beta_2 <- summary(housing_model_6.2)$coefficients["I(rooms^2)",1]
turning_point <- abs(beta_1 / (2*beta_2))
print(turning_point)
```
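For a quadratic $\beta_1 x + \beta_2 x^2$, the turning point solves the first-order condition $\beta_1 + 2\beta_2 x = 0$, giving $x^* = -\beta_1/(2\beta_2)$. A small helper function (hypothetical, not part of any package) makes this reusable:

```r
# Turning point of b1*x + b2*x^2: where the derivative b1 + 2*b2*x equals zero
quad_turning <- function(b1, b2) -b1 / (2 * b2)
quad_turning(-4, 0.5)  # 4: a minimum, since b2 > 0
```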
Compute the percent change in price across a range of average room counts,
including the smallest value, the turning point, and the largest value.
```{r}
Rooms <- c(min(hprice2$rooms), 4, turning_point, 5, 5.5, 6.45, 7.5, max(hprice2$rooms))
Percent.Change <- 100*(beta_1 + 2*beta_2*Rooms)
kable(data.frame(Rooms, Percent.Change))
```
```{r, echo=FALSE}
from <- min(hprice2$rooms)
to <- max(hprice2$rooms)
rooms <- seq(from=from, to =to, by = ((to - from)/(NROW(hprice2)-1)))
quadratic <- abs(100*summary(housing_model_6.2)$coefficients["rooms",1] + 200*summary(housing_model_6.2)$coefficients["I(rooms^2)",1]*rooms)
housing_model_frame <- model.frame(housing_model_6.2)
housing_sq <- abs(beta_1*housing_model_frame[,"rooms"]) +
beta_2*housing_model_frame[,"I(rooms^2)"]
```
Graph the log of the selling price against the number of rooms. Superimpose a
simple model as well as a quadratic model and examine the difference.
```{r, echo=FALSE}
rooms_interaction <- lm(lprice ~ rooms + I(rooms^2), data = hprice2)
par(mfrow=c(1,2))
plot(y = hprice2$lprice, x = hprice2$rooms, xaxt="n", pch = 21, bg = "lightgrey",
frame = FALSE, main = "lprice ~ rooms", xlab = "Rooms", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
axis(side = 1, at = c(min(hprice2$rooms), 4, 5, 6, 7, 8, max(hprice2$rooms)))
abline(lm(lprice ~ rooms, data = hprice2), col="red", lwd=2.5)
plot(y = hprice2$lprice, x = hprice2$rooms, xaxt="n", pch = 21, bg = "lightgrey",
frame = FALSE, main = "lprice ~ rooms + I(rooms^2)", xlab = "Rooms", ylab = " ")
axis(side = 1, at = c(min(hprice2$rooms), 4, 5, 6, 7, 8, max(hprice2$rooms)))
lines(sort(hprice2$rooms), sort(fitted(rooms_interaction)), col = "red", lwd=2.5)
```
\newpage
## Chapter 7: Multiple Regression Analysis with Qualitative Information
### **`Example 7.4:` Housing Price Regression, Qualitative Binary variable**
This time, use the `hprice1` data.
```{r}
data("hprice1")
```
```{r, eval=FALSE}
?hprice1
```
Data collected from the real estate pages of the Boston Globe during 1990.
These are homes that sold in the Boston, MA area.
**$lprice:$** Log(house price, $1000s)
**$llotsize:$** Log(size of lot in square feet)
**$lsqrft:$** Log(size of house in square feet)
**$bdrms:$** number of bdrms
**$colonial:$** =1 if home is colonial style
```{r, fig.height=8, eval=FALSE, echo=FALSE}
par(mfrow=c(2,2))
palette(rainbow(6, alpha = 0.8))
plot(y = hprice1$lprice, x = hprice1$llotsize, col=hprice1$bdrms, pch = 19,
frame = FALSE, main = "Log(lot size)", xlab = "", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
plot(y = hprice1$lprice, x = hprice1$lsqrft, col=hprice1$bdrms, pch=19,
frame = FALSE, main = "Log(home size)", xlab = "Rooms", ylab = " ")
legend(8, 5.8, sort(unique(hprice1$bdrms)), col = 1:length(hprice1$bdrms),
pch=19, title = "bdrms")
hprice1$colonial <- as.factor(hprice1$colonial)
palette(rainbow(2, alpha = 0.8))
plot(y = hprice1$lprice, x = hprice1$llotsize, col=hprice1$colonial, pch = 19, bg = "lightgrey",
frame = FALSE, main = "Log(lot size)", xlab = "", ylab = "")
mtext(side=2, line=2, "Log( selling price )", cex=1.25)
plot(y = hprice1$lprice, x = hprice1$lsqrft, col=hprice1$colonial, pch=19,
frame = FALSE, main = "Log(home size)", xlab = "Rooms", ylab = " ")
legend(8, 5.25, unique(hprice1$colonial), col=1:length(hprice1$colonial), pch=19, title = "colonial")
```
$$\widehat{log(price)} = \beta_0 + \beta_1log(lotsize) + \beta_2log(sqrft) + \beta_3bdrms + \beta_4colonial $$
Estimate the coefficients of the above linear model on the `hprice1` data set.
```{r}
housing_qualitative <- lm(lprice ~ llotsize + lsqrft + bdrms + colonial, data = hprice1)
```
```{r, eval=FALSE}
summary(housing_qualitative)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",housing_qualitative, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 8: Heteroskedasticity
### **`Example 8.9:` Determinants of Personal Computer Ownership**
$$\widehat{PC} = \beta_0 + \beta_1hsGPA + \beta_2ACT + \beta_3parcoll$$
Christopher Lemmon, a former MSU undergraduate, collected these data from a survey he took of MSU students in Fall 1994. Load `gpa1` and create a new variable, `parcoll`, from `fathcoll` and `mothcoll`; this new column indicates whether either parent went to college.
```{r, message=FALSE}
data("gpa1")
gpa1$parcoll <- as.integer(gpa1$fathcoll == 1 | gpa1$mothcoll == 1)
GPA_OLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1)
```
In the linear probability model, $Var(PC \mid x) = p(x)[1 - p(x)]$, so the fitted values provide an estimate of the error variance. Calculate these weights and pass their inverse to the `weights` argument.
```{r}
weights <- GPA_OLS$fitted.values * (1-GPA_OLS$fitted.values)
GPA_WLS <- lm(PC ~ hsGPA + ACT + parcoll, data = gpa1, weights = 1/weights)
```
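WLS with weights $1/h_i$ is numerically identical to OLS after dividing every variable, including the intercept, by $\sqrt{h_i}$; a self-contained check on `mtcars` with an arbitrary positive $h$ (not the `gpa1` data) illustrates this:

```r
# WLS = OLS on transformed data: divide y and all regressors by sqrt(h)
h     <- mtcars$wt^2  # arbitrary positive "variance function" for illustration
w_fit <- lm(mpg ~ hp, data = mtcars, weights = 1 / h)
t_fit <- lm(I(mpg / sqrt(h)) ~ 0 + I(1 / sqrt(h)) + I(hp / sqrt(h)), data = mtcars)
all.equal(unname(coef(w_fit)), unname(coef(t_fit)))  # TRUE
```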
Compare the OLS and WLS model in the table below:
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",GPA_OLS, GPA_WLS, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 9: More on Specification and Data Issues
### **`Example 9.8:` R&D Intensity and Firm Size**
$$rdintens = \beta_0 + \beta_1sales + \beta_2profmarg + \mu$$
From _Businessweek R&D Scoreboard_, October 25, 1991. Load the data and estimate the model.
```{r, message=FALSE}
data("rdchem")
all_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem)
```
Plotting the data reveals the outlier on the far right of the plot, which will skew the results of our model.
```{r, echo=FALSE}
plot_title <- "FIGURE 9.1: Scatterplot of R&D intensity against firm sales"
x_axis <- "firm sales (in millions of dollars)"
y_axis <- "R&D as a percentage of sales"
plot(rdintens ~ sales, pch = 21, bg = "lightgrey", data = rdchem, main = plot_title, xlab = x_axis, ylab = y_axis)
```
So, we can estimate the model without that data point to gain a better understanding of how `sales` and `profmarg` describe `rdintens` for most firms. We can use the `subset` argument of the linear model function to estimate the model using only firms with sales below the maximum.
```{r}
smallest_rdchem <- lm(rdintens ~ sales + profmarg, data = rdchem,
subset = (sales < max(sales)))
```
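The `subset` argument is evaluated inside the data frame, so conditions can reference variables directly; a toy example (not the `rdchem` data) shows the mechanics:

```r
# subset drops rows failing the condition before estimation
toy <- data.frame(y = c(1, 2, 10), x = c(1, 2, 50))
fit_all  <- lm(y ~ x, data = toy)
fit_trim <- lm(y ~ x, data = toy, subset = (x < max(x)))
c(nobs(fit_all), nobs(fit_trim))  # the trimmed fit uses one fewer observation
```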
The table below compares the results of both models side by side. By removing the outlier firm, $sales$ becomes a more significant determinant of R&D expenditures.
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",all_rdchem, smallest_rdchem, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 10: Basic Regression Analysis with Time Series Data
### **`Example 10.2:` Effects of Inflation and Deficits on Interest Rates**
$$\widehat{i3} = \beta_0 + \beta_1inf_t + \beta_2def_t$$
Data from the _Economic Report of the President, 2004_, Tables B-64, B-73, and B-79.
```{r, message=FALSE}
data("intdef") # load data
# load eXtensible Time Series package.
# xts is excellent for time series plots and
# properly indexing time series.
library(xts)
# create xts object from data.frame
# First, index year as yearmon class of monthly data.
# Note: I add 11/12 to set the month to December, end of year.
index <- zoo::as.yearmon(intdef$year + 11/12)
# Next, create the xts object, ordering by the index above.
intdef.xts <- xts(intdef[ ,-1], order.by = index)
# extract 3-month Tbill, inflation, and deficit data
intdef.xts <- intdef.xts[ ,c("i3", "inf", "def")]
# rename with clearer names
colnames(intdef.xts) <- c("Tbill3mo", "cpi", "deficit")
# plot the object, add a title, and place legend at top left.
plot(x = intdef.xts,
main = "Inflation, Deficits, and Interest Rates",
legend.loc = "topleft")
# Run a Linear regression model
tbill_model <- lm(Tbill3mo ~ cpi + deficit, data = intdef.xts)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",tbill_model, single.row = TRUE, header = FALSE, digits=5)
```
Now let's update the example with current data pulled from Federal Reserve Economic Data (FRED) using the [quantmod package](https://CRAN.R-project.org/package=quantmod). Besides its convenient API, the package formats time series data as [xts: eXtensible Time Series](https://CRAN.R-project.org/package=xts) objects, which add many features and benefits when working with time series.
```{r, eval=FALSE, message=FALSE, warning=FALSE}
# DO NOT RUN
library(quantmod)
# Tbill, 3 month
getSymbols("TB3MS", src = "FRED")
# convert to annual observations and convert index to type `yearmon`.
TB3MS <- to.yearly(TB3MS, OHLC=FALSE, drop.time = TRUE)
index(TB3MS) <- zoo::as.yearmon(index(TB3MS))
# Inflation
getSymbols("FPCPITOTLZGUSA", src = "FRED")
# Convert the index to yearmon and shift FRED's Jan 1st to Dec
index(FPCPITOTLZGUSA) <- zoo::as.yearmon(index(FPCPITOTLZGUSA)) + 11/12
# Rename and update column names
inflation <- FPCPITOTLZGUSA
colnames(inflation) <- "inflation"
## Deficit, percent of GDP: federal outlays - federal receipts
# Download receipts (FYFRGDA188S: federal receipts as a percent of GDP)
getSymbols("FYFRGDA188S", src = "FRED")
# Lets move the index from Jan 1st to Dec 30th/31st
index(FYFRGDA188S) <- zoo::as.yearmon(index(FYFRGDA188S)) + 11/12
# Rename and update column names
receipts <- FYFRGDA188S
colnames(receipts) <- "receipts"
# Download outlays (FYONGDA188S: federal net outlays as a percent of GDP)
getSymbols("FYONGDA188S", src = "FRED")
# Lets move the index from Jan 1st to Dec 30th/31st
index(FYONGDA188S) <- zoo::as.yearmon(index(FYONGDA188S)) + 11/12
# Rename and update column names
outlays <- FYONGDA188S
colnames(outlays) <- "outlays"
```
Now that all data have been downloaded, we can calculate the deficit from the federal `outlays` and `receipts` series. Next, we merge the new `deficit` variable with the `inflation` and `TB3MS` variables. As these are all `xts` time series objects, the `merge` function automatically joins on each series' date index, ensuring integrity and alignment between observations and their respective dates. Additionally, xts provides easy chart construction with its plot method.
```{r eval=FALSE, message=FALSE, warning=FALSE}
# DO NOT RUN
# create deficits from outlays - receipts
# xts objects align automatically on their shared time index
deficit <- outlays - receipts
colnames(deficit) <- "deficit"
# Merge and remove leading and trailing NAs for a balanced data matrix
intdef_updated <- merge(TB3MS, inflation, deficit)
intdef_updated <- zoo::na.trim(intdef_updated)
#Plot all
plot(intdef_updated,
main = "T-bill (3mo rate), inflation, and deficit (% of GDP)",
legend.loc = "topright")
```
Now let's run the model again. Inflation plays a much more prominent role than the deficit in the 3-month T-bill rate.
```{r eval=FALSE}
# DO NOT RUN
updated_model <- lm(TB3MS ~ inflation + deficit, data = intdef_updated)
```
```{r, eval=FALSE, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
#DO NOT RUN
updated_model <- lm(TB3MS ~ inflation + deficit, data = intdef_updated)
stargazer(type = "html", updated_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 11: Further Issues in Using OLS with Time Series Data
### **`Example 11.7:` Wages and Productivity**
$$\widehat{log(hrwage_t)} = \beta_0 + \beta_1log(outphr_t) + \beta_2t + \mu_t$$
Data from the _Economic Report of the President, 1989_, Table B-47. The data are for the non-farm business sector.
```{r, message=FALSE}
data("earns")
wage_time <- lm(lhrwage ~ loutphr + t, data = earns)
```
Differencing removes the common trend in both series; estimate the same model in first differences:
```{r}
wage_diff <- lm(diff(lhrwage) ~ diff(loutphr), data = earns)
```
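`diff` returns one fewer observation than its input, so the differenced regression automatically loses the first time period; a quick sketch:

```r
# First differences: diff(x)[t] = x[t+1] - x[t]; length shrinks by one
x <- c(2, 5, 4, 9)
diff(x)          # 3 -1 5
length(diff(x))  # 3
```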
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",wage_time, wage_diff, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 12: Serial Correlation and Heteroskedasticity in Time Series Regressions
### **`Example 12.8:` Heteroskedasticity and the Efficient Markets Hypothesis**
These are Wednesday closing prices of value-weighted NYSE average, available in many publications. Wooldridge does not recall the particular source used when he collected these data at MIT, but notes probably the easiest way to get similar data is to go to the NYSE web site, [www.nyse.com](https://www.nyse.com/data-and-tech).
$$return_t = \beta_0 + \beta_1return_{t-1} + \mu_t$$
```{r, message=FALSE, eval=FALSE}
data("nyse")
?nyse
```
```{r}
return_AR1 <- lm(return ~ return_1, data = nyse)
```
$$\hat{\mu}^2_t = \beta_0 + \beta_1return_{t-1} + residual_t$$
```{r}
return_mu <- residuals(return_AR1)
mu2_hat_model <- lm(return_mu^2 ~ return_1, data = return_AR1$model)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",return_AR1, mu2_hat_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
### **`Example 12.9:` ARCH in Stock Returns**
$$\hat{\mu}^2_t = \beta_0 + \beta_1\hat{\mu}^2_{t-1} + residual_t$$
We still have `return_mu` in the working environment, so we can use it to create $\hat{\mu}^2_t$ (`mu2_hat`) and $\hat{\mu}^2_{t-1}$ (`mu2_hat_1`). Notice the use of `R`'s vector subsetting to perform the lag operation: for `mu2_hat` we drop the first observation and square the result, while for `mu2_hat_1` we drop the last observation, found by calling `NROW` on `return_mu`, and square that. Both vectors now contain $688$ observations, so we can estimate a standard linear model.
```{r}
mu2_hat <- return_mu[-1]^2
mu2_hat_1 <- return_mu[-NROW(return_mu)]^2
arch_model <- lm(mu2_hat ~ mu2_hat_1)
```
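The manual subsetting above is a one-period lag; base R's `embed` builds the same current/lagged pairs in a single call, as a toy series shows:

```r
# embed(x, 2) returns cbind(x[t], x[t-1]) for t = 2..n
x <- c(5, 3, 8, 2, 7)
current <- x[-1]          # x_t
lagged  <- x[-length(x)]  # x_{t-1}
all(embed(x, 2) == cbind(current, lagged))  # TRUE
```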
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",arch_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 13: Pooling Cross Sections across Time: Simple Panel Data Methods
### **`Example 13.7:` Effect of Drunk Driving Laws on Traffic Fatalities**
Wooldridge collected these data from two sources, the 1992 _Statistical Abstract of the United States_ (Tables 1009, 1012) and _A Digest of State Alcohol-Highway Safety Related Legislation_, 1985 and 1990, published by the U.S. National Highway Traffic Safety Administration.
$$\widehat{\Delta{dthrte}} = \beta_0 + \beta_1\Delta{open} + \beta_2\Delta{admn}$$
```{r, message=FALSE, eval=FALSE}
data("traffic1")
?traffic1
```
```{r}
DD_model <- lm(cdthrte ~ copen + cadmn, data = traffic1)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",DD_model, single.row = TRUE, header = FALSE, digits=5)
```
\newpage
## Chapter 18: Advanced Time Series Topics
### **`Example 18.8:` Forecasting the U.S. Unemployment Rate**
Data from _Economic Report of the President, 2004_, Tables B-42 and B-64.
```{r, message=FALSE, eval=FALSE}
data("phillips")
?phillips
```
$$\widehat{unem_t} = \beta_0 + \beta_1unem_{t-1}$$
Estimate the linear model in the usual way, and note the use of the `subset` argument to restrict the data to years up to and including 1996.
```{r}
phillips_train <- subset(phillips, year <= 1996)
unem_AR1 <- lm(unem ~ unem_1, data = phillips_train)
```
$$\widehat{unem_t} = \beta_0 + \beta_1unem_{t-1} + \beta_2inf_{t-1}$$
```{r}
unem_inf_VAR1 <- lm(unem ~ unem_1 + inf_1, data = phillips_train)
```
```{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
stargazer(type = "html",unem_AR1, unem_inf_VAR1, single.row = TRUE, header = FALSE, digits=5)
```
Now, use the `subset` argument to create the testing data set, containing observations after 1996.
Next, pass both model objects and the test set to the `predict` function.
Finally, `cbind` ("column bind") both forecasts together with the year and unemployment rate from the test set.
```{r, warning=FALSE, message=FALSE, echo=TRUE}
phillips_test <- subset(phillips, year >= 1997)
AR1_forecast <- predict.lm(unem_AR1, newdata = phillips_test)
VAR1_forecast <- predict.lm(unem_inf_VAR1, newdata = phillips_test)
kable(cbind(phillips_test[ ,c("year", "unem")], AR1_forecast, VAR1_forecast))
```
\newpage
# Appendix
### Using R for Introductory Econometrics
This is an excellent open-source complementary text to "Introductory Econometrics" by Jeffrey M. Wooldridge and should be your number one resource. This excerpt is from the book's website:
> This book introduces the popular, powerful and free programming language and software package R with a focus on the implementation of standard tools and methods used in econometrics. Unlike other books on similar topics, it does not attempt to provide a self-contained discussion of econometric models and methods. Instead, it builds on the excellent and popular textbook "Introductory Econometrics" by Jeffrey M. Wooldridge.
Heiss, Florian. _Using R for Introductory Econometrics_. ISBN: 979-8648424364, CreateSpace Independent Publishing Platform, 2020, Dusseldorf, Germany.
[url: http://www.urfie.net/](http://www.urfie.net/).
### Applied Econometrics with R
From the publisher's website:
> This is the first book on applied econometrics using the R system for statistical computing and graphics. It presents hands-on examples for a wide range of econometric models, from classical linear regression models for cross-section, time series or panel data and the common non-linear models of microeconometrics such as logit, probit and tobit models, to recent semiparametric extensions. In addition, it provides a chapter on programming, including simulations, optimization, and an introduction to R tools enabling reproducible econometric research. An R package accompanying this book, AER, is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=AER.
Kleiber, Christian and Achim Zeileis. _Applied Econometrics with R_. ISBN 978-0-387-77316-2,
Springer-Verlag, 2008, New York. [https://link.springer.com/book/10.1007/978-0-387-77318-6](https://link.springer.com/book/10.1007/978-0-387-77318-6)
\newpage
## Bibliography
Jeffrey M. Wooldridge (2020). _Introductory Econometrics: A Modern Approach, 7th edition_. ISBN-13: 978-1-337-55886-0. Mason, Ohio: South-Western Cengage Learning.
Jeffrey A. Ryan and Joshua M. Ulrich (2020). quantmod:
Quantitative Financial Modelling Framework. R package version
0.4.18. https://CRAN.R-project.org/package=quantmod
R Core Team (2021). R: A language and environment for
statistical computing. R Foundation for Statistical Computing,
Vienna, Austria. URL https://www.R-project.org/.
Marek Hlavac (2018). _stargazer: Well-Formatted Regression and Summary Statistics Tables_. R package version 5.2.2. URL: https://CRAN.R-project.org/package=stargazer
van der Loo, M. (2020). tinytest: "A Method for Deriving Information from
Running R Code." _The R Journal_, accepted for publication. URL:
https://arxiv.org/abs/2002.07472.
Yihui Xie (2021). _knitr: A General-Purpose Package for Dynamic
Report Generation in R_. R package version 1.33. https://CRAN.R-project.org/package=knitr
/scratch/gouwar.j/cran-all/cranData/wooldridge/vignettes/Introductory-Econometrics-Examples.Rmd
#' @title Add synthetic data to WORCS project
#' @description This function adds a user-specified synthetic data resource for
#' public use to a WORCS project with closed data.
#' @param data A \code{data.frame} containing the synthetic data.
#' @param synthetic_name Character, naming the file synthetic data should be
#' written to. By
#' default, prepends \code{"synthetic_"} to the \code{original_name}.
#' @param original_name Character, naming an existing data resource in the WORCS
#' project with which to associate the synthetic \code{data} object.
#' @param worcs_directory Character, indicating the WORCS project directory to
#' which to save data. The default value \code{"."} points to the current
#' directory.
#' @param verbose Logical. Whether or not to print status messages to
#' the console. Default: TRUE
#' @param ... Additional arguments passed to and from functions.
#' @return Returns \code{NULL} invisibly. This
#' function is called for its side effects.
#' @examples
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "add_synthetic")
#' dir.create(test_dir)
#' setwd(test_dir)
#' worcs:::write_worcsfile(".worcs")
#' # Prepare data
#' df <- iris[1:3, ]
#' # Run closed_data without synthetic
#' closed_data(df, codebook = NULL, synthetic = FALSE)
#' # Manually add synthetic
#' add_synthetic(df, original_name = "df.csv")
#' # Remove original from file and environment
#' file.remove("df.csv")
#' rm(df)
#' # See that load_data() now loads the synthetic file
#' load_data()
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @seealso open_data closed_data save_data
#' @export
#' @rdname add_synthetic
add_synthetic <- function(data,
synthetic_name = paste0("synthetic_", original_name),
original_name,
worcs_directory = ".",
verbose = TRUE,
...){
cl <- as.list(match.call()[-1])
# Filenames housekeeping
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
fn_worcs <- path_abs_worcs(".worcs", dn_worcs)
if(!file.exists(fn_worcs)){
stop(".worcs file not found.")
}
worcs_file <- read_yaml(fn_worcs)
if(is.null(worcs_file[["data"]])){
stop("This WORCS project does not contain any data resources.", call. = FALSE)
}
data_names <- names(worcs_file$data)
if(!original_name %in% data_names){
stop("This WORCS project does not contain a data resource called ", original_name, ". The available data resources are called:", paste0("\n ", data_names, collapse = ""), call. = FALSE)
}
fn_gitig <- file.path(dn_worcs, ".gitignore")
fn_original <- basename(original_name)
dn_original <- dirname(original_name)
fn_synthetic <- synthetic_name
if(!dn_original == "."){
fn_synthetic <- file.path(dn_original, fn_synthetic)
}
#fn_write_original <- file.path(dn_original, fn_original)
fn_write_synth_abs <- path_abs_worcs(fn_synthetic, dn_worcs)
fn_write_synth_rel <- path_rel_worcs(fn_write_synth_abs, dn_worcs)
# End filenames
# Remove this when worcs can handle different types:
# if(!inherits(data, c("data.frame", "matrix"))){
# stop("Argument 'data' must be a data.frame, matrix, or inherit from these classes.")
# }
# End remove
# Insert three checks:
# 1) write_func works with data object
# 2) read_func works with data object
# 3) result of read_func is identical to data object
# Store data --------------------------------------------------------------
# Prepare for writing to worcs file
to_worcs <- list(
filename = fn_worcs,
modify = TRUE
)
#to_worcs$data[[original_name]][["synthetic"]] <- vector(mode = "list")
# Synthetic data
col_message("Storing synthetic data in '", fn_write_synth_rel, "' and updating the checksum in '.worcs'.", verbose = verbose)
# Obtain save_expression from the worcs_file
save_expression <- worcs_file$data[[original_name]][["save_expression"]]
# If there is no save_expression, this is a legacy worcs_file.
# Use the default save expression of previous worcs versions.
if(is.null(save_expression)){
save_expression <- "write.csv(data, filename, row.names = FALSE)"
}
# Create an environment in which to evaluate the save_expression, in which
# filename is an object with value equal to fn_write_synth
save_env <- new.env()
assign(x = "filename", value = fn_write_synth_abs, envir = save_env)
out <- eval(parse(text = save_expression), envir = save_env)
# Add info to worcs_file
to_worcs$data[[original_name]]$synthetic <- fn_write_synth_rel
store_checksum(fn_write_synth_abs, entry_name = fn_write_synth_rel, worcsfile = fn_worcs)
write_gitig(fn_gitig, paste0("!", basename(fn_synthetic)))
col_message("Updating '.gitignore'.", verbose = verbose)
fn_readme <- path_abs_worcs("README.md", dn_worcs)
if(file.exists(fn_readme)){
lnz <- readLines(fn_readme)
if(!any(grepl("Synthetic data with similar", lnz, fixed = TRUE))){
update_textfile(filename = fn_readme,
txt = "Synthetic data with similar characteristics to the original data have been provided. Using the function load_data() will load these synthetic data when the original data are unavailable. Note that these synthetic data cannot be used to reproduce the original results. However, it does allow users to run the code and, optionally, generate valid code that can be evaluated using the original data by the project authors.",
next_to = "Some of the data used in this project are not publically available.",
verbose = verbose)
}
}
do.call(write_worcsfile, to_worcs)
invisible(NULL)
}
update_textfile <- function(filename, txt, next_to = NULL, before = FALSE, verbose = TRUE){
# Update readme file
tryCatch({
if(file.exists(filename)){
contentz <- readLines(filename)
if(!is.null(next_to)){
the_matches <- grepl(next_to, contentz, fixed = TRUE)
if(any(the_matches)){
loc <- which(the_matches)[1]
out <- append(contentz, txt, after = loc-before)
write_as_utf(out, con = filename, append = FALSE)
col_message("Updating ", filename, ".", verbose = verbose)
return(invisible(NULL))
}
}
write_as_utf(txt, con = filename, append = TRUE)
col_message("Appending ", filename, ".", verbose = verbose)
} else {
write_as_utf(txt, con = filename, append = FALSE)
col_message("Creating ", filename, ".", verbose = verbose)
}
}, error = function(e){
col_message("Failed to update file ", filename, verbose = verbose, success = FALSE)
})
}
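A minimal sketch of how this internal helper behaves (illustration only; the temporary file and strings below are invented for the example, not taken from the package):

```r
# Sketch: insert a line directly after an anchor line in a markdown file.
tmp <- tempfile(fileext = ".md")
writeLines(c("# My project", "Some of the data are not available."), tmp)
update_textfile(filename = tmp,
                txt = "Synthetic data are provided instead.",
                next_to = "Some of the data are not available.")
# txt is inserted after the first line matching next_to (fixed matching);
# without a match, txt is appended to the end of the file instead.
```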
# /scratch/gouwar.j/cran-all/cranData/worcs/R/add_synthetic.R
gert_works <- function() {
  dir_name <- tempfile()
if (dir.exists(dir_name))
unlink(dir_name, recursive = TRUE, force = TRUE)
dir.create(dir_name)
on.exit(unlink(dir_name, recursive = TRUE), add = TRUE)
pass <- !inherits(try({
gert::git_init(dir_name)
}, silent = TRUE)
, "try-error")
if (!pass)
return(FALSE)
pass <- !inherits(try({
    if (!gert::user_is_configured(dir_name)) {
gert::git_config_set("user.name", "Gert test", repo = dir_name)
gert::git_config_set("user.email", "[email protected]", repo = dir_name)
}
if (!gert::user_is_configured(dir_name))
stop()
}, silent = TRUE)
, "try-error")
if (!pass)
return(FALSE)
pass <- !inherits(try({
    gert::git_init(dir_name)
}, silent = TRUE)
, "try-error")
if (!pass)
return(FALSE)
pass <- !inherits(try({
writeLines("test git", con = file.path(dir_name, "tmp.txt"))
tmp <- gert::git_add(".", repo = dir_name)
if (!isTRUE(tmp$staged))
stop()
}, silent = TRUE)
, "try-error")
if (!pass)
return(FALSE)
pass <-
!inherits(try(gert::git_commit("First commit", repo = dir_name),
silent = TRUE)
, "try-error")
if (!pass)
return(FALSE)
return(TRUE)
}
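A usage sketch (illustration only): the helper returns a single logical, so callers can gate Git-dependent setup on it.

```r
# Sketch: only proceed with Git-based project setup if gert can
# init a repository, stage files, and commit in a temp directory.
if (gert_works()) {
  message("gert can init, add, and commit; proceeding with Git-based setup.")
} else {
  message("Skipping Git integration; see check_git() for diagnostics.")
}
```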
# /scratch/gouwar.j/cran-all/cranData/worcs/R/check_gert.R
#' @title Check worcs dependencies
#' @description This function checks that all worcs dependencies are correctly
#' installed, and suggests how to remedy any missing dependencies.
#' @param what Character vector indicating which dependencies to check. Default:
#' `"all"`. All checks defined in the Usage section can be called, e.g.
#' `check_git` can be called using the argument `what = "git"`.
#' @return Logical, indicating whether all checks passed or not.
#' @examples
#' check_worcs_installation("none")
#' @rdname check_worcs_installation
#' @export
#' @importFrom gert user_is_configured libgit2_config git_init git_add git_commit
#' @importFrom credentials ssh_key_info
#' @importFrom tinytex pdflatex
#' @importFrom utils packageDescription installed.packages packageVersion
#' @importFrom gh gh_token
#' @importFrom renv consent
check_worcs_installation <- function(what = "all") {
if (isTRUE(what == "none"))
return(invisible(TRUE))
pass <- list()
errors <- list()
checkfuns <- c("check_dependencies", "check_git", "check_github", "check_renv", "check_rmarkdown", "check_tinytext")
# checkfuns <- ls(asNamespace("worcs"))
# checkfuns <- checkfuns[startsWith(checkfuns, "check_")]
# checkfuns <- setdiff(checkfuns, c("check_recursive", "check_sum", "check_worcs", "check_worcs_installation"))
  if(!identical(what, "all")){
checkfuns <- checkfuns[checkfuns %in% paste0("check_", what)]
if(isFALSE(length(checkfuns) > 0)){
stop("Argument 'what' does not refer to any valid checks.")
}
}
out <- lapply(checkfuns, function(thisfun){
out <- try(eval(str2lang(paste0("worcs::", thisfun, "()"))))
if(inherits(out, "try-error")){
out <- structure(list(pass = list(name = FALSE), errors = list()), class = c("worcs_check",
"list"))
names(out[["pass"]]) <- thisfun
}
out
})
  worcs_checkres <- list(pass = do.call(c, lapply(out, `[[`, "pass")),
                         errors = do.call(c, lapply(out, `[[`, "errors")))
  class(worcs_checkres) <- c("worcs_check", class(worcs_checkres))
  print(worcs_checkres)
  return(invisible(isTRUE(all(unlist(worcs_checkres$pass)))))
}
#' @rdname check_worcs_installation
#' @param package Atomic character vector, indicating for which package to check
#' the dependencies.
#' @export
check_dependencies <- function(package = "worcs") {
available <- data.frame(installed.packages()) #[, c("Package", "Version")])
  thesedeps <- get_deps(package)
has_version <- grepl("(", thesedeps, fixed = TRUE)
correct_vers <- rep(TRUE, length(thesedeps))
if(any(has_version)){
vers <- data.frame(do.call(rbind, strsplit(thesedeps[has_version], "(", fixed = TRUE)))
vers[,2] <- gsub(")", "", vers[,2], fixed = TRUE)
vers$op <- gsub("[0-9\\.]", "", vers[,2])
vers[,2] <- gsub("[^0-9.-]", "", vers[,2])
thesedeps[has_version] <- vers[,1]
    correct_vers[has_version] <- sapply(vers$X1, function(n){
      tryCatch({
        do.call(vers$op[vers$X1 == n],
                list(x = packageVersion(n), y = vers[vers$X1 == n, 2]))
      }, error = function(e){ FALSE })
    })
}
is_avlb <- thesedeps %in% available$Package
if (all(is_avlb & correct_vers)) {
out <- list(pass = list(dependencies = TRUE), errors = list(dependencies = ""))
} else {
errors <- thesedeps[which(!(is_avlb & correct_vers))]
out <- list(
pass = list(dependencies = FALSE),
errors = list(
dependencies = paste0(
"The following packages are not installed (or their correct versions are not installed), run install.packages() for: ",
paste0(errors, collapse = ", ")
)
)
)
}
class(out) <- c("worcs_check", class(out))
return(out)
}
#' @rdname check_worcs_installation
#' @export
check_git <- function() {
pass <- list()
errors <- list()
# Check command line git
pass[["git_cmd"]] <-
system2("git",
"--version",
stdout = tempfile(),
stderr = tempfile()) == 0L
if (!pass[["git_cmd"]])
errors[["git_cmd"]] <-
"Could not execute Git on the command line; please reinstall from https://git-scm.com/"
# Check user
pass[["git_user"]] <- gert::user_is_configured()
if (!pass[["git_user"]])
errors[["git_user"]] <-
"No user configured; please run worcs::git_user(yourname, youremail, overwrite = TRUE)"
# Check libgit for SSH
libgit <- gert::libgit2_config()
pass[["libgit2"]] <- !is.null(libgit[["version"]])
if (pass[["libgit2"]]) {
pass[["libgit2"]] <- libgit$ssh
}
if (!pass[["libgit2"]]) {
errors[["libgit2"]] <-
"libgit2 is not properly installed and you may not be able to use the SSH protocol to connect to Git remote repositories."
}
# Check that one can create a repo
the_test <- "git"
dir_name <- file.path(tempdir(), the_test)
if (dir.exists(dir_name))
unlink(dir_name, recursive = TRUE, force = TRUE)
dir.create(dir_name)
pass[["git_init"]] <-
!inherits(try({
gert::git_init(dir_name)
}, silent = TRUE)
, "try-error")
if (!pass[["git_init"]]) {
errors[["git_init"]] <-
"Package gert could not initialize a Git repository."
} else {
# More tests
writeLines("test git", con = file.path(dir_name, "tmp.txt"))
tmp <- gert::git_add(".", repo = dir_name)
pass[["git_add"]] <- isTRUE(tmp$staged)
if (!pass[["git_add"]]) {
errors[["git_add"]] <-
"Package gert could not add files to Git repository."
} else {
# More tests
pass[["git_commit"]] <-
!inherits(try(gert::git_commit("First commit", repo = dir_name),
silent = TRUE)
, "try-error")
if (!pass[["git_commit"]]) {
errors[["git_commit"]] <-
"Package gert could not commit to Git repository."
}
}
}
if(isTRUE(all(unlist(pass)))){
pass <- list(git = TRUE)
errors <- list()
}
unlink(dir_name, recursive = TRUE, force = TRUE)
out <- list(pass = pass, errors = errors)
class(out) <- c("worcs_check", class(out))
return(out)
}
#' @rdname check_worcs_installation
#' @param pat Logical, whether to run tests for the existence and functioning of
#' a GitHub Personal Access Token (PAT). This is the preferred method of
#' authentication, so defaults to TRUE.
#' @param ssh Logical, whether to run tests for the existence and functioning of
#' an SSH key. This method of authentication is not recommended, so defaults to
#' FALSE.
#' @export
check_github <- function(pat = TRUE, ssh = FALSE) {
pass <- list()
errors <- list()
# Check if currently in a git repo with remote
  repo <- try(gert::git_remote_list(), silent = TRUE)
if(!inherits(repo, "try-error")){
if(isTRUE(grepl("^https://", repo$url))) pass[["current git repo has a remote that requires PAT authentication"]] <- TRUE
if(isTRUE(grepl("^git@", repo$url))) pass[["current git repo has a remote that requires SSH authentication"]] <- TRUE
}
if(pat){
pass[["github_pat"]] <- isFALSE(gh::gh_token() == "")
if (!pass[["github_pat"]])
errors[["github_pat"]] <-
"You have not set a Personal Access Token (PAT) for GitHub; to fix this, run usethis::create_github_token(), create a PAT and copy it, then run gitcreds::gitcreds_set() and paste the PAT when asked. If you still experience problems try usethis::gh_token_help() for help."
# github pat grants access
if(pass[["github_pat"]]){
result <- tryCatch(gh::gh("/user"), error = function(e)e)
pass[["github_pat_response"]] <- isTRUE(inherits(result, "gh_response"))
if (!pass[["github_pat_response"]]){
errors[["github_pat_response"]] <- "The Personal Access Token (PAT) in your Git credential store does not grant access to GitHub. To fix this, run usethis::create_github_token(), create a PAT and copy it, then run gitcreds::gitcreds_set() and paste the PAT when asked. If you still experience problems try usethis::gh_token_help() for help."
if(inherits(result, "http_error_401")){
errors[["github_pat_response"]] <- "The Personal Access Token (PAT) in your Git credential store does not grant access to GitHub. It may have expired. To fix this, run usethis::create_github_token(), create a PAT and copy it, then run gitcreds::gitcreds_set() and paste the PAT when asked. If you still experience problems try usethis::gh_token_help() for help."
}
if(inherits(result, "rlib_error") && grepl("one of these forms", result$message)){
errors[["github_pat_response"]] <- "The Personal Access Token (PAT) in your Git credential store has the wrong format. To fix this, run usethis::create_github_token(), create a PAT and copy it, then run gitcreds::gitcreds_set() and paste the PAT when asked. If you still experience problems try usethis::gh_token_help() for help."
}
}
}
}
if(ssh){
# Check SSH
sshres <- check_ssh()
pass <- c(pass, sshres$pass)
errors <- c(errors, sshres$errors)
# GitHub SSH
temp <- tempfile()
system2("ssh",
"-T [email protected]",
stdout = temp,
stderr = temp)
output <- readLines(temp)
pass[["github_ssh"]] <-
isTRUE(any(grepl("success", output, fixed = TRUE)))
# Maybe check if *any* type of authentication is possible
if (!pass[["github_ssh"]])
errors[["github_ssh"]] <-
"Could not authenticate GitHub via SSH, but that's OK. We recommend using a Personal Access Token (PAT). If you intend to use SSH with GitHub, consult https://happygitwithr.com/rstudio-git-github.html"
}
if(isTRUE(all(unlist(pass)))){
pass <- list(github = TRUE)
errors <- list()
}
out <- list(pass = pass, errors = errors)
class(out) <- c("worcs_check", class(out))
return(out)
}
#' @rdname check_worcs_installation
#' @export
check_ssh <- function() {
pass <- list()
errors <- list()
pass[["ssh"]] <-
isTRUE(!inherits(try(credentials::ssh_key_info(host = NULL, auto_keygen = FALSE)$key,
silent = TRUE)
, "try-error") &
!inherits(try(credentials::ssh_read_key(), silent = TRUE)
, "try-error"))
if (!pass[["ssh"]])
errors[["ssh"]] <-
"Could not find a valid SSH key, but that's OK. We recommend using a Personal Access Token (PAT). If you do wish to set up SSH, please consult https://happygitwithr.com/ssh-keys.html"
out <- list(pass = pass, errors = errors)
class(out) <- c("worcs_check", class(out))
return(out)
}
#' @rdname check_worcs_installation
#' @export
check_tinytext <- function() {
pass <- list()
errors <- list()
pass[["tinytex"]] <- !inherits(try({
tmpfl <- tempfile(fileext = ".tex")
writeLines(
c(
'\\documentclass{article}',
'\\begin{document}',
'Hello world!',
'\\end{document}'
),
tmpfl
)
tmp <- tinytex::pdflatex(tmpfl)
isTRUE(endsWith(tmp, ".pdf"))
}, silent = TRUE)
, "try-error")
if (!pass[["tinytex"]])
errors[["tinytex"]] <-
"tinytex could not render a pdf document and may need to be reinstalled; please turn to https://yihui.org/tinytex/"
out <- list(pass = pass, errors = errors)
class(out) <- c("worcs_check", class(out))
return(out)
}
#' @rdname check_worcs_installation
#' @export
check_rmarkdown <- function() {
pass <- list()
errors <- list()
pass[["rmarkdown_html"]] <- !inherits(try({
tmpinp <- tempfile(fileext = ".rmd")
tmpout <- tempfile(fileext = ".html")
writeLines(
c(
'---',
'title: "Untitled"',
'author: "test"',
'output: html_document',
'---'
),
tmpinp
)
tmp <-
rmarkdown::render(input = tmpinp,
output_file = tmpout,
quiet = TRUE)
isTRUE(endsWith(tmp, ".html"))
}, silent = TRUE)
, "try-error")
if (!pass[["rmarkdown_html"]])
errors[["rmarkdown_html"]] <-
"Rmarkdown could not render a HTML file."
pass[["rmarkdown_pdf"]] <- !inherits(try({
tmpinp <- tempfile(fileext = ".rmd")
tmpout <- tempfile(fileext = ".pdf")
writeLines(
c(
'---',
'title: "Untitled"',
'author: "test"',
'output: pdf_document',
'---'
),
tmpinp
)
tmp <-
rmarkdown::render(input = tmpinp,
output_file = tmpout,
quiet = TRUE)
isTRUE(endsWith(tmp, ".pdf"))
}, silent = TRUE)
, "try-error")
if (!pass[["rmarkdown_pdf"]])
errors[["rmarkdown_pdf"]] <-
"Rmarkdown could not render a PDF file."
if(isTRUE(all(unlist(pass)))){
pass <- list(rmarkdown = TRUE)
errors <- list()
}
out <- list(pass = pass, errors = errors)
class(out) <- c("worcs_check", class(out))
return(out)
}
#' @rdname check_worcs_installation
#' @export
check_renv <- function() {
pass <- list()
errors <- list()
  pass[["renv_consent"]] <- !inherits(try({
    sink(tempfile())
    # Capture consent() in an inner try() so the sink is always reset
    tmp <- try(renv::consent(), silent = TRUE)
    sink()
    if (!isTRUE(tmp))
      stop()
  }, silent = TRUE)
  , "try-error")
if (!pass[["renv_consent"]])
errors[["renv_consent"]] <-
"renv does not have consent yet; run renv::consent(provided = TRUE)"
out <- list(pass = pass, errors = errors)
class(out) <- c("worcs_check", class(out))
return(out)
}
# Show results ------------------------------------------------------------
#' @method print worcs_check
#' @export
print.worcs_check <- function(x, ...){
pass <- unlist(x$pass)
for (n in names(pass)) {
if (pass[n]) {
col_message(n, success = TRUE)
} else {
col_message(paste0(n, ": ", x$errors[[n]]), success = FALSE)
}
}
}
get_deps <- function(package = "worcs") {
pks <- packageDescription(package)
if (isTRUE(is.na(pks)))
return(vector("character"))
pks <- gsub("\n", "", pks$Imports, fixed = TRUE)
pks <- gsub("\\s", "", pks)
pks <- strsplit(pks, ",")[[1]]
setdiff(
pks,
c(
"R",
"stats",
"graphics",
"grDevices",
"utils",
"datasets",
"methods",
"base",
"tools"
)
)
}
# /scratch/gouwar.j/cran-all/cranData/worcs/R/check_installation.R
#' Comprehensive citation Knit function for 'RStudio'
#'
#' This is a wrapper for \code{\link[rmarkdown]{render}}. First, this function
#' parses the citations in the document, converting citations
#' marked with double at sign, e.g.: \code{@@@@reference2020}, into normal
#' citations, e.g.: \code{@@reference2020}. Then, it renders the file.
#' @param ... All arguments are passed to \code{\link[rmarkdown]{render}}.
#' @export
#' @return Returns \code{NULL} invisibly. This
#' function is called for its side effect of rendering an
#' 'R Markdown' file.
#' @examples
#' # NOTE: Do not use this function interactively, as in the example below.
#' # Only specify it as custom knit function in an 'R Markdown' file, like so:
#' # knit: worcs::cite_all
#'
#' if (rmarkdown::pandoc_available("2.0")){
#' file_name <- file.path(tempdir(), "citeall.Rmd")
#' loc <- rmarkdown::draft(file_name,
#' template = "github_document",
#' package = "rmarkdown",
#' create_dir = FALSE,
#' edit = FALSE)
#' write(c("", "Optional reference: @@reference2020"),
#' file = file_name, append = TRUE)
#' cite_all(file_name)
#' }
cite_all <- function(...){
comprehensive_cite(..., citeall = TRUE)
}
#' Essential citations Knit function for 'RStudio'
#'
#' This is a wrapper for \code{\link[rmarkdown]{render}}. First, this function
#' parses the citations in the document, removing citations
#' marked with double at sign, e.g.: \code{@@@@reference2020}. Then, it renders
#' the file.
#' @param ... All arguments are passed to \code{\link[rmarkdown]{render}}.
#' @export
#' @return Returns \code{NULL} invisibly. This
#' function is called for its side effect of rendering an
#' 'R Markdown' file.
#' @examples
#' # NOTE: Do not use this function interactively, as in the example below.
#' # Only specify it as custom knit function in an R Markdown file, like so:
#' # knit: worcs::cite_all
#'
#' if (rmarkdown::pandoc_available("2.0")){
#' file_name <- tempfile("citeessential", fileext = ".Rmd")
#' rmarkdown::draft(file_name,
#' template = "github_document",
#' package = "rmarkdown",
#' create_dir = FALSE,
#' edit = FALSE)
#' write(c("", "Optional reference: @@reference2020"),
#' file = file_name, append = TRUE)
#' cite_essential(file_name)
#' }
cite_essential <- function(...){
comprehensive_cite(..., citeall = FALSE)
}
#' @importFrom rmarkdown render pandoc_available
comprehensive_cite <- function(input, encoding = "UTF-8", ..., citeall = TRUE) {
if(!pandoc_available("2.0")){
message("Using rmarkdown requires pandoc version >= 2.0.")
return(invisible(NULL))
}
dots <- list(...)
dots$encoding <- encoding
dots$input <- input
doc_text <- readLines(input, encoding = encoding)
if(citeall){
write_as_utf(.nonessential_to_normal(doc_text), input)
} else {
write_as_utf(.remove_nonessential(doc_text), input)
}
do.call(render, dots)
write_as_utf(doc_text, input) # reset file to original state
invisible(NULL)
}
.nonessential_to_normal <- function(text){
text <- gsub("(?<!`)@@", "@", text, perl = TRUE)
text <- gsub("\\@\\@", "@@", text, fixed = TRUE)
text
}
.remove_nonessential <- function(text){
  out <- paste0(c(text, " onzin"), collapse = "\n") # append a 6-character sentinel, removed by substr() below
out <- as.list(strsplit(out, "[", fixed = TRUE)[[1]])
out <- lapply(out, function(x){
if(grepl("@@", x, fixed = TRUE)){
ref_sec <- unlist(strsplit(x, split = "]", fixed = TRUE))
if(length(ref_sec) > 1){
if(grepl("@@", ref_sec[1], fixed = TRUE)){
each_ref_sec <- unlist(strsplit(ref_sec[1], split = ";"))
ref_sec[1] <- paste0(each_ref_sec[!grepl("@@", each_ref_sec, fixed = TRUE)], collapse = ";")
if(ref_sec[1] == "") ref_sec[1] <- "XXXXXDELETEMEXXXXX"
}
if(grepl("@@", ref_sec[2], fixed = TRUE)){
ref_sec[2] <- gsub("\\s{0,1}-?@@.+?\\b\\s{0,1}", " ", ref_sec[2])
}
} else {
ref_sec[1] <- gsub("\\s{0,1}-?@@.+?\\b\\s{0,1}", " ", ref_sec[1])
}
paste0(ref_sec, collapse = "]")
} else {
x
}
})
out <- paste0(unlist(out), collapse = "[")
out <- gsub("\\s{0,1}\\[XXXXXDELETEMEXXXXX\\]", "", out) # The \\s might cause trouble
substr(out, 1, nchar(out)-6)
}
# Extract citations
#
# This function extracts all citations from a character vector.
# @param x Character vector, defaults to
# \code{readLines("manuscript/manuscript.Rmd")}.
# @param split Character vector to use for splitting, passed to
# \code{\link{strsplit}}.
# @param ... Additional arguments are passed to \code{\link{strsplit}}.
#@export
# @return Character vector.
# @examples
# extract_citations("This is just an example [@extract_cites; @works].")
extract_citations <- function(x = readLines("manuscript/manuscript.Rmd"),
split = "@+", ...){
cl <- match.call()
cl[[1L]] <- quote(strsplit)
cl[["x"]] <- gsub("\\w@", "", paste0(x, collapse = ""))
cl[["split"]] <- split
cites <- eval(cl, envir = environment())[[1]][-1]
cites <- gsub("^([a-zA-Z0-9-]+?)\\b.*$", "\\1", cites)
tabcit <- as.data.frame.table(table(cites))
  tabcit[order(tabcit$Freq, decreasing = TRUE), ]
}
string_citations <- function(x = readLines("manuscript/manuscript.Rmd"),
split = "@+", ...){
cl <- match.call()
cl[[1L]] <- quote(strsplit)
cl[["x"]] <- gsub("\\w@", "", paste0(x, collapse = ""))
cl[["split"]] <- split
cites <- eval(cl, envir = environment())[[1]][-1]
gsub("^([a-zA-Z0-9-]+?)\\b.*$", "\\1", cites)
}
# /scratch/gouwar.j/cran-all/cranData/worcs/R/citations.R
#' @title Create codebook for a dataset
#' @description Creates a codebook for a dataset in 'R Markdown' format, and
#' renders it to 'markdown' for 'GitHub'. A codebook contains metadata and
#' documentation for a data file.
#' We urge users to customize the automatically generated 'R Markdown'
#' document and re-knit it, for example, to add a paragraph with details on
#' the data collection procedures. The variable descriptives are stored in
#' a \code{.csv} file, which can be edited in 'R' or a spreadsheet program.
#' Columns can be appended, and we encourage users to complete at least the
#' following two columns in this file:
#' \describe{
#' \item{category}{Describe the type of variable in this column. For example:
#' "morality".}
#' \item{description}{Provide a plain-text description of the variable. For
#' example, the full text of a questionnaire item: "People should be willing to
#' do anything to help a member of their family".}
#' }
#' Re-knitting the 'R Markdown' file (using \code{\link[rmarkdown]{render}}) will
#' transfer these changes to the 'markdown' file for 'GitHub'.
#' @param data A data.frame for which to create a codebook.
#' @param render_file Logical. Whether or not to render the document.
#' @param filename Character. File name to write the codebook \code{rmarkdown}
#' file to.
#' @param csv_file Character. File name to write the codebook variable
#' descriptives to as \code{.csv}. By default, uses the filename stem of the
#' \code{filename} argument.
#' Set to \code{NULL} to write the codebook only to the 'R Markdown' file, and
#' not to \code{.csv}.
#' @param verbose Logical. Whether or not to print status messages to
#' the console. Default: TRUE
# @param worcs_directory Character, indicating the WORCS project directory from
# which to load data. The default value \code{"."} points to the current
# directory.
#' @return \code{Logical}, indicating whether or not the operation was
#' successful. This function is mostly called for its side effect of rendering
#' an 'R Markdown' codebook.
#' @examples
#' if(rmarkdown::pandoc_available("2.0")){
#' library(rmarkdown)
#' library(knitr)
#' filename <- tempfile("codebook", fileext = ".Rmd")
#' make_codebook(iris, filename = filename, csv_file = NULL)
#' unlink(c(
#' ".worcs",
#' filename,
#' gsub("\\.Rmd", "\\.md", filename),
#' gsub("\\.Rmd", "\\.html", filename),
#' gsub("\\.Rmd", "_files", filename)
#' ), recursive = TRUE)
#' }
#' @rdname codebook
#' @export
#' @importFrom rmarkdown draft render
#' @importFrom stats median var
#' @importFrom utils capture.output
make_codebook <-
function(data,
filename = "codebook.Rmd",
render_file = TRUE,
csv_file = gsub("rmd$", "csv", filename, ignore.case = TRUE),
verbose = TRUE
#, worcs_directory = "."
) {
# dn_worcs <- tryCatch(dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs"))), error = function(e){dirname(filename)})
filename <- force(filename)
function_success <- TRUE
summaries <- do.call(descriptives, list(x = data))
summaries <- cbind(summaries,
category = NA,
description = NA)
if (file.exists(filename)) {
col_message(paste0("Removing previous version of '", filename, "'."), verbose = verbose)
invisible(file.remove(filename))
}
draft(
filename,
template = "github_document",
package = "rmarkdown",
create_dir = FALSE,
edit = FALSE
)
file_contents <- readLines(filename, encoding = "UTF-8")
file_contents[grep("^title:", file_contents)[1]] <-
paste0('title: "Codebook created on ',
Sys.Date(),
' at ',
Sys.time(),
'"')
file_contents[grep("^knitr::opts", file_contents)[1]] <-
"knitr::opts_chunk$set(echo = FALSE, results = 'asis')"
file_contents <-
file_contents[1:(grep("^##", file_contents)[1] - 1)]
dm <- dim(data)
# checksum <- checksum_data_as_csv(data)
if (is.null(csv_file)) {
sum_tab <-
paste0(c("summaries <- ", capture.output(dput(summaries))))
#write_worcsfile(".worcs",
# codebook = list(rmd_file = filename, checksum = checksum))
} else {
if (file.exists(csv_file)) {
col_message(paste0("Removing previous version of '", csv_file, "'."), verbose = verbose)
invisible(file.remove(csv_file))
}
write.csv(x = summaries, file = csv_file, row.names = FALSE)
sum_tab <- c(paste0('summaries <- read.csv("', basename(csv_file), '", stringsAsFactors = FALSE)'),
"summaries <- summaries[, !colSums(is.na(summaries)) == nrow(summaries)]"
)
#write_worcsfile(".worcs",
# codebook = list(
# rmd_file = filename,
# csv_file = csv_file,
# checksum = checksum
# ))
}
    function_success <- function_success & tryCatch({
write(
c(
file_contents,
"",
"A codebook contains documentation and metadata describing the contents, structure, and layout of a data file.",
"",
"## Dataset description",
paste0("The data contains ", dm[1], " cases and ", dm[2], " variables."),
"",
"## Codebook",
"",
"```{r}",
sum_tab,
"options(knitr.kable.NA = '')",
"knitr::kable(summaries, row.names = FALSE, digits = 2)",
"```",
"",
"### Legend",
"",
"* __Name__: Variable name",
"* __type__: Data type of the variable",
"* __missing__: Proportion of missing values for this variable",
"* __unique__: Number of unique values",
"* __mean__: Mean value",
"* __median__: Median value",
"* __mode__: Most common value (for categorical variables, this shows the frequency of the most common category)",
"* **mode_value**: For categorical variables, the value of the most common category",
        "* __sd__: Standard deviation (measure of dispersion for numerical variables)",
"* __v__: Agresti's V (measure of dispersion for categorical variables)",
"* __min__: Minimum value",
"* __max__: Maximum value",
"* __range__: Range between minimum and maximum value",
"* __skew__: Skewness of the variable",
"* __skew_2se__: Skewness of the variable divided by 2*SE of the skewness. If this is greater than abs(1), skewness is significant",
"* __kurt__: Kurtosis (peakedness) of the variable",
"* __kurt_2se__: Kurtosis of the variable divided by 2*SE of the kurtosis. If this is greater than abs(1), kurtosis is significant.",
"",
"This codebook was generated using the [Workflow for Open Reproducible Code in Science (WORCS)](https://osf.io/zcvbs/)"
),
filename
)
TRUE
}, error = function(e) {
return(FALSE)
})
if (render_file) {
      function_success <- function_success & tryCatch({
render(filename, quiet = TRUE)
TRUE
}, error = function(e) {
return(FALSE)
})
}
return(function_success)
}
# /scratch/gouwar.j/cran-all/cranData/worcs/R/codebook.R
#' @title Describe a dataset
#' @description Provide descriptive statistics for a dataset.
#' @param x An object for which a method exists.
#' @param ... Additional arguments.
#' @return A \code{data.frame} with descriptive statistics for \code{x}.
#' @examples
#' descriptives(iris)
#' @rdname descriptives
#' @export
#' @importFrom stats median sd
descriptives <- function(x, ...) {
UseMethod("descriptives", x)
}
#' @method descriptives matrix
#' @export
descriptives.matrix <- function(x, ...) {
Args <- as.list(match.call()[-1])
Args$x <- data.frame(x)
do.call(descriptives, Args)
}
#' @method descriptives data.frame
#' @export
descriptives.data.frame <- function(x, ...) {
data_types <-
sapply(x, function(i) {
paste0(class(i), collapse = ", ")
})
out <- lapply(x, descriptives)
all_names <-
c(
"n",
"missing",
"unique",
"mean",
"median",
"mode",
"mode_value",
"sd",
"v",
"min",
"max",
"range",
"skew",
"skew_2se",
"kurt",
"kurt_2se"
)
out <-
do.call(rbind, c(lapply(out, function(x)
data.frame(c(
x, sapply(setdiff(all_names, names(x)),
function(y)
NA)
))),
make.row.names = FALSE))
out <- out[, all_names]
out <- cbind(name = names(x),
type = data_types,
out)
rownames(out) <- NULL
out
}
#' @method descriptives numeric
#' @export
descriptives.numeric <- function(x, ...) {
rng <- range(x, na.rm = TRUE)
sk <- skew_kurtosis(x)
cbind(
data.frame(
n = sum(!is.na(x)),
missing = sum(is.na(x))/length(x),
unique = length(unique(x)),
mean = mean(x, na.rm = TRUE),
median = median(x, na.rm = TRUE),
mode = median(x, na.rm = TRUE),
sd = sd(x, na.rm = TRUE),
min = rng[1],
max = rng[2],
range = diff(rng)
),
t(sk)
)
}
#' @method descriptives integer
#' @export
descriptives.integer <- descriptives.numeric
#' @method descriptives default
#' @export
descriptives.default <- function(x, ...) {
if(is.factor(x)) x <- droplevels(x)
if(!is.vector(x)) x <- tryCatch(as.vector(x), error = function(e){NA})
tb <- tryCatch(table(x, useNA = "always"), error = function(e){NA})
data.frame(
n = tryCatch({sum(!is.na(x))}, error = function(e){NA}),
missing = sum(is.na(x))/length(x),
unique = tryCatch(length(tb), error = function(e){NA}),
mode = tryCatch({
unname(tb[which.max(tb)])
}, error = function(e){NA}),
mode_value = tryCatch(names(tb)[which.max(tb)], error = function(e){NA}),
v = tryCatch(var_cat(x), error = function(e){NA})
)
}
#' @method descriptives factor
#' @export
descriptives.factor <- descriptives.default
# Agresti's V for categorical data variability
# Agresti, Alan (1990). Categorical Data Analysis. John Wiley and Sons, Inc. 24-25
var_cat <- function(x) {
x <- x[!is.na(x)]
if (!length(x))
return(NA)
p <- prop.table(table(x))
#-1 * sum(p*log(p)) Shannon entropy
1 - sum(p ^ 2)
}
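As a worked illustration of the quantity computed above (one minus the sum of squared category proportions; the example vector is invented):

```r
x <- c("a", "a", "b", "c")
p <- prop.table(table(x))  # proportions: 0.50, 0.25, 0.25
1 - sum(p^2)               # 1 - (0.25 + 0.0625 + 0.0625) = 0.625
var_cat(x)                 # same value via the helper; 0 means no variability
```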
#' @title Calculate skew and kurtosis
#' @description Calculate skew and kurtosis, standard errors for both, and the
#' estimates divided by two times the standard error. If this latter quantity
#' exceeds an absolute value of 1, the skew/kurtosis is significant. With very
#' large sample sizes, significant skew/kurtosis is common.
#' @param x An object for which a method exists.
#' @param verbose Logical. Whether or not to print messages to the console,
#' Default: FALSE
#' @param se Whether or not to return the standard errors, Default: FALSE
#' @param ... Additional arguments to pass to and from functions.
#' @return A \code{matrix} of skew and kurtosis statistics for \code{x}.
#' @examples
#' skew_kurtosis(datasets::anscombe)
#' @rdname skew_kurtosis
#' @export
skew_kurtosis <- function(x, verbose = FALSE, se = FALSE, ...) {
UseMethod("skew_kurtosis", x)
}
#' @method skew_kurtosis data.frame
#' @export
skew_kurtosis.data.frame <-
function(x, verbose = FALSE, se = FALSE, ...) {
t(sapply(x, skew_kurtosis))
}
#' @method skew_kurtosis matrix
#' @export
skew_kurtosis.matrix <-
function(x, verbose = FALSE, se = FALSE, ...) {
t(apply(x, 2, skew_kurtosis))
}
#' @method skew_kurtosis numeric
#' @export
skew_kurtosis.numeric <-
function(x, verbose = FALSE, se = FALSE, ...) {
x <- x[!is.na(x)]
n <- length(x)
out <- tryCatch({
if (n > 3) {
if (n > 5000 &
verbose)
message("Sample size > 5000; skew and kurtosis will likely be significant.")
skew <- sum((x - mean(x)) ^ 3) / (n * sqrt(var(x)) ^ 3)
skew_se <- sqrt(6 * n * (n - 1) / (n - 2) / (n + 1) / (n + 3))
skew_2se <- skew / (2 * skew_se)
kurt <- sum((x - mean(x)) ^ 4) / (n * var(x) ^ 2) - 3
kurt_se <- sqrt(24 * n * ((n - 1) ^ 2) / (n - 3) / (n - 2) / (n + 3) /
(n + 5))
kurt_2se <- kurt / (2 * kurt_se)
c(skew,
skew_se,
skew_2se,
kurt,
kurt_se,
kurt_2se
)
} else {
stop()
}
}, error = function(e){ rep(NA, 6) })
names(out) <-
c("skew", "skew_se", "skew_2se", "kurt", "kurt_se", "kurt_2se")
if (se) {
return(out)
} else {
return(out[c(1, 3, 4, 6)])
}
}
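For instance (illustration only), a large sample from a symmetric distribution should yield skew and excess kurtosis near zero, with `skew_2se` and `kurt_2se` typically below 1 in absolute value:

```r
set.seed(1)
skew_kurtosis(rnorm(1000))
# With the default se = FALSE, returns the named elements
# skew, skew_2se, kurt, kurt_2se.
```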
#' @method skew_kurtosis default
#' @export
skew_kurtosis.default <-
function(x, verbose = FALSE, se = FALSE, ...) {
out <- rep(NA, 6)
names(out) <-
c("skew", "skew_se", "skew_2se", "kurt", "kurt_se", "kurt_2se")
if (se) {
return(out)
} else {
return(out[c(1, 3, 4, 6)])
}
}
#' @importFrom usethis ui_oops ui_done
col_message <- function (..., col = 30, success = TRUE, verbose = TRUE){
if(verbose){
txt <- do.call(paste0, list(...))
# Check if this function is called from within an rmarkdown document.
# If that is the case, the colorized messages can cause knitting errors.
if(!any(grepl("rmarkdown", unlist(lapply(sys.calls(), `[[`, 1)), fixed = TRUE))){
if(success){
usethis::ui_done(txt)
} else {
usethis::ui_oops(txt)
}
}
}
}
# /scratch/gouwar.j/cran-all/cranData/worcs/R/descriptives.R
#' @title Add endpoint to WORCS project
#' @description Add a specific endpoint to the WORCS project file. Endpoints are
#' files that are expected to be exactly reproducible (e.g., a manuscript,
#' figure, table, et cetera). Reproducibility is checked by ensuring the
#' endpoint's checksum is unchanged.
#' @param filename Character, indicating the file to be tracked as endpoint.
#' Default: NULL.
#' @param worcs_directory Character, indicating the WORCS project directory.
#' The default value "." points to the current directory.
#' Default: '.'
#' @param verbose Logical. Whether or not to print status messages to the
#' console. Default: TRUE
#' @param ... Additional arguments.
#' @return No return value. This function is called for its side effects.
#' @examples
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "add_endpoint")
#' dir.create(test_dir)
#' setwd(test_dir)
#' file.create(".worcs")
#' writeLines("test", "test.txt")
#' add_endpoint("test.txt")
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @rdname add_endpoint
#' @seealso
#' \code{\link[worcs]{snapshot_endpoints}}
#' \code{\link[worcs]{check_endpoints}}
#' @export
#' @importFrom yaml read_yaml
add_endpoint <- function(filename = NULL, worcs_directory = ".", verbose = TRUE, ...){
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory),
".worcs")))
fn_worcs <- file.path(dn_worcs, ".worcs")
worcsfile <- yaml::read_yaml(fn_worcs)
endpoints <- worcsfile[["endpoints"]]
# if(is.null(entry_point)){
# if(is.null(worcsfile[["entry_point"]])){
# stop("No 'entry_point' specified, and the project contains no existing entry points.")
# } else {
# if(length(worcsfile[["entry_point"]]) > 1){
# stop("No 'entry_point' specified, and the project contains multiple entry points. Specify one of the following: ", paste0("'", names(worcsfile[["entry_point"]])))
# }
# }
#
# }
fn_endpoint <- path_abs_worcs(filename, dn_worcs)
if(!file.exists(fn_endpoint)){
stop("The file does not exist: ", filename)
}
endpoints <- append(endpoints, filename)
endpoints <- unique(endpoints)
# Append worcsfile
out <- try({
write_worcsfile(filename = fn_worcs, endpoints = endpoints, modify = TRUE)
store_checksum(fn_endpoint, entry_name = filename, worcsfile = fn_worcs)
})
if(inherits(out, "try-error")){
col_message("Could not add endpoint '", filename, "' to '.worcs'.",
verbose = verbose, success = FALSE)
} else {
col_message("Adding endpoint '", filename, "' to '.worcs'.",
verbose = verbose)
}
}
#' @title Snapshot endpoints in WORCS project
#' @description Update the checksums of all endpoints in a WORCS project.
#' @param worcs_directory Character, indicating the WORCS project directory.
#' The default value "." points to the current directory.
#' Default: '.'
#' @param verbose Logical. Whether or not to print status messages to the
#' console. Default: TRUE
#' @param ... Additional arguments.
#' @return No return value. This function is called for its side effects.
#' @examples
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "update_endpoint")
#' dir.create(test_dir)
#' setwd(test_dir)
#' file.create(".worcs")
#' writeLines("test", "test.txt")
#' add_endpoint("test.txt")
#' writeLines("second test", "test.txt")
#' snapshot_endpoints()
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @seealso
#' \code{\link[worcs]{add_endpoint}}
#' \code{\link[worcs]{check_endpoints}}
#' @export
snapshot_endpoints <- function(worcs_directory = ".", verbose = TRUE, ...){
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory),
".worcs")))
fn_worcs <- file.path(dn_worcs, ".worcs")
worcsfile <- yaml::read_yaml(fn_worcs)
if(is.null(worcsfile[["endpoints"]])){
col_message("No endpoints found in WORCS project.", verbose = verbose, success = FALSE)
return(invisible(NULL))
}
endpoints <- worcsfile[["endpoints"]]
for(ep in endpoints){
out <- try({
fn_endpoint <- path_abs_worcs(ep, dn_worcs)
store_checksum(fn_endpoint, entry_name = ep, worcsfile = fn_worcs)
})
if(inherits(out, "try-error")){
col_message("Could not snapshot endpoint '", ep, "'.",
verbose = verbose, success = FALSE)
} else {
col_message("Updated snapshot of endpoint '", ep, "'.",
verbose = verbose)
}
}
}
#' @title Check endpoints in WORCS project
#' @description Check that the checksums of all endpoints in a WORCS project
#' match their snapshots.
#' @param worcs_directory Character, indicating the WORCS project directory.
#' The default value "." points to the current directory.
#' Default: '.'
#' @param verbose Logical. Whether or not to print status messages to the
#' console. Default: TRUE
#' @param ... Additional arguments.
#' @return Returns a logical value (TRUE/FALSE) invisibly.
#' @examples
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "check_endpoint")
#' dir.create(test_dir)
#' setwd(test_dir)
#' file.create(".worcs")
#' writeLines("test", "test.txt")
#' add_endpoint("test.txt")
#' check_endpoints()
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @seealso
#' \code{\link[worcs]{add_endpoint}}
#' \code{\link[worcs]{snapshot_endpoints}}
#' @export
check_endpoints <- function(worcs_directory = ".", verbose = TRUE, ...){
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory),
".worcs")))
fn_worcs <- file.path(dn_worcs, ".worcs")
worcsfile <- yaml::read_yaml(fn_worcs)
if(is.null(worcsfile[["endpoints"]])){
if(interactive()){
col_message("No endpoints found in WORCS project.", verbose = verbose, success = FALSE)
} else {
stop("No endpoints found in WORCS project.")
}
}
endpoints <- worcsfile[["endpoints"]]
replicates <- rep(x = TRUE, times = length(endpoints))
for(i in seq_along(endpoints)){
ep <- endpoints[i]
out <- try({
#fn_endpoint <- path_abs_worcs(ep, dn_worcs)
# Use absolute file path here
check_sum(file.path(dn_worcs, ep), old_cs = worcsfile[["checksums"]][[ep]], worcsfile = fn_worcs, error = TRUE)
}, silent = TRUE)
if(inherits(out, "try-error")){
col_message("Endpoint '", ep, "' did not replicate.",
verbose = verbose, success = FALSE)
replicates[i] <- FALSE
} else {
col_message("Endpoint '", ep, "' replicates.",
verbose = verbose)
}
}
if(!interactive()){
if(any(!replicates)){
# git_record <- system2("git", paste0('-C "', dirname(fn_worcs), '" ls-files --eol'), stdout = TRUE)
# git_record <- git_record[grepl(endpoints[1], git_record, fixed = TRUE)]
# stop("Endpoints ", paste0(endpoints[which(!replicates)], collapse = ", "), " did not replicate. Checksum of record: ", worcsfile[["checksums"]][[endpoints[1]]], ", local checksum: ", cs_fun(ep, fn_worcs), ", git ls: ", git_record)
stop("Endpoints ", paste0(endpoints[which(!replicates)], collapse = ", "), " did not replicate. Make sure that the endpoint snapshot and renv are up to date, and verify that differences are not due to Git changing the line endings of text files.")
}
}
return(invisible(all(replicates)))
}
#' @title Set up GitHub Actions to Check Endpoints
#' @description Sets up a GitHub Action to perform continuous integration (CI)
#' for a WORCS project. CI automatically evaluates `check_endpoints()`
#' at each push or pull request.
#' @return No return value. This function is called for its side effects.
#' @seealso
#' \code{\link[usethis]{use_github_action}}
#' \code{\link[worcs]{add_endpoint}}
#' \code{\link[worcs]{check_endpoints}}
#' @export
#' @importFrom usethis use_github_action
github_action_check_endpoints <- function(){
usethis::use_github_action(url = "https://github.com/cjvanlissa/actions/blob/main/worcs_endpoints.yaml", badge = TRUE)
}
#' @title Set up GitHub Action to Reproduce WORCS Project
#' @description Sets up a GitHub Action to perform continuous integration (CI)
#' for a WORCS project. CI automatically evaluates `reproduce()` and
#' `check_endpoints()` at each push or pull request.
#' @return No return value. This function is called for its side effects.
#' @seealso
#' \code{\link[usethis]{use_github_action}}
#' \code{\link[worcs]{add_endpoint}}
#' \code{\link[worcs]{check_endpoints}}
#' \code{\link[worcs]{github_action_check_endpoints}}
#' @export
#' @importFrom usethis use_github_action
github_action_reproduce <- function(){
usethis::use_github_action(url = "https://github.com/cjvanlissa/actions/blob/main/worcs_reproduce.yaml", badge = TRUE)
}
|
/scratch/gouwar.j/cran-all/cranData/worcs/R/endpoint.R
|
#' @title Export project to .zip file
#' @param zipfile Character. Path to a \code{.zip} file that is to be created.
#' The default argument \code{NULL} creates a \code{.zip} file in the directory
#' one level above the 'worcs' project directory. By default, all files tracked
#' by 'Git' are included in the \code{.zip} file, excluding 'data.csv' if
#' \code{open_data = FALSE}.
#' @param worcs_directory Character. Path to the WORCS project directory to
#' export. Defaults to \code{"."}, which refers to the current working
#' directory.
#' @param open_data Logical. Whether or not to include the original data,
#' 'data.csv', if this file exists. If \code{open_data = FALSE} and an open
#' data file does exist, then it is excluded from the \code{.zip} file. If it
#' does not yet exist, a synthetic data set is generated and added to the
#' \code{.zip} file.
#' @return Logical, indicating the success of the operation. This function is
#' called for its side effect of creating a \code{.zip} file.
#' @examples
#' export_project(worcs_directory = tempdir())
#' @importFrom utils tail zip
#' @importFrom gert git_ls
#' @export
export_project <- function(zipfile = NULL, worcs_directory = ".", open_data = TRUE)
{
# get properties about the project and paths
#worcs_directory <- normalizePath(worcs_directory)
worcsfile <- tryCatch(read_yaml(file.path(worcs_directory, ".worcs")), error = function(e){
col_message("No '.worcs' file found; not a WORCS project, or the working directory has been changed.", success = FALSE)
FALSE
})
if(isFALSE(worcsfile)) return(invisible(FALSE))
zip_these <- tryCatch({
git_ls(repo = worcs_directory)$path
}, error = function(e){
col_message("Could not find 'Git' repository.", success = FALSE)
FALSE
})
if(isFALSE(zip_these)) return(invisible(FALSE))
project_folder <- basename(normalizePath(worcs_directory))
# if no zipfile is given, export to a zip file with
# the name of the project folder
if(is.null(zipfile)) {
zipfile <- tryCatch(file.path(dirname(normalizePath(worcs_directory)), paste0(project_folder, ".zip")), error = function(e) NULL)
}
if(!is.character(zipfile)){
col_message("Could not create zipfile.", success = FALSE)
return(invisible(FALSE))
}
if (file.exists(zipfile)) {
stop("Could not write to '", zipfile, "' because the file already exists.")
}
# Use this to decide which files to ZIP, but always add data.csv
# if the user specifies open_data = TRUE
if(!is.null(worcsfile[["data"]])){
data_original <- names(worcsfile[["data"]])
data_synthetic <- unlist(lapply(data_original, function(i){worcsfile[["data"]][[i]][["synthetic"]]}))
if(!open_data){
# Drop the original data files from the list of files to zip
for(this_file in data_original){
endsw <- endsWith(x = zip_these, suffix = this_file)
if(any(endsw)){
zip_these <- zip_these[-which(endsw)]
}
}
data_original <- vector("character")
}
zip_these <- unique(c(zip_these, data_original, data_synthetic))
}
oldwd <- getwd()
setwd(worcs_directory)
on.exit(setwd(oldwd))
outcome <- zip(zipfile = zipfile, files = zip_these, flags="-rq")
setwd(oldwd)
if(outcome != 0){
return(invisible(FALSE))
} else {
return(invisible(TRUE))
}
}
|
/scratch/gouwar.j/cran-all/cranData/worcs/R/export.R
|
#' @title Modify .gitignore file
#' @description Arguments passed through \code{...} are added to the .gitignore
#' file. Elements already present in the file are modified.
#' When \code{ignore = TRUE}, the arguments are added to the .gitignore file,
#' which will cause 'Git' to not track them.
#'
#' When \code{ignore = FALSE}, the arguments are prepended with \code{!},
#' This works as a "double negation", and will cause 'Git' to track the files.
#' @param ... Any number of character arguments, representing files to be added
#' to the .gitignore file.
#' @param ignore Logical. Whether or not 'Git' should ignore these files.
#' @param repo a path to an existing repository, or a git_repository object as
#' returned by git_open, git_init or git_clone.
#' @return No return value. This function is called for its side effects.
#' @rdname git_ignore
#' @examples
#' dir.create(".git")
#' git_ignore("ignorethis.file")
#' unlink(".git", recursive = TRUE)
#' file.remove(".gitignore")
#' @export
git_ignore <- function(..., ignore = TRUE, repo = "."){
ab_path <- normalizePath(repo)
if(!dir.exists(file.path(ab_path, ".git"))){
stop("No valid Git repository exists at ", normalizePath(file.path(ab_path, ".git")), call. = FALSE)
}
dots <- unlist(list(...))
path_gitig <- file.path(ab_path, ".gitignore")
cl <- match.call()
cl[[1L]] <- str2lang("worcs:::write_gitig")
cl[["filename"]] <- path_gitig
cl[c("ignore", "repo")] <- NULL
cl[["modify"]] <- file.exists(path_gitig)
if(!ignore){
ig_these <- names(cl) == "" & sapply(cl, class) == "character"
if(any(ig_these)){
cl[ig_these] <- lapply(cl[ig_these], function(x){ paste0("!", x) })
}
}
eval(cl, parent.frame())
}
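As an illustration of the "double negation" described above, a call such as `git_ignore("data.csv", ignore = FALSE)` prepends `!` to each entry, so the resulting `.gitignore` additions look like this (hypothetical filenames):

```
!data.csv
!synthetic_data.csv
```

'Git' interprets the `!` prefix as an exception to earlier ignore rules, which is what forces these files to be tracked.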
#' @importFrom gert libgit2_config git_config_global
has_git <- function(){
tryCatch({
config <- libgit2_config()
return(has_git_user() & (any(unlist(config[c("ssh", "https")]))))
}, error = function(e){
return(FALSE)
})
}
#' @title Set global 'Git' credentials
#' @description This function is a wrapper for
#' \code{\link[gert:git_config]{git_config_global_set}}.
#' It sets two name/value pairs at
#' once: \code{name = "user.name"} is set to the value of the \code{name}
#' argument, and \code{name = "user.email"} is set to the value of the
#' \code{email} argument.
#' @param name Character. The user name you want to use with 'Git'.
#' @param email Character. The email address you want to use with 'Git'.
#' @param overwrite Logical. Whether or not to overwrite existing 'Git'
#' credentials. Use this to prevent code from accidentally overwriting existing
#' 'Git' credentials. The default value uses \code{\link{has_git_user}}
#' to set overwrite to \code{FALSE} if user credentials already exist, and to
#' \code{TRUE} if no user credentials exist.
#' @param verbose Logical. Whether or not to print status messages to
#' the console. Default: TRUE
#' @return No return value. This function is called for its side effects.
#' @rdname git_user
#' @examples
#' do.call(git_user, worcs:::get_user())
#' @export
#' @importFrom gert git_config_global_set
git_user <- function(name, email, overwrite = !has_git_user(), verbose = TRUE){
if(overwrite){
invisible(
tryCatch({
do.call(git_config_global_set, list(
name = "user.name",
value = name
))
do.call(git_config_global_set, list(
name = "user.email",
value = email
))
col_message("'Git' username set to '", name, "' and email set to '", email, "'.", verbose = verbose)
}, error = function(e){col_message("Could not set 'Git' credentials.", success = FALSE)})
)
} else {
message("To set the 'Git' username and email, call 'git_user()' with the argument 'overwrite = TRUE'.")
}
}
#' @importFrom gert git_config_global
get_user <- function(){
Args <- list(
name = "yourname",
email = "[email protected]"
)
if(has_git_user()){
cf <- git_config_global()
Args$name <- cf$value[cf$name == "user.name"]
Args$email <- cf$value[cf$name == "user.email"]
}
return(Args)
}
#' @title Check whether global 'Git' credentials exist
#' @description Check whether the values \code{user.name} and \code{user.email}
#' exist in the 'Git' global configuration settings.
#' Uses \code{\link[gert:git_config]{git_config_global}}.
#' @return Logical, indicating whether 'Git' global configuration settings could
#' be retrieved, and contained the values
#' \code{user.name} and \code{user.email}.
#' @rdname has_git_user
#' @examples
#' has_git_user()
#' @export
#' @importFrom gert git_config_global
has_git_user <- function(){
tryCatch({
cf <- git_config_global()
if(!all(c("user.name", "user.email") %in% cf$name)){
stop()
} else {
return(TRUE)
}
}, error = function(e){
message("No 'Git' credentials found, returning name = 'yourname' and email = '[email protected]'.")
return(FALSE)
})
}
#' @title Add, commit, and push changes.
#' @description This function is a wrapper for
#' \code{\link[gert:git_add]{git_add}}, \code{\link[gert:git_commit]{git_commit}},
#' and
#' \code{\link[gert:git_push]{git_push}}. It adds all locally changed files to the
#' staging area of the local 'Git' repository, then commits these changes
#' (with an optional) \code{message}, and then pushes them to a remote
#' repository. This is used for making a "cloud backup" of local changes.
#' Do not use this function when working with privacy sensitive data,
#' or any other file that should not be pushed to a remote repository.
#' The \code{\link[gert:git_add]{git_add}} argument
#' \code{force} is disabled by default,
#' to avoid accidentally committing and pushing a file that is listed in
#' \code{.gitignore}.
#' @param remote name of a remote listed in git_remote_list()
#' @param refspec string with mapping between remote and local refs
#' @param password a string or a callback function to get passwords for authentication or password protected ssh keys. Defaults to askpass which checks getOption('askpass').
#' @param ssh_key path or object containing your ssh private key. By default we look for keys in ssh-agent and credentials::ssh_key_info.
#' @param verbose display some progress info while downloading
#' @param repo a path to an existing repository, or a git_repository object as returned by git_open, git_init or git_clone.
#' @param mirror use the --mirror flag
#' @param force use the --force flag
#' @param files vector of paths relative to the git root directory. Use "." to stage all changed files.
#' @param message a commit message
#' @param author A git_signature value, default is git_signature_default.
#' @param committer A git_signature value, default is same as author
#' @return No return value. This function is called for its side effects.
#' @examples
#' git_update()
#' @rdname git_update
#' @export
#' @importFrom gert git_config_global_set git_ls git_add git_commit git_push
git_update <- function(message = paste0("update ", Sys.time()),
files = ".",
repo = ".",
author,
committer,
remote,
refspec,
password,
ssh_key,
mirror,
force,
verbose = TRUE){
tryCatch({
git_ls(repo = repo)
col_message("Identified local 'Git' repository.", verbose = verbose)
}, error = function(e){
col_message("Not a 'Git' repository.", success = FALSE)
col_message("Could not add files to staging area of 'Git' repository.", success = FALSE)
col_message("Could not commit staged files to 'Git' repository.", success = FALSE)
col_message("Could not push local commits to remote repository.", success = FALSE)
return()
})
cl <- as.list(match.call()[-1])
for(this_arg in c("message", "files", "repo")){
if(is.null(cl[[this_arg]])){
cl[[this_arg]] <- formals()[[this_arg]]
}
}
#if(length(cl) > 0){
# Args[sapply(names(cl), function(i) which(i == names(Args)))] <- cl
#}
Args_add <- cl[names(cl) %in% c("files", "repo")]
Args_commit <- cl[names(cl) %in% c("message", "author", "committer", "repo")]
Args_push <- cl[names(cl) %in% c("remote", "refspec", "password", "ssh_key", "mirror", "force", "verbose", "repo")]
invisible(
tryCatch({
do.call(git_add, Args_add)
col_message("Added files to staging area of 'Git' repository.", verbose = verbose)
}, error = function(e){
col_message("Could not add files to staging area of 'Git' repository.", success = FALSE)
message(e)
})
)
invisible(
tryCatch({
do.call(git_commit, Args_commit)
col_message("Committed staged files to 'Git' repository.", verbose = verbose)
}, error = function(e){
col_message("Could not commit staged files to 'Git' repository.", success = FALSE)
message(e)
})
)
invisible(
tryCatch({
do.call(git_push, Args_push)
col_message("Pushed local commits to remote repository.", verbose = verbose)
}, error = function(e){
col_message("Could not push local commits to remote repository.", success = FALSE)
message(e)
})
)
}
parse_repo <- function(remote_repo, verbose = TRUE){
valid_repo <- grepl("^git@.+?\\..+?:.+?/.+?(\\.git)?$", remote_repo) | grepl("^https://.+?\\..+?/.+?/.+?(\\.git)?$", remote_repo)
if(!valid_repo){
col_message("Not a valid 'Git' remote repository address: ", remote_repo, success = FALSE, verbose = verbose)
return(NULL)
}
repo_url <- gsub("(^.+?@)(.*)$", "\\2", remote_repo)
repo_url <- gsub("(\\..+?):", "\\1/", repo_url)
repo_url <- gsub("\\.git$", "", repo_url)
gsub("^(https://)?", "https://", repo_url)
}
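The substitutions in `parse_repo()` convert both SSH and HTTPS remote addresses to a canonical HTTPS URL. For example (hypothetical repository names):

```r
# SSH form: strip 'git@', replace the ':' after the host with '/',
# drop the '.git' suffix, and prepend 'https://'.
parse_repo("git@github.com:user/repo.git")
# HTTPS form: only the '.git' suffix needs stripping.
parse_repo("https://github.com/user/repo.git")
# Both return "https://github.com/user/repo"
```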
|
/scratch/gouwar.j/cran-all/cranData/worcs/R/github.R
|
make_labels <- function(data, variables = names(data)[sapply(data, inherits, what = "factor")], filename = "value_labels.yml"){
df <- data[variables]
out <- lapply(df, function(i){
whatclass <- class(i)[1]
res <- levels(i)
names(res) <- 1:length(levels(i))
c(list(class = whatclass), as.list(res))
})
yaml::write_yaml(out, file = filename)
}
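For a data frame with a single factor `x` with levels `"a"` and `"b"`, `make_labels()` writes YAML of roughly this shape (a sketch of the structure, not verbatim output):

```yaml
x:
  class: factor
  '1': a
  '2': b
```

The `class` entry lets `data_label()` restore the variable as `factor` or `ordered`, and the numbered entries map the integer codes produced by `data_unlabel()` back to their labels.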
# read_labels <- function(filename = "value_labels.yml"){
# labs <- yaml::read_yaml(filename)
# class(labs) <- c("value_labels", class(labs))
# labs
# }
#' @title Drop value labels
#' @description Coerces `factor` and `ordered` variables to class `integer`.
#' @param x A `data.frame`.
#' @param variables Column names of `x` to coerce to integer.
#' @return A `data.frame`.
#' @examples
#' \dontrun{
#' if(interactive()){
#' df <- data.frame(x = factor(c("a", "b")))
#' data_unlabel(df)
#' }
#' }
#' @rdname data_unlabel
#' @export
data_unlabel <- function(x, variables = names(x)[sapply(x, inherits, what = "factor")]){
if(length(variables) > 0){
x[variables] <- lapply(x[variables], as.integer)
}
x
}
#' @title Label factor variables using metadata
#' @description For each column of `x`, this function checks whether value
#' labels exist in `value_labels`. If so, integer values are replaced with these
#' value labels.
#' @param x A `data.frame`.
#' @param variables Column names of `x` to replace, Default: `names(x)`
#' @param value_labels A list with value labels, typically read from metadata
#' generated by \code{\link{open_data}} or \code{\link{closed_data}}.
#' Default: `read_yaml(paste0("value_labels_", substitute(x), ".yml"))`
#' @return A `data.frame`.
#' @examples
#' \dontrun{
#' if(interactive()){
#' labs <- list(x = list(class = "factor", `1` = "a", `2` = "b"))
#' df <- data.frame(x = 1:2)
#' data_label(df, value_labels = labs)
#' }
#' }
#' @rdname data_label
#' @export
data_label <- function(x, variables = names(x), value_labels = read_yaml(paste0("value_labels_", substitute(x), ".yml"))){
out <- x
for(nam in variables){
if(!nam %in% names(value_labels)){
next
}
if(inherits(x[[nam]], what = value_labels[[nam]][["class"]])){
next
}
switch(value_labels[[nam]][["class"]],
"factor" = {
out[[nam]] <- factor(x[[nam]], levels = names(value_labels[[nam]])[-1], labels = unlist(value_labels[[nam]][-1]))
},
"ordered" = {
out[[nam]] <- ordered(x[[nam]], levels = names(value_labels[[nam]])[-1], labels = unlist(value_labels[[nam]][-1]))
})
}
out
}
|
/scratch/gouwar.j/cran-all/cranData/worcs/R/label_data.R
|
#' @title Load project entry points
#' @description Loads the designated project entry point into the default
#' editor, using \code{\link[utils]{file.edit}}.
#' @param worcs_directory Character, indicating the WORCS project directory to
#' which to save data. The default value \code{"."} points to the current
#' directory.
#' @param verbose Logical. Whether or not to print status messages to
#' the console. Default: TRUE
#' @param ... Additional arguments passed to \code{\link[utils]{file.edit}}.
#' @return No return value. This function is called for its side effects.
#' @examples
#' \dontrun{
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "entrypoint")
#' dir.create(test_dir)
#' setwd(test_dir)
#' # Prepare worcs file and dummy entry point
#' worcs:::write_worcsfile(".worcs", entry_point = "test.txt")
#' writeLines("Hello world", con = "test.txt")
#' # Demonstrate load_entrypoint()
#' load_entrypoint()
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' }
#' @rdname load_entrypoint
#' @importFrom utils file.edit
#' @export
load_entrypoint <- function(worcs_directory = ".", verbose = TRUE, ...){
cl <- as.list(match.call()[-1])
# Filenames housekeeping
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
fn_worcs <- file.path(dn_worcs, ".worcs")
if(file.exists(fn_worcs)){
worcsfile <- read_yaml(fn_worcs)
col_message("Loading .worcs file.", verbose = verbose)
} else {
stop("No .worcs file found.")
}
if(!is.null(worcsfile[["entry_point"]])){
for(thisfile in worcsfile[["entry_point"]]){
tryCatch({
thepath <- file.path(dn_worcs, thisfile)
cl <- cl[names(cl) %in% c("title", "editor", "fileEncoding")]
cl <- c(thepath, cl)
do.call(file.edit, cl)
col_message("Loading entry point '", thisfile, "'.", verbose = verbose)
}, error = function(e){
col_message("Could not load entry point '", thisfile, "'.", verbose = verbose, success = FALSE)
})
}
} else {
stop("No entry_point found in .worcs file.")
}
}
|
/scratch/gouwar.j/cran-all/cranData/worcs/R/load_entrypoint.R
|
authors_from_csv <- function(filename, format = "papaja", what = "aut"){
df <- read.csv(filename, stringsAsFactors = FALSE)
if(!is.null(df[["order"]])) df <- df[order(df$order), -which(names(df) == "order")]
aff <- df[, grep("^X(\\.\\d{1,})?$", names(df))]
unique_aff <- as.vector(t(as.matrix(aff)))
unique_aff <- unique_aff[!unique_aff == ""]
unique_aff <- unique_aff[!duplicated(unique_aff)]
df$affiliation <- apply(aff, 1, function(x){
paste(which(unique_aff %in% unlist(x)), collapse = ",")
})
aff <- do.call(c, lapply(1:length(unique_aff), function(x){
c(paste0(" - id: \"", x, "\""), paste0(" institution: \"", unique_aff[x], "\""))
}))
df$name <- paste(df[[1]], df[[2]])
aut <- df[, c("name", "affiliation", "corresponding", "address", "email")]
names(aut) <- paste0(" ", names(aut))
names(aut)[1] <- " - name"
aut <- do.call(c, lapply(1:nrow(aut), function(x){
tmp <- aut[x, ]
tmp <- tmp[, !tmp == "", drop = FALSE]
out <- paste0(names(tmp), ': "', tmp, '"')
gsub('\\"yes\\"', "yes", out)
}))
if(what == "aff") return(paste0(aff, collapse = "\n")) #cat("\n", aff, sep = "\n")
if(what == "aut") return(paste0(aut, collapse = "\n")) #cat(aut, "", sep = "\n")
}
write_as_utf <- function(x, con, append = FALSE, encoding = "UTF-8", ...) {
if(append){
if(file.exists(con)){
old_contents <- readLines(con, encoding = encoding)
x <- c(old_contents, x)
}
}
opts <- options(encoding = 'native.enc')
on.exit(options(opts), add = TRUE)
writeLines(enc2utf8(x), con, ..., useBytes = TRUE)
}
read_as_utf <- function(..., encoding = "UTF-8"){
Args <- list(...)
if(is.null(Args[["encoding"]])) Args[["encoding"]] <- encoding
do.call(readLines, Args)
}
cran_version <- function(x = packageVersion("worcs")){
tryCatch(
gsub("(\\d+\\.\\d+\\.\\d+).*", "\\1", as.character(x), perl = TRUE),
error = function(e){NA}
)
}
all_args <- function(orig_values = FALSE, ...) {
# Perhaps ... must be removed altogether, see https://github.com/HenrikBengtsson/future/issues/13
# get formals for parent function
parent_formals <- formals(sys.function(sys.parent(n = 1)))
# Get names of implied arguments
fnames <- names(parent_formals)
# Remove '...' from list of parameter names if it exists
fnames <- setdiff(fnames, "...")
# Get currently set values for named variables in the parent frame
args <- evalq(as.list(environment()), envir = parent.frame())
# Get the list of variables defined in '...'
# CJ: This needs to be fixed to work with nested function calls
args <- c(args[fnames], evalq(list(...), envir = parent.frame()))
if(orig_values) {
# get default values
defargs <- as.list(parent_formals)
defargs <- defargs[unlist(lapply(defargs, FUN = function(x) class(x) != "name"))]
args[names(defargs)] <- defargs
setargs <- evalq(as.list(match.call())[-1], envir = parent.frame())
args[names(setargs)] <- setargs
}
return(args)
}
|
/scratch/gouwar.j/cran-all/cranData/worcs/R/misc.R
|
#' @title Add Recipe to Generate Endpoints
#' @description Add a recipe to a WORCS project file to generate its endpoints.
#' @param worcs_directory Character, indicating the WORCS project directory.
#' The default value "." points to the current directory.
#' Default: '.'
#' @param recipe Character string, indicating the function call to evaluate in
#' order to reproduce the endpoints of the WORCS project.
#' @param terminal Logical, indicating whether or not to evaluate the `recipe`
#' in the terminal (`TRUE`) or in R (`FALSE`). Defaults to `FALSE`
#' @param verbose Logical. Whether or not to print status messages to the
#' console. Default: `TRUE`
#' @param ... Additional arguments.
#' @return No return value. This function is called for its side effects.
#' @examples
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "add_recipe")
#' dir.create(test_dir)
#' setwd(test_dir)
#' file.create(".worcs")
#' writeLines("test", "test.txt")
#' add_recipe()
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @rdname add_recipe
#' @seealso
#' \code{\link[worcs]{add_endpoint}}
#' \code{\link[worcs]{snapshot_endpoints}}
#' \code{\link[worcs]{check_endpoints}}
#' @export
add_recipe <- function(worcs_directory = ".", recipe = "rmarkdown::render('manuscript/manuscript.Rmd')", terminal = FALSE, verbose = TRUE, ...){
checkworcs(worcs_directory, iserror = TRUE)
fn_worcs <- path_abs_worcs(".worcs", worcs_directory)
if(!file.exists(fn_worcs)){
stop(".worcs file not found.")
}
worcs_file <- read_yaml(fn_worcs)
# if(is.null(worcs_file[["entry_point"]])){
# col_message("This WORCS project does not contain an entry point. Make sure that it .", success = FALSE, verbose = verbose)
# }
# Prepare for writing to worcs file
to_worcs <- list(
filename = fn_worcs,
modify = TRUE
)
# Synthetic data
col_message("Adding recipe to '.worcs'.", verbose = verbose)
to_worcs$recipe <- list(recipe = recipe,
terminal = terminal)
do.call(write_worcsfile, to_worcs)
}
#' @title Reproduce WORCS Project
#' @description Evaluate the recipe contained in a WORCS project to derive its
#' endpoints.
#' @param worcs_directory Character, indicating the WORCS project directory.
#' The default value "." points to the current directory.
#' Default: '.'
#' @param verbose Logical. Whether or not to print status messages to the
#' console. Default: `TRUE`
#' @param check_endpoints Logical. Whether or not to call `check_endpoints()`
#' after reproducing the recipe. Default: `TRUE`
#' @param ... Additional arguments.
#' @return No return value. This function is called for its side effects.
#' @examples
#' # Create directory to run the example
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "reproduce")
#' dir.create(test_dir)
#' setwd(test_dir)
#' file.create(".worcs")
#' worcs:::add_recipe(recipe = 'writeLines("test", "test.txt")')
#' # Cleaning example directory
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @rdname reproduce
#' @seealso
#' \code{\link[worcs]{add_endpoint}}
#' \code{\link[worcs]{snapshot_endpoints}}
#' \code{\link[worcs]{check_endpoints}}
#' @export
reproduce <- function(worcs_directory = ".", verbose = TRUE, check_endpoints = TRUE, ...){
checkworcs(worcs_directory, iserror = TRUE)
fn_worcs <- path_abs_worcs(".worcs", worcs_directory)
if(!file.exists(fn_worcs)){
stop(".worcs file not found.")
}
worcs_file <- read_yaml(fn_worcs)
if(is.null(worcs_file[["recipe"]])){
# Check if it's an old worcs version that does have an entry point
if(!is.null(worcs_file[["entry_point"]])){
col_message("No recipe found in WORCS project. Attempting to deduce recipe from entry_point.", verbose = verbose, success = FALSE)
# Use anchored matches, so that '.Rmd' entry points are not also treated as '.R'
if(grepl("\\.rmd$", tolower(worcs_file[["entry_point"]]))){
worcs_file[["recipe"]] <- list(recipe = paste0("rmarkdown::render('", worcs_file[["entry_point"]],"')"), terminal = FALSE)
} else if(grepl("\\.r$", tolower(worcs_file[["entry_point"]]))){
worcs_file[["recipe"]] <- list(recipe = paste0("source('", worcs_file[["entry_point"]], "')"), terminal = FALSE)
}
} else {
stop("No recipe or entry_point found in '.worcs' file.")
}
}
out <- if(isTRUE(worcs_file[["recipe"]][["terminal"]])){
try(do.call(system, list(command = worcs_file[["recipe"]][["recipe"]])))
} else {
try(eval.parent(parse(text = worcs_file[["recipe"]][["recipe"]])))
}
if(inherits(out, "try-error")){
if(interactive()){
col_message("Attempt to run recipe to reproduce this WORCS project failed.", verbose = verbose, success = FALSE)
return()
} else {
stop("Attempt to run recipe to reproduce this WORCS project failed.")
}
}
if(check_endpoints){
check_endpoints(worcs_directory = dirname(fn_worcs), verbose = verbose)
}
}
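The legacy fallback above can be sketched in isolation. This is a standalone illustration, not package code; the value of `entry_point` is hypothetical:

```r
# Sketch: how a legacy 'entry_point' is translated into a recipe string.
# Note the order of the checks: ".rmd" must be tested before ".r", because
# any ".rmd" file name also contains ".r".
entry_point <- "manuscript.Rmd"
recipe <- if (grepl(".rmd", tolower(entry_point), fixed = TRUE)) {
  paste0("rmarkdown::render('", entry_point, "')")
} else {
  paste0("source('", entry_point, "')")
}
recipe # "rmarkdown::render('manuscript.Rmd')"
```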
# End of R/recipes.R ----------------------------------------------------------
#' Report formatted number
#'
#' Report a number, rounded to a specific number of decimals (defaults to two),
#' using \code{\link{formatC}}. Intended for 'R Markdown' reports.
#' @param x Numeric. Value to be reported
#' @param digits Integer. Number of digits to round to.
#' @param equals Logical. Whether to prepend an equals sign (or a smaller-than
#' sign, when the value rounds to zero at the requested number of digits).
#' @return An atomic character vector.
#' @author Caspar J. van Lissa
#' @keywords internal
#' @export
#' @examples
#' report(.0234)
report <- function(x, digits = 2, equals = TRUE){
equal_sign <- "= "
if(x%%1==0){
outstring <- format_with_na(x, digits = 0, format = "f")
} else {
if(abs(x) < 10^-digits){
equal_sign <- "< "
outstring <- 10^-digits
} else {
outstring <- format_with_na(x, digits = digits, format = "f")
}
}
ifelse(equals, paste0(equal_sign, outstring), outstring)
}
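A few illustrative calls, assuming the package is loaded; the outputs follow directly from the rounding logic above:

```r
# Illustrative behavior of report():
report(.0234)                 # "= 0.02"
report(.0004)                 # "< 0.01" (rounds to zero at two digits)
report(3)                     # "= 3"   (whole numbers get no decimals)
report(.0234, equals = FALSE) # "0.02"
```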
format_with_na <- function(x, ...){
cl <- match.call()
missings <- is.na(x)
out <- rep(NA, length(x))
cl$x <- na.omit(x)
cl[[1L]] <- quote(formatC)
out[!missings] <- eval.parent(cl)
out
}
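The helper above formats only the non-missing values and leaves `NA` in place, so positions are preserved. For instance:

```r
# format_with_na() keeps NA positions while formatting the rest:
format_with_na(c(1.234, NA, 5.678), digits = 2, format = "f")
# c("1.23", NA, "5.68")
```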
# End of R/report.R -----------------------------------------------------------
#' @title Use open data in WORCS project
#' @description This function saves a data.frame as a \code{.csv} file (using
#' \code{\link[utils:write.table]{write.csv}}), stores a checksum in '.worcs',
#' and amends the \code{.gitignore} file to exclude \code{filename}.
#' @param data A data.frame to save.
#' @param filename Character, naming the file data should be written to. By
#' default, constructs a filename from the name of the object passed to
#' \code{data}.
#' @param codebook Character, naming the file the codebook should be written to.
#' An 'R Markdown' codebook will be created and rendered to
#' \code{\link[rmarkdown]{github_document}} ('markdown' for 'GitHub').
#' By default, constructs a filename from the name of the object passed to
#' \code{data}, adding the word 'codebook'.
#' Set this argument to \code{NULL} to avoid creating a codebook.
#' @param value_labels Character, naming the file the value labels of factors
#' and ordinal variables should be written to.
#' By default, constructs a filename from the name of the object passed to
#' \code{data}, adding the word 'value_labels'.
#' Set this argument to \code{NULL} to avoid creating a file with value labels.
#' @param worcs_directory Character, indicating the WORCS project directory to
#' which to save data. The default value \code{"."} points to the current
#' directory.
#' @param save_expression An R-expression used to save the \code{data}.
#' Defaults to \code{write.csv(x = data, file = filename, row.names = FALSE)},
#' which writes a comma-separated, spreadsheet-style file.
#' The arguments \code{data} and \code{filename} are passed from
#' \code{open_data()} to the expression defined in \code{save_expression}.
#' @param load_expression An R-expression used to load the \code{data} from the
#' file created by \code{save_expression}. Defaults to
#' \code{read.csv(file = filename, stringsAsFactors = TRUE)}. This expression
#' is stored in the project's \code{.worcs} file, and invoked by
#' \code{load_data()}.
#' @param ... Additional arguments passed to and from functions.
#' @return Returns \code{NULL} invisibly. This
#' function is called for its side effects.
#' @examples
#' test_dir <- file.path(tempdir(), "data")
#' old_wd <- getwd()
#' dir.create(test_dir)
#' setwd(test_dir)
#' worcs:::write_worcsfile(".worcs")
#' df <- iris[1:5, ]
#' open_data(df, codebook = NULL)
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @seealso open_data closed_data save_data
#' @export
#' @rdname open_data
open_data <- function(data,
filename = paste0(deparse(substitute(data)), ".csv"),
codebook = paste0("codebook_", deparse(substitute(data)), ".Rmd"),
value_labels = paste0("value_labels_", deparse(substitute(data)), ".yml"),
worcs_directory = ".",
save_expression = write.csv(x = data, file = filename, row.names = FALSE),
load_expression = read.csv(file = filename, stringsAsFactors = TRUE),
...){
Args <- match.call()
Args$open <- TRUE
Args$save_expression <- substitute(save_expression)
Args$load_expression <- substitute(load_expression)
Args[[1L]] <- str2lang("worcs:::save_data")
eval(Args, parent.frame())
}
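Because `save_expression` and `load_expression` are stored in the `.worcs` file and re-used by `load_data()`, formats other than `.csv` can be supported by overriding both expressions together. A hedged sketch (assumes an initialized worcs project; `df.rds` is a hypothetical filename):

```r
# Sketch: storing open data as .rds instead of the default .csv.
# Run inside an initialized worcs project; both expressions must agree
# on the file format, since load_data() later invokes load_expression.
# open_data(df,
#           filename = "df.rds",
#           save_expression = saveRDS(data, file = filename),
#           load_expression = readRDS(file = filename))
```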
#' @title Use closed data in WORCS project
#' @description This function saves a data.frame as a \code{.csv} file (using
#' \code{\link[utils:write.table]{write.csv}}), stores a checksum in '.worcs',
#' appends the \code{.gitignore} file to exclude \code{filename}, and saves a
#' synthetic copy of \code{data} for public use. To generate these synthetic
#' data, the function \code{\link{synthetic}} is used.
#' @inheritParams open_data
#' @param synthetic Logical, indicating whether or not to create a synthetic
#' dataset using the \code{\link{synthetic}} function. Additional arguments for
#' the call to \code{\link{synthetic}} can be passed through \code{...}.
#' @return Returns \code{NULL} invisibly. This
#' function is called for its side effects.
#' @examples
#' old_wd <- getwd()
#' test_dir <- file.path(tempdir(), "data")
#' dir.create(test_dir)
#' setwd(test_dir)
#' worcs:::write_worcsfile(".worcs")
#' df <- iris[1:3, ]
#' closed_data(df, codebook = NULL)
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @seealso open_data closed_data save_data
#' @export
#' @rdname closed_data
closed_data <- function(data,
filename = paste0(deparse(substitute(data)), ".csv"),
codebook = paste0("codebook_", deparse(substitute(data)), ".Rmd"),
value_labels = paste0("value_labels_", deparse(substitute(data)), ".yml"),
worcs_directory = ".",
synthetic = TRUE,
save_expression = write.csv(x = data, file = filename, row.names = FALSE),
load_expression = read.csv(file = filename, stringsAsFactors = TRUE),
...){
Args <- match.call()
Args$open <- FALSE
Args$save_expression <- substitute(save_expression)
Args$load_expression <- substitute(load_expression)
Args[[1L]] <- str2lang("worcs:::save_data")
Args[["worcs_directory"]] <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
eval(Args, parent.frame())
}
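Additional arguments in `...` are forwarded to `synthetic()`, so the synthesis algorithm can be swapped out at the call site. A hedged sketch (assumes an initialized worcs project; bootstrapping is much faster than the default random forest, but preserves only univariate distributions):

```r
# Sketch: closed_data() forwarding synthesis arguments through '...'.
# Run inside an initialized worcs project.
# closed_data(df,
#             model_expression = NULL,
#             predict_expression = sample(y, size = length(y), replace = TRUE))
```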
#' @importFrom digest digest
#' @importFrom utils write.csv
save_data <- function(data,
filename = paste0(deparse(substitute(data)), ".csv"),
open,
codebook = paste0("codebook_", deparse(substitute(data)), ".Rmd"),
value_labels = paste0("value_labels_", deparse(substitute(data)), ".yml"),
worcs_directory = ".",
verbose = TRUE,
synthetic = TRUE,
save_expression = write.csv(x = data, file = filename, row.names = FALSE),
load_expression = read.csv(file = filename, stringsAsFactors = TRUE),
...){
# Check data names
namz <- names(data)
if(any(duplicated(namz))){
col_message(paste0("Object 'data' contains duplicated names, which were replaced: ", paste0(namz[duplicated(namz)], collapse = ", ")), success = FALSE)
names(data) <- make.unique(namz)
}
# Find .worcs file
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
if(grepl("[", filename, fixed = TRUE) | grepl("$", filename, fixed = TRUE)){
stop("This filename is not allowed: ", filename, ". Please specify a legal filename.", call. = FALSE)
}
cl <- as.list(match.call()[-1])
create_codebook <- !is.null(codebook)
create_labels <- !is.null(value_labels)
# Filenames housekeeping --------------------------------------------------
if(create_codebook){
if(grepl("[", codebook, fixed = TRUE) | grepl("$", codebook, fixed = TRUE)){
stop("This codebook filename is not allowed: ", codebook, ". Please specify a legal filename.", call. = FALSE)
}
fn_write_codebook <- path_abs_worcs(codebook, dn_worcs)
}
if(create_labels){
if(grepl("[", value_labels, fixed = TRUE) | grepl("$", value_labels, fixed = TRUE)){
stop("This filename is not allowed: ", value_labels, ". Please specify a legal filename.", call. = FALSE)
}
fn_write_labels <- path_abs_worcs(value_labels, dn_worcs)
}
# Filenames housekeeping
fn_worcs <- path_abs_worcs(".worcs", dn_worcs)
fn_gitig <- path_abs_worcs(".gitignore", dn_worcs)
fn_original <- basename(filename)
dn_original <- dirname(filename)
fn_synthetic <- paste0("synthetic_", fn_original)
if(!dn_original == "."){
fn_synthetic <- file.path(dn_original, fn_synthetic)
}
fn_write_original <- path_abs_worcs(filename, dn_worcs)
fn_write_synth <- path_abs_worcs(fn_synthetic, dn_worcs)
# End filenames
# Remove this when worcs can handle different types:
# if(!inherits(data, c("data.frame", "matrix"))){
# stop("Argument 'data' must be a data.frame, matrix, or inherit from these classes.")
# }
# End remove
# Insert three checks:
# 1) write_func works with data object
# 2) read_func works with data object
# 3) result of read_func is identical to data object
# Store data --------------------------------------------------------------
col_message("Storing original data in '", filename, "' and updating the checksum in '.worcs'.", verbose = verbose)
filename <- fn_write_original
eval(substitute(save_expression))
# Prepare for writing to worcs file
to_worcs <- list(
filename = fn_worcs,
modify = TRUE
)
filename <- path_rel_worcs(filename, dn_worcs)
to_worcs$data[[filename]] <- vector(mode = "list")
to_worcs$data[[filename]][["save_expression"]] <- deparse(substitute(save_expression))
to_worcs$data[[filename]][["load_expression"]] <- deparse(substitute(load_expression))
do.call(write_worcsfile, to_worcs)
store_checksum(fn_write_original, entry_name = filename, worcsfile = fn_worcs)
if(open){
write_gitig(fn_gitig, paste0("!", filename))
} else {
write_gitig(fn_gitig, filename)
# Update readme file with message about closed data
fn_readme <- path_abs_worcs("README.md", dn_worcs)
if(file.exists(fn_readme)){
lnz <- readLines(fn_readme)
if(!any(grepl("not publically available", lnz, fixed = TRUE))){
update_textfile(fn_readme,
"\n\n## Access to data\n\nSome of the data used in this project are not publically available.\nTo request access to the original data, [open a GitHub issue](https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/creating-an-issue).\n\n<!--Clarify here how users should contact you to gain access to the data, or to submit syntax for evaluation on the original data.-->",
verbose = verbose)
}
}
if(synthetic){
# Synthetic data
col_message("Generating synthetic data for public use. Ensure that no identifying information is included.", verbose = verbose)
Args <- match.call()
Args <- Args[c(1, which(names(Args) %in% names(formals("synthetic"))))]
Args$verbose <- verbose
Args[[1L]] <- quote(worcs::synthetic)
synth <- eval.parent(Args)
add_synthetic(data = synth,
synthetic_name = fn_synthetic,
original_name = filename,
worcs_directory = dn_worcs,
verbose = verbose)
}
}
col_message("Updating '.gitignore'.", verbose = verbose)
# codebook ----------------------------------------------------------------
if(create_codebook){
col_message("Creating a codebook in '", codebook, "'.", success = TRUE, verbose = verbose)
Args_cb <- match.call()
Args_cb[[1L]] <- str2lang("make_codebook")
Args_cb <- Args_cb[c(1L, match("data", names(Args_cb)))]
Args_cb$filename <- fn_write_codebook #path_rel_worcs(fn_write_codebook, dn_worcs)
#Args_cb$worcs_directory <- dn_worcs
cb_out <- capture.output(eval.parent(Args_cb))
# Add to gitignore
write_gitig(filename = fn_gitig, paste0("!", gsub(".md$", "csv", path_rel_worcs(fn_write_codebook))))
# Add to worcs
to_worcs <- list(filename = fn_worcs,
"data" = list(list("codebook" = path_rel_worcs(fn_write_codebook))),
modify = TRUE)
names(to_worcs[["data"]])[1] <- filename
do.call(write_worcsfile, to_worcs)
}
# Value labels ------------------------------------------------------------
has_factors <- any(sapply(data, inherits, what = "factor"))
if(create_labels & has_factors){
col_message("Storing value labels in '", path_rel_worcs(fn_write_labels, dn_worcs), "'.", success = TRUE, verbose = verbose)
make_labels(data = data,
variables = names(data)[sapply(data, inherits, what = "factor")],
fn_write_labels
)
# Add to worcs
to_worcs <- list(filename = fn_worcs,
data = list(list(labels = value_labels)), modify = TRUE)
names(to_worcs[["data"]])[1] <- filename
do.call(write_worcsfile, to_worcs)
}
invisible(NULL)
}
#' @title Load WORCS project data
#' @description Scans the WORCS project file for data that have been saved using
#' \code{\link{open_data}} or \code{\link{closed_data}}, and loads these data
#' into the global (working) environment. The function will load the original
#' data if available on the current system. If only a synthetic dataset is
#' available, this function loads the synthetic data.
#' The name of the object containing the data is derived from the file name by
#' removing the file extension, and, when applicable, the prefix
#' \code{"synthetic_"}. Thus, both \code{"data.csv"} and
#' \code{"synthetic_data.csv"} will be loaded into an object called \code{data}.
#' @param worcs_directory Character, indicating the WORCS project directory from
#' which to load data. The default value \code{"."} points to the current
#' directory.
#' @param to_envir Logical, indicating whether to load objects directly into
#' the environment, or return a \code{\link{list}} containing the objects. The
#' environment is designated by argument \code{envir}. Loading
#' objects directly into the global environment is user-friendly, but has the
#' risk of overwriting an existing object with the same name, as explained in
#' \code{\link{load}}. The function \code{load_data} gives a warning when this
#' happens.
#' @param envir The environment where the data should be loaded. The default
#' value \code{parent.frame(1)} refers to the global environment in an
#' interactive session.
#' @param verbose Logical. Whether or not to print status messages to
#' the console. Default: TRUE
#' @param use_metadata Logical. Whether or not to use the codebook and
#' value labels and attempt to coerce the class and values of variables to
#' those recorded therein. Default: TRUE
#' @return Returns a list invisibly. If \code{to_envir = FALSE}, this list
#' contains the loaded data files. If \code{to_envir = TRUE}, the list is
#' empty, and the loaded data files are assigned directly to the environment
#' designated by argument \code{envir}.
#' @examples
#' test_dir <- file.path(tempdir(), "loaddata")
#' old_wd <- getwd()
#' dir.create(test_dir)
#' setwd(test_dir)
#' worcs:::write_worcsfile(".worcs")
#' df <- iris[1:5, ]
#' suppressWarnings(closed_data(df, codebook = NULL))
#' load_data()
#' data
#' rm("data")
#' file.remove("data.csv")
#' load_data()
#' data
#' setwd(old_wd)
#' unlink(test_dir, recursive = TRUE)
#' @rdname load_data
#' @export
#' @importFrom digest digest
#' @importFrom utils read.csv
#' @importFrom yaml read_yaml
load_data <- function(worcs_directory = ".", to_envir = TRUE, envir = parent.frame(1),
verbose = TRUE, use_metadata = TRUE){
# When users work from Rmd in a subdirectory, the working directory will be
# set to that subdirectory. Check for .worcs file recursively, and change
# directory if necessary.
# Filenames housekeeping
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
checkworcs(dn_worcs, iserror = TRUE)
fn_worcs <- file.path(dn_worcs, ".worcs")
# End filenames
worcsfile <- read_yaml(fn_worcs)
if(is.null(worcsfile[["data"]])){
stop("No data found in '.worcs'.")
}
data <- worcsfile$data
data_files <- names(data)
names(data_files) <- data_files
fn_data_files <- file.path(dn_worcs, data_files)
data_original <- sapply(fn_data_files, function(i){file.exists(i)})
data_files_synth <- rep(NA, length(data_files))
if(any(!data_original)){
for(i in data_files[!data_original]){
if(is.null(worcsfile$data[[i]][["synthetic"]])){
col_message("Cannot find the original data ", i, ", and there is no synthetic version on record.", success = FALSE, verbose = verbose)
} else {
# Record the synthetic filename at the position of this missing original,
# so that several missing files can each receive their own fallback
data_files_synth[data_files == i] <- worcsfile$data[[i]][["synthetic"]]
}
}
data_files[!data_original] <- data_files_synth[!data_original]
fn_data_files[!data_original] <- file.path(dn_worcs, data_files_synth[!data_original])
}
if(anyNA(data_files)){
col_message("No valid resource found for these files:", paste0("\n * ", names(data_files)[is.na(data_files)]), success = FALSE, verbose = verbose)
}
# Determine which entries to keep before subsetting, so that all three
# vectors are filtered consistently
keep <- !is.na(data_files)
data_files <- data_files[keep]
data_original <- data_original[keep]
fn_data_files <- fn_data_files[keep]
if(length(data_files) == 0) stop("No valid data files found.")
outlist <- vector(mode = "list")
for(file_num in seq_along(data_files)){
fn_this_file <- fn_data_files[file_num]
data_name_this_file <- data_files[file_num]
check_sum(fn_this_file, worcsfile$checksums[[data_name_this_file]], worcsfile = fn_worcs)
col_message("Loading ", c("synthetic", "original")[data_original[file_num]+1], " data from '", data_name_this_file, "'.", verbose = verbose)
object_name <- sub('^(synthetic_)?(.+)\\..*$', '\\2', basename(data_name_this_file))
# Obtain load_expression from the worcsfile
load_expression <- worcsfile$data[[names(data_files)[file_num]]][["load_expression"]]
# If there is no load_expression, this is a legacy worcsfile.
# Use the default load expression of previous worcs versions.
if(is.null(load_expression)){
load_expression <- "read.csv(file = filename, stringsAsFactors = TRUE)"
}
# Create an environment in which to evaluate the load_expression, in which
# filename is an object with value equal to fn_this_file
load_env <- new.env()
assign(x = "filename", value = fn_this_file, envir = load_env)
out <- eval(parse(text = load_expression), envir = load_env)
# Check classes
if(use_metadata){
codebook <- tryCatch({
codebook <- gsub("\\.rmd$", ".csv", worcsfile$data[[names(data_files)[file_num]]][["codebook"]], ignore.case = TRUE)
read.csv(file.path(dn_worcs, codebook), stringsAsFactors = FALSE)
}, error = function(e){ NULL })
value_labels <- tryCatch({
value_labels <- worcsfile$data[[names(data_files)[file_num]]][["labels"]]
yaml::read_yaml(file.path(dn_worcs, value_labels))
}, error = function(e){ NULL })
out <- check_metadata(out, codebook, value_labels)
}
# Update attributes and class of output object
attr(out, "type") <- c("synthetic", "original")[data_original[file_num]+1]
class(out) <- c("worcs_data", class(out))
if(to_envir){
if(object_name %in% objects(envir = envir)) warning("Object '", object_name, "' already exists in the environment designated by 'envir', and will be replaced with the contents of '", data_name_this_file, "'.")
assign(object_name, out, envir = envir)
} else {
outlist[[object_name]] <- out
}
}
return(invisible(outlist))
}
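When loading into the calling environment is too risky (for example, when an object with the same name already exists), the data can be collected in a list instead. A brief sketch, assuming a worcs project with stored data:

```r
# Sketch: collecting loaded data in a list instead of assigning into the
# caller's environment (avoids overwriting existing objects):
# datasets <- load_data(to_envir = FALSE)
# names(datasets)  # object names derived from the stored filenames
```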
# orderedvars <- sapply(out, inherits, what = "ordered")
# if(any(orderedvars)){
# browser()
# value_labels <- worcsfile$data[[names(data_files)[file_num]]][["labels"]]
# value_labels <- yaml::read_yaml(value_labels)
# for(v in names(out)[orderedvars]){
# out[[v]] <-
# }
# }
check_metadata <- function(x, codebook, value_labels){
if(!is.null(codebook)){
classes <- codebook[["type"]]
multiclass <- grepl(",", classes, fixed = TRUE)
if(any(multiclass)){
classes[multiclass] <- gsub(",.*$", "", classes[multiclass])
}
names(classes) <- codebook[["name"]]
} else {
col_message("No valid codebook found.", success = FALSE)
classes <- sapply(x, function(i){class(i)[1]})
names(classes) <- names(x)
}
for(v in names(x)){
if(!v %in% names(classes)){
col_message("Could not restore class of variable '", v, "'.", success = FALSE)
next
}
if(!inherits(x[[v]], classes[v])){
x[[v]] <- switch(classes[v],
ordered = {
tryCatch({
if(is.null(value_labels[[v]])) stop()
ordered(x[[v]], levels = unlist(value_labels[[v]][-1]))
},
error = function(e){
col_message("Could not restore class of variable '", v, "'.", success = FALSE)
x[[v]]
})
},
{
tryCatch({do.call(paste0("as.", classes[v]), list(x[[v]]))},
error = function(e){
message("Could not restore class of variable '", v, "'.")
x[[v]]
})
}
)
}
}
x
}
cs_fun <- function(filename, worcsfile = ".worcs"){
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcsfile))))
# fn_rel <- path_rel_worcs(filename, dn_worcs)
# fn_abs <- path_abs_worcs(fn_rel, dn_worcs = dn_worcs)
tryCatch({
if(is_binary(filename)){
digest::digest(filename, file = TRUE)
} else {
stop()
}
}, error = function(e){
suppressWarnings(digest::digest(paste0(readLines(filename), collapse = ""), serialize = FALSE, file = FALSE))
})
}
#' @importFrom digest digest
# @importFrom tools md5sum
store_checksum <- function(filename, entry_name = filename, worcsfile = ".worcs") {
# Compute checksum on loaded data to ensure conformity
#cs <- digest(object = filename, file = TRUE)
#cs <- tools::md5sum(files = filename)
checkworcs(dirname(worcsfile), iserror = FALSE)
cs <- cs_fun(filename, worcsfile = worcsfile)
checksums <- list(cs)
names(checksums) <- entry_name
do.call(write_worcsfile,
list(filename = worcsfile,
checksums = checksums,
modify = TRUE)
)
}
# checksum_data_as_csv <- function(object){
# filename <- tempfile(fileext = ".csv")
# write.csv(object, filename, row.names = FALSE)
# return(cs_fun(filename))
# }
load_checksum <- function(filename){
if(file.exists(".worcs")){
cs_file <- read_yaml(".worcs")
if(!is.null(cs_file[["checksums"]])){
if(!is.null(cs_file[["checksums"]][[filename]])){
return(cs_file[["checksums"]][[filename]])
}
}
stop("No checksum found for file '", filename, "'.")
} else {
stop("No '.worcs' file found; either this is not a worcs project, or the working directory is not set to the project directory.")
}
}
#' @importFrom digest digest
check_sum <- function(filename, old_cs = NULL, worcsfile = ".worcs", error = FALSE){
cs <- cs_fun(filename, worcsfile = worcsfile)
if(is.null(old_cs)){
old_cs <- load_checksum(filename = filename)
}
if(!cs == old_cs){
if(error){
stop("Checksum for file '", filename, "' did not match the checksum on record (in '.worcs'). This means that the file has changed since the checksum was stored.")
} else {
col_message("Checksum for file '", filename, "' did not match the checksum on record (in '.worcs'). This means that the file has changed since the checksum was stored.", success = FALSE)
}
}
}
#' @importFrom utils head
#' @export
print.worcs_data <- function(x, ...){
if(!is.null(attr(x, "type"))){
if(attr(x, "type") == "synthetic"){
cat("This is a synthetic data set. The first 6 rows are:\n\n")
}
if(attr(x, "type") == "original"){
cat("This is the original data set. The first 6 rows are:\n\n")
}
class(x) <- class(x)[-1]
}
print(head(x))
}
checkworcs <- function(worcs_directory, iserror = FALSE){
if (!file.exists(file.path(worcs_directory, ".worcs"))) {
if(iserror){
stop(
"No '.worcs' file found; either this is not a worcs project, or the working directory is not set to the project directory."
, call. = FALSE)
} else {
col_message(
"No '.worcs' file found; either this is not a worcs project, or the working directory is not set to the project directory. Writing .worcs file now."
, success = FALSE)
file.create(file.path(worcs_directory, ".worcs"))
return(FALSE)
}
}
return(TRUE)
}
check_recursive <- function(path){
tryCatch({ normalizePath(path) },
warning = function(e){
filename <- basename(path)
cur_dir <- dirname(path)
parent_dir <- dirname(dirname(path))
doesnt_exist <- !dir.exists(cur_dir)
if(cur_dir == parent_dir){
stop("No '.worcs' file found in this directory or any of its parent directories; either this is not a worcs project, or the working directory is not set to the project directory.", call. = FALSE)
} else if(doesnt_exist) {
stop("No '.worcs' file found, because the directory '", dirname(path), "' doesn't exist.", call. = FALSE)
}
check_recursive(file.path(parent_dir, filename))
})
}
write_gitig <- function(filename, ..., modify = TRUE){
new_contents <- unlist(list(...))
if(modify & file.exists(filename)){
old_contents <- readLines(filename, encoding = "UTF-8")
rep_these <- sapply(gsub("^!", "", new_contents), match, gsub("^!", "", old_contents))
old_contents[na.omit(rep_these)] <- new_contents[!is.na(rep_these)]
new_contents <- c(old_contents, new_contents[is.na(rep_these)])
}
write(new_contents, filename, append = FALSE)
}
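A self-contained sketch of the merge behavior above (this is an internal, unexported helper, so the direct call is for illustration only): an entry and its negation, such as `data.csv` and `!data.csv`, are treated as the same entry, so existing lines are replaced rather than duplicated, and genuinely new entries are appended.

```r
# Illustrative sketch of write_gitig()'s merge behavior:
tmp <- tempfile()
writeLines(c("data.csv", "*.html"), tmp)
write_gitig(tmp, "!data.csv")  # replaces the existing 'data.csv' entry
readLines(tmp)                 # c("!data.csv", "*.html")
```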
#' @title Notify the user when synthetic data are being used
#' @description This function prints a notification message when some or all of
#' the data used in a project are synthetic (see \code{\link{closed_data}} and
#' \code{\link{synthetic}}). See details for important information.
#' @details The preferred way to use this function is to provide specific data
#' objects in the function call, using the \code{...} argument.
#' If no such objects are provided, \code{notify_synthetic} will scan the
#' parent environment for objects of class \code{worcs_data}.
#'
#' This function is emphatically designed to be included in an 'R Markdown'
#' file, to dynamically generate a notification message when a third party
#' 'Knits' such a document without having access to all original data.
#' @param ... Objects of class \code{worcs_data}. The function will check if
#' these are original or synthetic data.
#' @param msg Expression containing the message to print in case not all
#' \code{worcs_data} are original. This message may refer to \code{is_synth},
#' a logical vector indicating which \code{worcs_data} objects are synthetic.
#' @return No return value. This function is called for its side effect of
#' printing a notification message.
#' @examples
#' df <- iris
#' class(df) <- c("worcs_data", class(df))
#' attr(df, "type") <- "synthetic"
#' notify_synthetic(df, msg = "synthetic")
#' @rdname notify_synthetic
#' @export
#' @seealso closed_data synthetic add_synthetic
notify_synthetic <- function(...,
msg = NULL){
dots <- list(...)
cl <- as.list(match.call()[-1])
if(is.null(cl[["msg"]])){
msg <- quote(c("**Note that", ifelse(all(is_synth), "all", "some"), "of the data files used to generate this document are synthetic. The original data are not available. Synthetic data can be used to evaluate the reproducibility of the analysis code, but the results should not be substantively interpreted, and will likely deviate from the results generated using the original data. Please contact the authors for more information.**"))
}
msg <- substitute(msg)
if(length(dots) > 0){
if(!all(sapply(dots, inherits, what = "worcs_data"))){
stop("Some arguments provided to 'notify_synthetic()' are not objects of class 'worcs_data'.", call. = FALSE)
}
is_synth <- sapply(dots, attr, which = "type") == "synthetic"
} else {
# Scan the caller's environment (not the package namespace) for worcs_data
pf <- parent.frame()
worcs_data <- Filter(function(x) inherits(get(x, envir = pf), "worcs_data"), ls(name = pf))
is_synth <- sapply(worcs_data, function(x){ attr(get(x, envir = pf), which = "type") }) == "synthetic"
}
if(any(is_synth)){
cat(eval(msg))
}
}
#' @importFrom xfun is_abs_path
path_abs_worcs <- function(fn, dn_worcs = NULL, worcs_directory = "."){
if (xfun::is_abs_path(fn)) {
return(fn)
}
if (is.null(dn_worcs)) {
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory),
".worcs")))
}
invisible(checkworcs(dn_worcs, iserror = TRUE))
dirn <- normalizePath(dn_worcs)
return(file.path(dirn, fn))
}
path_rel_worcs <- function(fn, dn_worcs = NULL, worcs_directory = "."){
if (is.null(dn_worcs)) {
dn_worcs <-
dirname(check_recursive(file.path(
normalizePath(worcs_directory), ".worcs"
)))
}
invisible(checkworcs(dn_worcs, iserror = TRUE))
# Normalize both
fn <- normalizePath(fn, winslash = .Platform$file.sep, mustWork = FALSE)
dn_worcs <- normalizePath(dn_worcs, winslash = .Platform$file.sep)
# Check for OS
on_windows <- isTRUE(grepl("mingw", R.Version()$os, fixed = TRUE))
if (on_windows) {
dn_worcs <- tolower(dn_worcs)
fn <- tolower(fn)
}
# Split pathnames into components
dn_worcs <- unlist(strsplit(dn_worcs, split = .Platform$file.sep, fixed = TRUE))
fn <- unlist(strsplit(fn, split = .Platform$file.sep, fixed = TRUE))
if(length(dn_worcs) > length(fn)){
stop("File path must be inside the worcs project directory.", call. = FALSE)
}
if(!all(dn_worcs == fn[seq_along(dn_worcs)])){
stop("File path must be inside the worcs project directory.", call. = FALSE)
}
return(do.call(file.path, as.list(fn[-seq_along(dn_worcs)])))
}
is_binary <- function(x){
out <- tryCatch(system2("file", args = paste0("--mime '", x, "'"), stdout = TRUE), error = function(e){ stop() })
return(isFALSE(grepl(": text/", out, fixed = TRUE)))
}
# End of R/save_load.R --------------------------------------------------------
#' @title Generate synthetic data
#' @description Generates a synthetic version of a \code{data.frame}, with
#' similar characteristics to the original. See Details for the algorithm used.
#' @param data A data.frame of which to make a synthetic version.
#' @param model_expression An R-expression to estimate a model. Defaults to
#' \code{ranger(x = x, y = y)}, which uses the fast implementation of random
#' forests in \code{\link[ranger]{ranger}}. The expression is evaluated in an
#' environment containing objects \code{x} and \code{y}, where \code{x} is a
#' \code{data.frame} with the predictor variables, and \code{y} is a
#' \code{vector} of outcome values (see Details).
#' @param predict_expression An R-expression to generate predicted values based
#' on the model estimated by \code{model_expression}. Defaults to
#' \code{predict(model, data = xsynth)$predictions}. This expression must return
#' a vector of predicted values. The expression is evaluated in an
#' environment containing objects \code{model} and \code{xsynth}, where
#' \code{model} is the model estimated by \code{model_expression}, and
#' \code{xsynth} is the \code{data.frame} of synthetic data used to predict the
#' next column (see Details).
#' @param missingness_expression Optional. An R-expression to impute missing
#' values. Defaults to \code{NULL}, which means listwise deletion is used. The
#' expression is evaluated in an environment containing the object \code{data},
#' as specified in the call to \code{synthetic}. It must return a
#' \code{data.frame} with the same dimensions and column names as the original
#' data. For example, use \code{missingness_expression =
#' missRanger::missRanger(data = data)} for a fast implementation of the
#' excellent 'missForest' single imputation technique.
#' @param verbose Logical, Default: TRUE. Whether to show a progress bar while
#' running the algorithm and provide informative messages.
#' @return A \code{data.frame} with synthetic data, based on \code{data}.
#' @details Based on the work by Nowok, Raab, and Dibben (2016),
#' this function uses a simple algorithm to generate a synthetic
#' dataset with similar characteristics to the original. The algorithm is as
#' follows:
#' \enumerate{
#' \item Let x be the original data.frame, with columns 1:j
#' \item Let xsynth be a synthetic data.frame, with columns 1:j
#' \item Column 1 of xsynth is a bootstrapped version of column 1 of x
#' \item Using \code{model_expression}, a predictive model is built for column
#' c, for c along 2:j, with c predicted from columns 1:(c-1) of the original
#' data.
#' \item Using \code{predict_expression}, columns 1:(c-1) of the synthetic data
#' are used to predict synthetic values for column c.
#' }
#' Variables are thus imputed in order of occurrence in the \code{data.frame}.
#' To impute in a different order, reorder the data.
#'
#' Note that, for data synthesis to work properly, it is essential that the
#' \code{class} of variables is defined correctly. The default algorithm
#' \code{\link[ranger]{ranger}} supports numeric, integer, factor, and logical
#' data. Other types of variables should be converted to one of these types.
#'
#' Users can provide a custom \code{model_expression} and
#' \code{predict_expression} to use a different algorithm when calling
#' \code{synthetic}.
#'
#'
#' As demonstrated in the example, users could call \code{lm} as a
#' \code{model_expression} to use
#' linear regression, which preserves linear marginal relationships but can give
#' rise to values out of range of the original data.
#' Or users could call \code{sample} as a \code{predict_expression} to bootstrap
#' each variable, a very quick solution that maintains univariate distributions
#' but loses all marginal relationships. These examples are not exhaustive, and
#' users can even create custom functions.
#' @examples
#' \dontrun{
#' # Example using the iris dataset and default ranger algorithm
#' iris_syn <- synthetic(iris)
#'
#' # Example using lm as prediction algorithm (only works for numeric variables)
#' # note that, within the model_expression, a new data.frame is created because
#' # lm() requires a separate data argument:
#' dat <- iris[, 1:4]
#' synthetic(dat,
#' model_expression = lm(.outcome ~ .,
#' data = data.frame(.outcome = y,
#' xsynth)),
#' predict_expression = predict(model, newdata = xsynth))
#' }
#' # Example using bootstrapping:
#' synthetic(iris,
#' model_expression = NULL,
#' predict_expression = sample(y, size = length(y), replace = TRUE))
#' \dontrun{
#' # Example with missing data, no imputation
#' iris_missings <- iris
#' for(i in 1:10){
#' iris_missings[sample.int(nrow(iris_missings), 1, replace = TRUE),
#' sample.int(ncol(iris_missings), 1, replace = TRUE)] <- NA
#' }
#' iris_miss_syn <- synthetic(iris_missings)
#'
#' # Example with missing data, imputation by median/mode substitution
#' # First, define a simple function for median/mode substitution:
#' imp_fun <- function(x){
#' if(is.data.frame(x)){
#' return(data.frame(sapply(x, imp_fun)))
#' } else {
#' out <- x
#' if(inherits(x, "numeric")){
#' out[is.na(out)] <- median(x[!is.na(out)])
#' } else {
#' out[is.na(out)] <- names(sort(table(out), decreasing = TRUE))[1]
#' }
#' out
#' }
#' }
#'
#' # Then, call synthetic() with this function as missingness_expression:
#' iris_miss_syn <- synthetic(iris_missings,
#' missingness_expression = imp_fun(data))
#' }
#' @references Nowok, B., Raab, G.M and Dibben, C. (2016).
#' synthpop: Bespoke creation of synthetic data in R. Journal of Statistical
#' Software, 74(11), 1-26. \doi{10.18637/jss.v074.i11}.
#' @rdname synthetic
#' @export
synthetic <- function(data,
model_expression = ranger(x = x, y = y),
predict_expression = predict(model, data = xsynth)$predictions,
missingness_expression = NULL,
verbose = TRUE){
UseMethod("synthetic", data)
}
if(getRversion() >= "2.15.1") utils::globalVariables(c("x", "y", "model", "xsynth"))
#' @method synthetic matrix
#' @export
synthetic.matrix <- function(data,
model_expression = ranger(x = x, y = y),
predict_expression = predict(model, data = xsynth)$predictions,
missingness_expression = NULL,
verbose = TRUE){
cl <- match.call(expand.dots = FALSE)
cl[["data"]] <- data.frame(data)
cl[[1L]] <- quote(synthetic)
eval(cl, parent.frame())
}
#' @importFrom stats na.omit complete.cases predict
#' @importFrom utils txtProgressBar setTxtProgressBar
#' @importFrom ranger ranger
#' @method synthetic data.frame
#' @export
synthetic.data.frame <- function(data,
model_expression = ranger(x = x, y = y),
predict_expression = predict(model, data = xsynth)$predictions,
missingness_expression = NULL,
verbose = TRUE){
# Capture expressions
model_expression <- substitute(model_expression) # model_expression <- quote(ranger(x = x, y = y))
predict_expression <- substitute(predict_expression) # predict_expression <- quote(predict(model, data = xsynth)$predictions)
missingness_expression <- substitute(missingness_expression)
# Check if input is correct
# if(!is.null(model_expression)){
# me <- deparse(model_expression)
# if(!(grepl("\\bx\\b", me) & grepl("\\by\\b", me))){
# #stop("Argument 'model_expression' must use the arguments 'x' and 'y' to refer to the predictor variables and outcome variable, respectively.")
# }
# }
pe <- deparse(predict_expression)
# if(!((grepl("\\bmodel\\b", pe) & grepl("\\bxsynth\\b", pe)) | (is.null(model_expression) & grepl("\\by\\b", pe)))){
# stop("Argument 'predict_expression' must use the arguments 'model' and 'xsynth' to refer to the model generated by 'model_expression' and the synthetic predictor data.frame, respectively.")
# }
# Check if data is in expected format
if(!is.data.frame(data)) data <- data.frame(data)
# Analyze missing values
miss_props <- analyze_missing(data)
if(any(miss_props > 0)){
if(is.null(missingness_expression)){
if(verbose) col_message("Argument 'data' has missing values, but no 'missingness_expression' is specified. Listwise deletion is used.", success = FALSE)
} else {
data <- eval(missingness_expression)
}
}
# Number of obs
nobs <- nrow(data)
# Analyze data types
coltypes <- lapply(data, class)
# Bootstrap first column
x <- na.omit(data[, 1, drop = FALSE])
assign(names(data)[1], x[sample.int(nrow(x), nobs, replace = TRUE), , drop = FALSE])
xsynth <- data.frame(mget(names(data)[1]))
# For all other columns, evaluate model_expression on columns up until
# that one, then predict it based on existing synthetic data and add
# column to synthetic data
if(ncol(data) > 1){
if(verbose){
pb <- txtProgressBar(min = 0, max = ncol(data), style = 3)
setTxtProgressBar(pb, 1)
}
for(this_col in 2:ncol(data)){
# Listwise deletion
complete_cases <- complete.cases(data[1:this_col])
# Prepare x and y
x <- data[1:(this_col-1)][complete_cases, , drop = FALSE]
y <- data[[this_col]][complete_cases]
# Evaluate model
if(!is.null(model_expression)){
model <- eval(model_expression)
}
# Obtain predictions
pred <- eval(predict_expression)
# Add column to xsynth
xsynth[[names(data)[this_col]]] <- pred
if(verbose) setTxtProgressBar(pb, this_col)
}
if(verbose) close(pb)
}
# Check if variable classes are maintained
for(thisvar in 1:ncol(data)){
#thisvar=6
if(!all(class(xsynth[[thisvar]]) == coltypes[[thisvar]])){
msg <- paste0("Synthetic variable '", names(data)[thisvar], "' did not have identical classes to its original counterpart.")
# Try to convert to correct class
convert_func <- paste0("as.", coltypes[[thisvar]])
convert_func <- convert_func[sapply(convert_func, exists)]
if(length(convert_func) > 0) convert_func <- convert_func[1]
newvar <- tryCatch({
do.call(convert_func, list(xsynth[[thisvar]]))
}, error = function(e){NULL})
if(!is.null(newvar)){
col_message(msg, " Attempted to convert to its original type. Check the input types of your variables, and check whether the data are synthesized correctly.", success = TRUE, verbose = verbose)
xsynth[[thisvar]] <- newvar
} else {
col_message(msg, " Failed to convert to its original type. Check the input types of your variables, and check whether the data are synthesized correctly.", success = FALSE)
}
}
}
# Restore missing values
if(any(miss_props > 0)){
xsynth <- insert_missing(xsynth, miss_props)
}
rownames(xsynth) <- NULL
return(xsynth)
}
analyze_missing <- function(x){
miss <- colSums(is.na(x))
miss/nrow(x)
}
#' @importFrom stats rbinom
insert_missing <- function(x, props){
x[1:ncol(x)] <- mapply(function(col, prop){
col[as.logical(rbinom(length(col), 1, prop))] <- NA
col
}, col = x[1:ncol(x)], prop = props)
x
}
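The two helpers above can be exercised with a minimal, self-contained sketch (definitions reproduced from above; the example data are hypothetical):

```r
# Helpers reproduced from above so the sketch runs standalone
analyze_missing <- function(x){
  miss <- colSums(is.na(x))
  miss/nrow(x)
}
insert_missing <- function(x, props){
  x[1:ncol(x)] <- mapply(function(col, prop){
    col[as.logical(rbinom(length(col), 1, prop))] <- NA
    col
  }, col = x[1:ncol(x)], prop = props)
  x
}

dat <- data.frame(a = c(1, NA, 3, 4), b = c("x", "y", NA, NA))
props <- analyze_missing(dat)  # proportion of NAs per column: a 0.25, b 0.50
set.seed(1)
dat_miss <- insert_missing(dat, props)  # NAs re-drawn at those same rates
```

This is the round trip \code{synthetic.data.frame} performs: missingness proportions are recorded before synthesis and re-inserted into the synthetic data afterwards.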
if(getRversion() >= "2.15.1") utils::globalVariables(c("worcs_checklist"))
#' @title Add WORCS badge to README.md
#' @description Evaluates whether a project meets the criteria of the WORCS
#' checklist (see \code{\link{worcs_checklist}}), and adds a badge to the
#' project's \code{README.md}.
#' @param path Character. This can either be the path to a WORCS project folder
#' (a project with a \code{.worcs} file), or the path to a \code{checklist.csv}
#' file. The latter is useful if you want to evaluate a manually updated
#' checklist file. Default: '.' (path to current directory).
#' @param update_readme Character. Path to the \code{README.md} file to add the
#' badge to. Default: 'README.md'. Set to \code{NULL} to avoid updating the
#' \code{README.md} file.
#' @param update_csv Character. Path to the \code{checklist.csv} file in which
#' to store the evaluation results. Default: 'checklist.csv'. Set to
#' \code{NULL} to avoid updating the \code{checklist.csv} file.
#' @return No return value. This function is called for its side effects.
#' @examples
#' example_dir <- file.path(tempdir(), "badge")
#' dir.create(example_dir)
#' write("a", file.path(example_dir, ".worcs"))
#' worcs_badge(path = example_dir,
#' update_readme = NULL)
#' @rdname worcs_badge
#' @export
worcs_badge <- function(path = ".",
update_readme = "README.md",
update_csv = "checklist.csv"){
ndir <- np <- normalizePath(path)
if(endsWith(path, "checklist.csv")){
ndir <- dirname(np)
checks <- read.csv(path, stringsAsFactors = FALSE)
checks$check <- as.logical(checks$check)
checks$pass <- as.logical(checks$pass)
if(anyNA(checks$pass)) stop("All values in the 'pass' column must be either TRUE or FALSE. Check your .csv file for spelling errors.", call. = FALSE)
if(anyNA(checks$check)) stop("All values in the 'check' column must be either TRUE or FALSE. Check your .csv file for spelling errors.", call. = FALSE)
} else {
checks <- do.call(check_worcs, list(path = np))
}
level <- "fail"
if(all(checks$pass)){
level <- "perfect"
} else {
if(any(checks$pass[checks$importance == "essential"])){
level <- c("limited", "open")[all(checks$pass[checks$importance == "essential"])+1]
}
}
if(!is.null(update_readme)){
tryCatch({
if(!is_abs(update_readme)){ # is relative
update_readme <- file.path(ndir, update_readme)
}
text <- readLines(update_readme, encoding = "UTF-8")
loc <- startsWith(text, "[![WORCS](https://img.shields.io/badge")
if(any(loc)){
loc <- which(loc)[1]
text <- text[-loc]
loc <- loc-1
} else {
loc <- which(startsWith(text, "#"))[1]+1
}
text <- append(x = text,
values = switch(level,
perfect = c("", "[![WORCS](https://img.shields.io/badge/WORCS-perfect-blue)](https://osf.io/zcvbs/)", ""),
limited = c("", "[![WORCS](https://img.shields.io/badge/WORCS-limited-orange)](https://osf.io/zcvbs/)", ""),
open = c("", "[![WORCS](https://img.shields.io/badge/WORCS-open%20science-brightgreen)](https://osf.io/zcvbs/)", ""),
c("", "[![WORCS](https://img.shields.io/badge/WORCS-fail-red)](https://osf.io/zcvbs/)", "")),
after = loc
)
write_as_utf(text, update_readme)
}, error = function(e){warning("Could not update README.md")})
}
if(!is.null(update_csv)){
if(!is_abs(update_csv)){ # is relative
update_csv <- file.path(ndir, update_csv)
}
write.csv(checks, update_csv, row.names = FALSE)
write_gitig(file.path(dirname(update_csv), ".gitignore"), paste0("!", basename(update_csv)))
}
}
#' @title Evaluate project with respect to WORCS checklist
#' @description Evaluates whether a project meets the criteria of the WORCS
#' checklist (see \code{\link{worcs_checklist}}).
#' @param path Character. Path to a WORCS project folder (a project with a
#' \code{.worcs} file). Default: '.' (path to current directory).
#' @param verbose Logical. Whether or not to show status messages while
#' evaluating the checklist. Default: \code{TRUE}.
#' @return A \code{data.frame} with a description of the criteria, and a column
#' with evaluations (\code{$pass}). For criteria that must be evaluated
#' manually, \code{$pass} will be \code{FALSE}.
#' @examples
#' example_dir <- file.path(tempdir(), "badge")
#' dir.create(example_dir)
#' write("a", file.path(example_dir, ".worcs"))
#' check_worcs(path = example_dir)
#' @rdname check_worcs
#' @export
#' @importFrom gert git_remote_list
#' @importFrom utils data
check_worcs <- function(path = ".", verbose = TRUE){
if(!file.exists(file.path(path, ".worcs"))){
stop("No WORCS project found in directory '", path, "'")
} else {
worcsfile <- read_yaml(file.path(path, ".worcs"))
checks <- worcs_checklist
checks[sapply(checks, inherits, what = "factor")] <- lapply(checks[sapply(checks, inherits, what = "factor")], as.character)
checks$pass <- FALSE
# Get files in folder. This is potentially inefficient, as the entire renv directory is also indexed.
f <- list.files(path, recursive = TRUE, full.names = TRUE)
f_lc <- tolower(f)
# See what files are tracked by git
tracked <- tryCatch({
git_ls(repo = path)
}, error = function(e){NULL})
# If git tracks any files
checks$pass[checks$name == "git_repo"] <- length(tracked) > 0
# If git has a remote
checks$pass[checks$name == "has_remote"] <- tryCatch({dim(git_remote_list(path))[1] > 0}, error = function(e){FALSE})
# Do checks
checks$pass[checks$name == "readme"] <- any(endsWith(f_lc, "readme.md"))
checks$pass[checks$name == "license"] <- any(endsWith(f_lc, "license")|endsWith(f_lc, "license.md"))
checks$pass[checks$name == "citation"] <- {
rmarkdown_files <- f[endsWith(f_lc, ".rmd")]
any(sapply(rmarkdown_files, function(thisfile){
txt <- paste0(readLines(thisfile, encoding = "UTF-8"), collapse = "")
grepl("@", txt, fixed = TRUE) & grepl("\\.bib", txt)
}))
}
checks$pass[checks$name == "data"] <- tryCatch({
if(!is.null(worcsfile[["data"]]) & length(tracked) > 0){
worcs_data <- names(worcsfile$data)
worcs_data <- c(worcs_data, unlist(sapply(worcsfile$data[sapply(worcsfile$data, function(x){!is.null(x[["synthetic"]])})], `[[`, "synthetic")))
any(tolower(worcs_data) %in% tolower(tracked$path))
}
}, error = function(e){FALSE})
# If checksums are up to date
if(checks$pass[checks$name == "data"]){
checks$pass[checks$name == "data_checksums"] <-
tryCatch({
#cs_now <- sapply(worcs_data, digest, file = TRUE)
cs_now <- sapply(worcs_data, cs_fun, worcsfile = file.path(path, ".worcs"))
names(cs_now) <- worcs_data
cs_stored <- unlist(worcsfile$checksums)
if(all(names(cs_now) %in% names(cs_stored))){
all(sapply(names(cs_now), function(x){cs_now[x] == cs_stored[x]}))
} else {
FALSE
}
}, error = function(e){FALSE})
} else {
checks$pass[checks$name == "data_checksums"] <- FALSE
}
# If project has R-code
checks$pass[checks$name == "code"] <- any(endsWith(f_lc, ".r"))
# If project has preregistration
checks$pass[checks$name == "preregistration"] <- any(endsWith(f_lc, "preregistration.rmd"))
}
if(verbose){
tmp <- apply(checks[worcs_checklist$check, ], 1, function(thisrow){
col_message(thisrow["description"], success = thisrow["pass"])
})
}
return(checks)
}
is_abs <- function(filename){
grepl("^(/|[A-Za-z]:|\\\\|~)", filename)
}
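A few quick checks of this path heuristic (self-contained; the definition is reproduced from above and the paths are hypothetical):

```r
# Reproduced from above: does a path look absolute?
# Matches a leading "/", a Windows drive letter, a backslash, or "~"
is_abs <- function(filename){
  grepl("^(/|[A-Za-z]:|\\\\|~)", filename)
}

is_abs("/home/user/project")  # TRUE: POSIX absolute path
is_abs("C:/Users/me")         # TRUE: Windows drive letter
is_abs("~/project")           # TRUE: home-relative, treated as absolute
is_abs("checklist.csv")       # FALSE: relative path
```

Both `worcs_badge()` callers use this to decide whether `update_readme` and `update_csv` should be resolved relative to the project directory.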
#' WORCS checklist
#'
#' This checklist can be used to see whether a project adheres to the principles
#' of open reproducible code in science, as set out in the WORCS paper.
#'
#' \tabular{lll}{
#' \strong{category} \tab \code{factor} \tab Category of the checklist
#' element.\cr
#' \strong{name} \tab \code{factor} \tab Name of the checklist
#' element.\cr
#' \strong{description} \tab \code{factor} \tab What are the requirements
#' to claim that this checklist element is met?\cr
#' \strong{importance} \tab \code{factor} \tab Whether the checklist
#' element is essential to obtain a green 'open science' badge, or optional.\cr
#' \strong{check} \tab \code{logical} \tab Whether the criterion is checked
#' automatically by \code{\link{worcs_badge}}.
#' }
#' @docType data
#' @keywords datasets
#' @name worcs_checklist
#' @usage data(worcs_checklist)
#' @references Van Lissa, C. J., Brandmaier, A. M., Brinkman, L., Lamprecht, A.,
#' Peikert, A., Struiksma, M. E., & Vreede, B. (2021). WORCS: A workflow for
#' open reproducible code in science. \emph{Data Science}, 4(1), 29-49.
#' \doi{10.3233/DS-210031}.
#'
#' @format A data frame with 15 rows and 5 variables.
NULL
# write_worcsfile(filename = ".worcs",
# worcs_version = "0.1.1",
# creator = Sys.info()["effective_user"],
# checksums = list(ckone = "1334", cktwo = "5y54")
# )
#' @importFrom yaml write_yaml read_yaml
write_worcsfile <- function(filename, ..., modify = FALSE){
new_contents <- list(...)
if(modify & file.exists(filename)){
old_contents <- read_yaml(filename)
new_contents <- mod_nested_list(old_contents, new_contents)
}
write_yaml(new_contents, filename)
}
mod_nested_list <- function(old, new){
if(is.null(old)){
return(new)
}
for(i in 1:length(new)){
if(depth(new[i]) == 1){
if(names(new)[i] %in% names(old)){
old[names(new)[i]] <- new[i]
} else {
old <- c(old, new[i])
}
} else {
old[[names(new)[i]]] <- mod_nested_list(old[[names(new)[i]]], new[[i]])
}
}
old
}
depth <- function(this,thisdepth=0){
if((!is.list(this))|length(this) == 0){
return(thisdepth)
}else{
return(max(unlist(lapply(this,depth,thisdepth=thisdepth+1))))
}
}
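To illustrate how mod_nested_list() merges new entries into an existing nested list, as write_worcsfile(modify = TRUE) does, here is a self-contained sketch (definitions reproduced from above; the .worcs contents are hypothetical):

```r
# Definitions reproduced from above for a standalone sketch
depth <- function(this, thisdepth = 0){
  if((!is.list(this)) | length(this) == 0){
    return(thisdepth)
  } else {
    return(max(unlist(lapply(this, depth, thisdepth = thisdepth + 1))))
  }
}
mod_nested_list <- function(old, new){
  if(is.null(old)) return(new)
  for(i in 1:length(new)){
    if(depth(new[i]) == 1){
      if(names(new)[i] %in% names(old)){
        old[names(new)[i]] <- new[i]
      } else {
        old <- c(old, new[i])
      }
    } else {
      old[[names(new)[i]]] <- mod_nested_list(old[[names(new)[i]]], new[[i]])
    }
  }
  old
}

# Hypothetical .worcs contents: add one checksum, leave other fields alone
old <- list(worcs_version = "0.1.1", checksums = list(ckone = "1334"))
new <- list(checksums = list(cktwo = "5y54"))
merged <- mod_nested_list(old, new)
# merged$checksums now holds both ckone and cktwo; worcs_version is unchanged
```

Deep entries recurse into the matching sub-list, while leaf entries either overwrite an existing name or are appended, so unrelated fields in the YAML file survive a modify-write.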
recommend_data <- c('library("worcs")',
"# We recommend that you prepare your raw data for analysis in 'prepare_data.R',",
"# and end that file with either open_data(yourdata), or closed_data(yourdata).",
"# Then, uncomment the line below to load the original or synthetic data",
"# (whichever is available), to allow anyone to reproduce your code:",
"# load_data()")
#' @title Create new WORCS project
#' @description Creates a new 'worcs' project. This function is invoked by
#' the 'RStudio' project template manager, but can also be called directly to
#' create a WORCS project through syntax or the console.
#' @param path Character, indicating the directory in which to create the
#' 'worcs' project. Default: 'worcs_project'.
#' @param manuscript Character, indicating what template to use for the
#' 'R Markdown' manuscript. Default: 'APA6'. Available choices include
#' \code{APA6} from the \code{papaja} package,
#' a \code{\link[rmarkdown]{github_document}}, and templates included in the
#' \code{\link[rticles:rticles]{rticles}} package.
#' For more information, see \code{\link{add_manuscript}}.
#' @param preregistration Character, indicating what template to use for the
#' preregistration. Default: 'cos_prereg'. Available choices include:
#' \code{"PSS", "Secondary", "None"}, and all templates from the
#' \code{\link[prereg:prereg]{prereg}} package. For more information, see
#' \code{\link{add_preregistration}}.
#' @param add_license Character, indicating what license to include.
#' Default: 'CC_BY_4.0'. Available options include:
#' \code{"CC_BY_4.0", "CC_BY-SA_4.0", "CC_BY-NC_4.0", "CC_BY-NC-SA_4.0",
#' "CC_BY-ND_4.0", "CC_BY-NC-ND_4.0", "None"}. For more information, see
#' <https://creativecommons.org/licenses/>.
#' @param use_renv Logical, indicating whether or not to use 'renv' to make the
#' project reproducible. Default: TRUE. See \code{\link[renv]{init}}.
#' @param remote_repo Character, address of the remote repository for
#' this project. This link should have the form
#' \code{https://github.com/[username]/[repo].git} (preferred) or
#' \code{git@[...].git} (if using SSH).
#' If a valid remote repository link is provided, a commit will
#' be made containing the 'README.md' file, and will be pushed to the remote
#' repository. Default: 'https'.
#' @param verbose Logical. Whether or not to print messages to the console
#' during project creation. Default: TRUE
#' @param ... Additional arguments passed to and from functions.
#' @return No return value. This function is called for its side effects.
#' @examples
#' the_test <- "worcs_template"
#' old_wd <- getwd()
#' dir.create(file.path(tempdir(), the_test))
#' do.call(git_user, worcs:::get_user())
#' worcs_project(file.path(tempdir(), the_test, "worcs_project"),
#' manuscript = "github_document",
#' preregistration = "None",
#' add_license = "None",
#' use_renv = FALSE,
#' remote_repo = "https")
#' setwd(old_wd)
#' unlink(file.path(tempdir(), the_test))
#' @rdname worcs_project
#' @export
#' @importFrom rmarkdown draft
#' @importFrom gert git_init git_remote_add git_add git_commit git_push
#' @importFrom utils installed.packages packageVersion
#' @importFrom prereg vantveer_prereg
#' @importFrom methods formalArgs
# @importFrom renv init
worcs_project <- function(path = "worcs_project", manuscript = "APA6", preregistration = "cos_prereg", add_license = "CC_BY_4.0", use_renv = TRUE, remote_repo = "https", verbose = TRUE, ...) {
cl <- match.call(expand.dots = FALSE)
# collect inputs
manuscript <- tolower(manuscript)
preregistration <- tolower(preregistration)
add_license <- tolower(add_license)
dots <- list(...)
# ensure path exists
dir.create(path, recursive = TRUE, showWarnings = FALSE)
path <- normalizePath(path)
# Check if valid Git signature exists
use_git <- has_git()
if(!use_git){
col_message("Could not find a working installation of 'Git', which is required to safeguard the transparency and reproducibility of your project. Please connect 'Git' by following the steps described in this vignette:\n vignette('setup', package = 'worcs')", success = FALSE)
} else {
col_message("Initializing 'Git' repository.", verbose = verbose)
git_init(path = path)
}
# Create .worcs file
tryCatch({
write_worcsfile(filename = file.path(path, ".worcs"),
worcs_version = as.character(packageVersion("worcs")),
creator = Sys.info()["effective_user"]
)
col_message("Writing '.worcs' file.", verbose = verbose)
}, error = function(e){
col_message("Writing '.worcs' file.", success = FALSE)
})
# copy 'resources' folder to path
tryCatch({
copy_resources(which_files = c(
"README.md",
"prepare_data.R",
"worcs_icon.png"
), path = path)
col_message("Copying standard files.", verbose = verbose)
}, error = function(e){
col_message("Copying standard files.", success = FALSE)
})
# write files
# Begin manuscript
if(!manuscript == "none"){
cl[[1L]] <- quote(worcs::add_manuscript)
names(cl)[which(names(cl) == "path")] <- "worcs_directory"
eval(cl, parent.frame())
add_recipe(worcs_directory = path)
} else {
write_as_utf(recommend_data, file.path(path, "run_me.R"))
write_worcsfile(filename = file.path(path, ".worcs"),
entry_point = "run_me.R",
modify = TRUE)
add_recipe(worcs_directory = path,
recipe = "source('run_me.R')")
}
# End manuscript
# Begin prereg
if(!preregistration == "none"){
cl[[1L]] <- quote(worcs::add_preregistration)
names(cl)[which(names(cl) == "path")] <- "worcs_directory"
eval(cl, parent.frame())
}
# End prereg
# Begin license
if(!add_license == "none"){
tryCatch({
dir.create(path, recursive = TRUE, showWarnings = FALSE)
# copy 'resources' folder to path
license_dir = system.file('rstudio', 'templates', 'project', 'licenses', package = 'worcs', mustWork = TRUE)
license_file <- file.path(license_dir, paste0(add_license, ".txt"))
file.copy(license_file, file.path(path, "LICENSE"), copy.mode = FALSE)
col_message("Writing license file.", verbose = verbose)
}, error = function(e){
col_message("Writing license file.", success = FALSE)
})
}
# End license
# Use renv ----------------------------------------------------------------
if(use_renv){
tryCatch({
init_fun <- get("init", asNamespace("renv"))
do.call(init_fun, list(project = path, restart = FALSE))
col_message("Initializing 'renv' for a reproducible R environment.", verbose = verbose)
}, error = function(e){
col_message("Initializing 'renv' for a reproducible R environment.", success = FALSE)
})
}
#use_git() initialises a Git repository and adds important files to .gitignore. If user consents, it also makes an initial commit.
write(c(".Rhistory",
".Rprofile",
"*.csv",
"*.sav",
"*.sas7bdat",
"*.xlsx",
"*.xls",
"*.pdf",
"*.fff",
"*.log",
"*.tex"),
file = file.path(path, ".gitignore"), append = TRUE)
# Update readme
if(file.exists(file.path(path, "README.md"))){
cont <- readLines(file.path(path, "README.md"), encoding = "UTF-8")
f <- list.files(path)
tab <- matrix(c("File", "Description", "Usage",
"README.md", "Description of project", "Human editable"), nrow = 2, byrow = TRUE)
rproj_name <- paste0(basename(path), ".Rproj")
cont[which(startsWith(cont, "You can load this project in RStudio by opening the file"))] <- paste0("You can load this project in RStudio by opening the file called '", rproj_name, "'.")
tab <- rbind(tab, c(rproj_name, "Project file", "Loads project"))
tab <- describe_file("LICENSE", "User permissions", "Read only", tab, path)
tab <- describe_file(".worcs", "WORCS metadata YAML", "Read only", tab, path)
tab <- describe_file("preregistration.rmd", "Preregistered hypotheses", "Human editable", tab, path)
tab <- describe_file("prepare_data.R", "Script to process raw data", "Human editable", tab, path)
tab <- describe_file("manuscript/manuscript.rmd", "Source code for paper", "Human editable", tab, path)
tab <- describe_file("manuscript/references.bib", "BibTex references for manuscript", "Human editable", tab, path)
tab <- describe_file("renv.lock", "Reproducible R environment", "Read only", tab, path)
tab <- nice_tab(tab)
cont <- append(cont, tab, after = grep("You can add rows to this table", cont))
write_as_utf(cont, file.path(path, "README.md"))
}
# Create first commit
if(use_git){
tryCatch({
git_add(files = "README.md", repo = path)
git_commit(message = "worcs template initial commit", repo = path)
col_message("Creating first commit (committing README.md).", verbose = verbose)
}, error = function(e){
col_message("Creating first commit (committing README.md).", success = FALSE)
})
}
# Connect to remote repo if possible
repo_url <- parse_repo(remote_repo = remote_repo, verbose = verbose)
valid_repo <- !is.null(repo_url)
if(use_git & valid_repo){
tryCatch({
# For compatibility with old and new gert, check which formals it has
Args_gert <- list(
"origin",
url = remote_repo,
repo = path
)
if("remote" %in% formalArgs(git_remote_add)){
names(Args_gert)[1] <- "remote"
} else {
names(Args_gert)[1] <- "name"
}
do.call(git_remote_add, Args_gert)
git_push(remote = "origin", repo = path)
col_message(paste0("Connected to remote repository at ", remote_repo), verbose = verbose)
}, error = function(e){
col_message("Could not connect to a remote 'GitHub' repository. You are working with a local 'Git' repository only.", success = FALSE, verbose = verbose)
})
} else {
col_message("No valid 'GitHub' address provided. You are working with a local 'Git' repository only.", success = FALSE)
}
if("GCtorture" %in% ls()) rm("GCtorture")
}
describe_file <- function(file, desc, usage, tab, path){
if(file.exists(file.path(path, file))){
return(rbind(tab, c(file, desc, usage)))
} else {
return(tab)
}
}
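A small sketch of describe_file(): a row is appended to the README table only when the file actually exists (definition reproduced from above; file names are hypothetical):

```r
# Reproduced from above: conditionally append a row to the README file table
describe_file <- function(file, desc, usage, tab, path){
  if(file.exists(file.path(path, file))){
    return(rbind(tab, c(file, desc, usage)))
  } else {
    return(tab)
  }
}

tmp <- tempfile("demo")
dir.create(tmp)
file.create(file.path(tmp, "LICENSE"))

tab <- matrix(c("File", "Description", "Usage"), nrow = 1)
tab <- describe_file("LICENSE", "User permissions", "Read only", tab, tmp)
# tab gains a row: LICENSE exists in tmp
tab <- describe_file("missing.txt", "Not there", "n/a", tab, tmp)
# tab unchanged: missing.txt does not exist
```

This is why the generated README only lists files that the chosen template options actually created.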
create_man_papaja <- function(man_fn_abs, remote_repo){
if(requireNamespace("papaja", quietly = TRUE)) {
draft(
file = man_fn_abs,
"apa6",
package = "papaja",
create_dir = FALSE,
edit = FALSE
)
manuscript_text <- readLines(man_fn_abs, encoding = "UTF-8")
# Add bibliography
bib_line <- which(startsWith(manuscript_text, "bibliography"))[1]
manuscript_text[bib_line] <- paste0(substr(manuscript_text[bib_line], start = 1, stop = nchar(manuscript_text[bib_line])-1), ', "references.bib"]')
# Add citation function
add_lines <- c(
"knit : worcs::cite_all"
)
manuscript_text <- append(manuscript_text, add_lines, after = (grep("^---$", manuscript_text)[2]-1))
# Add call to library("worcs")
manuscript_text <- append(manuscript_text, recommend_data, after = grep('^library\\("papaja"\\)$', manuscript_text))
# Add introductory sentence
add_lines <- c(
"",
paste0("This manuscript uses the Workflow for Open Reproducible Code in Science [WORCS version ",
gsub("^(\\d{1,}(\\.\\d{1,}){2}).+$", "\\1", as.character(packageVersion("worcs"))),
", @vanlissaWORCSWorkflowOpen2021] to ensure reproducibility and transparency. All code <!--and data--> are available at ",
ifelse(is.null(remote_repo), "<!--insert repository URL-->", paste0("<", remote_repo, ">")), "."),
"",
"This is an example of a non-essential citation [@@vanlissaWORCSWorkflowOpen2021]. If you change the rendering function to `worcs::cite_essential`, it will be removed.",
"",
"<!--The function below inserts a notification if the manuscript is knit using synthetic data. Make sure to insert it after load_data().-->",
"`r notify_synthetic()`"
)
manuscript_text <- append(manuscript_text, add_lines, after = grep('^```', manuscript_text)[2])
# Write
write_as_utf(manuscript_text, man_fn_abs)
} else {
col_message('Could not generate an APA6 manuscript file, because the \'papaja\' package is not installed. Run this code to see instructions on how to install this package from GitHub:\n vignette("setup", package = "worcs")', success = FALSE)
}
}
create_man_github <- function(man_fn_abs, remote_repo){
draft(
file = man_fn_abs,
template = "github_document",
package = "rmarkdown",
create_dir = FALSE,
edit = FALSE
)
repo_address <- remote_repo
manuscript_text <- readLines(man_fn_abs, encoding = "UTF-8")
# Add bibliography and citation function
add_lines <- c(
"date: '`r format(Sys.time(), \"%d %B, %Y\")`'",
"bibliography: references.bib",
"knit: worcs::cite_all"
)
manuscript_text <- append(manuscript_text, add_lines, after = (grep("^---$", manuscript_text)[2]-1))
# Add call to library("worcs")
manuscript_text <- append(manuscript_text, recommend_data, after = grep('^```', manuscript_text)[1])
# Add introductory sentence
repo_url <- parse_repo(remote_repo = remote_repo, verbose = FALSE)
valid_repo <- !is.null(repo_url)
add_lines <- c(
"",
paste0("This manuscript uses the Workflow for Open Reproducible Code in Science [@vanlissaWORCSWorkflowOpen2021] to ensure reproducibility and transparency. All code <!--and data--> are available at ",
ifelse(is.null(remote_repo), "<!--insert repository URL-->", paste0("<", remote_repo, ">")), "."),
"",
"This is an example of a non-essential citation [@@vanlissaWORCSWorkflowOpen2021]. If you change the rendering function to `worcs::cite_essential`, it will be removed.",
"",
"<!--The function below inserts a notification if the manuscript is knit using synthetic data. Make sure to insert it after load_data().-->",
"`r notify_synthetic()`"
)
manuscript_text <- append(manuscript_text, add_lines, after = grep('^```', manuscript_text)[2])
# Write
write_as_utf(manuscript_text, man_fn_abs)
}
#' @importFrom rticles acm_article
create_man_rticles <- function(man_fn_abs, template, remote_repo){
if("rticles" %in% rownames(installed.packages())){
draft(
file = man_fn_abs,
template = template,
package = "rticles",
create_dir = FALSE,
edit = FALSE
)
manuscript_text <- readLines(man_fn_abs, encoding = "UTF-8")
# Add bibliography
bib_line <- which(startsWith(manuscript_text, "bibliography"))[1]
manuscript_text[bib_line] <- "bibliography: references.bib"
# Add citation function
add_lines <- c(
"knit: worcs::cite_all"
)
manuscript_text <- append(manuscript_text, add_lines, after = (grep("^---$", manuscript_text)[2]-1))
# Add call to library("worcs")
add_lines <- c(
'```{r, echo = FALSE, eval = TRUE, message = FALSE}',
recommend_data,
'```',
"",
paste0("This manuscript uses the Workflow for Open Reproducible Code in Science [@vanlissaWORCSWorkflowOpen2021] to ensure reproducibility and transparency. All code <!--and data--> are available at ",
ifelse(is.null(remote_repo), "<!--insert repository URL-->", paste0("<", remote_repo, ">")), "."),
"",
"This is an example of a non-essential citation [@@vanlissaWORCSWorkflowOpen2021]. If you change the rendering function to `worcs::cite_essential`, it will be removed.",
"",
"<!--The function below inserts a notification if the manuscript is knit using synthetic data. Make sure to insert it after load_data().-->",
"`r notify_synthetic()`"
)
manuscript_text <- append(manuscript_text, add_lines, after = (grep("^---$", manuscript_text)[2]))
write_as_utf(manuscript_text, man_fn_abs)
} else {
col_message(paste0('Could not generate ', template, ' manuscript file, because the \'rticles\' package is not installed. Run this code to install the package from CRAN:\n install.packages("rticles", dependencies = TRUE)'), success = FALSE)
}
}
copy_resources <- function(which_files, path){
resources <- system.file('rstudio', 'templates', 'project', 'resources', package = 'worcs', mustWork = TRUE)
files <- list.files(resources, recursive = TRUE, include.dirs = FALSE)
files <- files[files %in% which_files]
source <- file.path(resources, files)
target <- file.path(path, files)
file.copy(source, target, copy.mode = FALSE)
}
nice_tab <- function(tab){
tab <- apply(tab, 2, function(i){
sprintf(paste0("%-", max(nchar(i)), "s"), i)
})
tab <- rbind(tab, sapply(tab[1,], function(i){
paste0(rep("-", nchar(i)), collapse = "")
}))
tab <- tab[c(1, nrow(tab), 2:(nrow(tab)-1)), ]
apply(tab, 1, paste, collapse = " | ")
}
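# To illustrate what nice_tab produces, the snippet below duplicates the
# internal function so it runs standalone; the sample input is made up.
# nice_tab <- function(tab){
#   # Pad every column to the width of its widest entry
#   tab <- apply(tab, 2, function(i){
#     sprintf(paste0("%-", max(nchar(i)), "s"), i)
#   })
#   # Append a row of dashes matching the padded header widths
#   tab <- rbind(tab, sapply(tab[1, ], function(i){
#     paste0(rep("-", nchar(i)), collapse = "")
#   }))
#   # Move the dashes between header and body, then join columns with pipes
#   tab <- tab[c(1, nrow(tab), 2:(nrow(tab) - 1)), ]
#   apply(tab, 1, paste, collapse = " | ")
# }
# tab <- rbind(c("file", "status"),
#              c("manuscript.Rmd", "tracked"),
#              c("references.bib", "tracked"))
# cat(nice_tab(tab), sep = "\n")
# This prints a pipe-delimited table: header row, separator row of dashes,
# then the two data rows, with every column padded to a uniform width.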
#' @title Add Rmarkdown manuscript
#' @description Adds an Rmarkdown manuscript to a 'worcs' project.
#' @param worcs_directory Character, indicating the directory
#' in which to create the manuscript files. Default: '.', which points to the
#' current working directory.
#' @param manuscript Character, indicating what template to use for the
#' 'R Markdown' manuscript. Default: 'APA6'. Available choices include:
#' \code{"APA6", "github_document", "None"} and the templates from the
#' \code{\link[rticles:rticles]{rticles}} package. See Details.
#' @param remote_repo Character, 'https' link to the remote repository for
#' this project. This link should have the form \code{https://[...].git}.
#' This link will be inserted in the draft manuscript.
#' @param verbose Logical. Whether or not to print messages to the console
#' during project creation. Default: TRUE
#' @param ... Additional arguments passed to and from functions.
#' @details Available choices include the following manuscript templates:
#' \describe{
#' \item{\code{'APA6'}}{An APA6 style template from the \code{papaja} package}
#' \item{\code{'github_document'}}{A \code{\link[rmarkdown]{github_document}} from the \code{rmarkdown} package}
#' \item{\code{'acm_article'}}{acm style template from the \code{rticles} package}
#' \item{\code{'acs_article'}}{acs style template from the \code{rticles} package}
#' \item{\code{'aea_article'}}{aea style template from the \code{rticles} package}
#' \item{\code{'agu_article'}}{agu style template from the \code{rticles} package}
#' \item{\code{'ajs_article'}}{ajs style template from the \code{rticles} package}
#' \item{\code{'amq_article'}}{amq style template from the \code{rticles} package}
#' \item{\code{'ams_article'}}{ams style template from the \code{rticles} package}
#' \item{\code{'arxiv_article'}}{arxiv style template from the \code{rticles} package}
#' \item{\code{'asa_article'}}{asa style template from the \code{rticles} package}
#' \item{\code{'bioinformatics_article'}}{bioinformatics style template from the \code{rticles} package}
#' \item{\code{'biometrics_article'}}{biometrics style template from the \code{rticles} package}
#' \item{\code{'copernicus_article'}}{copernicus style template from the \code{rticles} package}
#' \item{\code{'ctex_article'}}{ctex style template from the \code{rticles} package}
#' \item{\code{'elsevier_article'}}{elsevier style template from the \code{rticles} package}
#' \item{\code{'frontiers_article'}}{frontiers style template from the \code{rticles} package}
#' \item{\code{'glossa_article'}}{glossa style template from the \code{rticles} package}
#' \item{\code{'ieee_article'}}{ieee style template from the \code{rticles} package}
#' \item{\code{'ims_article'}}{ims style template from the \code{rticles} package}
#' \item{\code{'informs_article'}}{informs style template from the \code{rticles} package}
#' \item{\code{'iop_article'}}{iop style template from the \code{rticles} package}
#' \item{\code{'isba_article'}}{isba style template from the \code{rticles} package}
#' \item{\code{'jasa_article'}}{jasa style template from the \code{rticles} package}
#' \item{\code{'jedm_article'}}{jedm style template from the \code{rticles} package}
#' \item{\code{'joss_article'}}{joss style template from the \code{rticles} package}
#' \item{\code{'jss_article'}}{jss style template from the \code{rticles} package}
#' \item{\code{'lipics_article'}}{lipics style template from the \code{rticles} package}
#' \item{\code{'mdpi_article'}}{mdpi style template from the \code{rticles} package}
#' \item{\code{'mnras_article'}}{mnras style template from the \code{rticles} package}
#' \item{\code{'oup_article'}}{oup style template from the \code{rticles} package}
#' \item{\code{'peerj_article'}}{peerj style template from the \code{rticles} package}
#' \item{\code{'pihph_article'}}{pihph style template from the \code{rticles} package}
#' \item{\code{'plos_article'}}{plos style template from the \code{rticles} package}
#' \item{\code{'pnas_article'}}{pnas style template from the \code{rticles} package}
#' \item{\code{'rjournal_article'}}{rjournal style template from the \code{rticles} package}
#' \item{\code{'rsos_article'}}{rsos style template from the \code{rticles} package}
#' \item{\code{'rss_article'}}{rss style template from the \code{rticles} package}
#' \item{\code{'sage_article'}}{sage style template from the \code{rticles} package}
#' \item{\code{'sim_article'}}{sim style template from the \code{rticles} package}
#' \item{\code{'springer_article'}}{springer style template from the \code{rticles} package}
#' \item{\code{'tf_article'}}{tf style template from the \code{rticles} package}
#' \item{\code{'trb_article'}}{trb style template from the \code{rticles} package}
#' \item{\code{'wellcomeor_article'}}{wellcomeor style template from the \code{rticles} package}
#' }
# p <- ls(asNamespace("rticles"))
# p <- p[endsWith(p, "_article")]
# out <- c(
# "\\itemize{",
# " \\item{\\code{'APA6'}}{A \\code{\\link[papaja:papaja]{APA6}} style template from the \\code{papaja} package}",
# " \\item{\\code{'github_document'}}{A \\code{\\link[rmarkdown]{github_document}} from the \\code{rmarkdown} package}",
# paste0("  \\item{\\code{'", p, "'}}{", gsub("_article", "", p, fixed = TRUE), " style template from the \\code{rticles} package}"),
# "}"
# )
# out <- paste0("#' ", out)
# cat(out, sep = '\n', file = "clipboard")
## !!!ALSO ADD THEM TO THE PROJECT TEMPLATE FILE WORCS.DCF!!!!
# cat(p, sep = ', ', file = "clipboard")
#' @return No return value. This function is called for its side effects.
#' @examples
#' the_test <- "worcs_manuscript"
#' old_wd <- getwd()
#' dir.create(file.path(tempdir(), the_test))
#' file.create(file.path(tempdir(), the_test, ".worcs"))
#' add_manuscript(file.path(tempdir(), the_test),
#' manuscript = "None")
#' setwd(old_wd)
#' unlink(file.path(tempdir(), the_test))
#' @rdname add_manuscript
#' @export
#' @importFrom rmarkdown draft
#' @importFrom prereg vantveer_prereg
# @importFrom renv init
add_manuscript <- function(worcs_directory = ".", manuscript = "APA6", remote_repo = NULL, verbose = TRUE, ...) {
# collect inputs
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
fn_worcs <- file.path(dn_worcs, ".worcs")
manuscript <- tolower(manuscript)
dots <- list(...)
# ensure path exists
worcs_directory <- normalizePath(worcs_directory)
# Check if valid Git signature exists
#remote_repo <- parse_repo(remote_repo = remote_repo, verbose = verbose)
# Begin manuscript
tryCatch({
# Construct path to filename and create directory
man_dir_rel <- "manuscript"
man_dir_abs <- file.path(worcs_directory, man_dir_rel)
man_fn_rel <- file.path(man_dir_rel, "manuscript.Rmd")
man_fn_abs <- file.path(man_dir_abs, "manuscript.Rmd")
dir.create(man_dir_abs)
if(manuscript == "apa6"){
create_man_papaja(man_fn_abs, remote_repo = remote_repo)
}
# all_rticles <- ls(asNamespace("rticles"))
# all_rticles <- all_rticles[endsWith(all_rticles, "_article")]
# dput(all_rticles, "clipboard")
if(manuscript %in% c("acm_article", "acs_article", "aea_article", "agu_article",
"ajs_article", "amq_article", "ams_article", "arxiv_article",
"asa_article", "bioinformatics_article", "biometrics_article",
"copernicus_article", "ctex_article", "elsevier_article", "frontiers_article",
"glossa_article", "ieee_article", "ims_article", "informs_article",
"iop_article", "isba_article", "jasa_article", "jedm_article",
"joss_article", "jss_article", "lipics_article", "mdpi_article",
"mnras_article", "oup_article", "peerj_article", "pihph_article",
"plos_article", "pnas_article", "rjournal_article", "rsos_article",
"rss_article", "sage_article", "sim_article", "springer_article",
"tf_article", "trb_article", "wellcomeor_article")){
manuscript <- gsub("_article", "", manuscript, fixed = TRUE)
create_man_rticles(man_fn_abs, manuscript, remote_repo = remote_repo)
}
if(manuscript == "github_document"){
create_man_github(man_fn_abs, remote_repo = remote_repo)
}
# Add references.bib
copy_resources(which_files = "references.bib", path = man_dir_abs)
bibfiles <- list.files(path = man_dir_abs, pattern = ".bib$", full.names = TRUE)
if(length(bibfiles) > 1){
worcs_ref <- readLines(bibfiles[endsWith(bibfiles, "references.bib")], encoding = "UTF-8")
bib_text <- do.call(c, lapply(bibfiles[!endsWith(bibfiles, "references.bib")], readLines, encoding = "UTF-8"))
invisible(file.remove(bibfiles))
write_as_utf(c(worcs_ref, bib_text), file.path(man_dir_abs, "references.bib"))
}
write_worcsfile(filename = fn_worcs,
entry_point = man_fn_rel,
modify = TRUE)
col_message("Creating manuscript files.", verbose = verbose)
}, error = function(e){
col_message("Creating manuscript files.", success = FALSE)
})
# End manuscript
}
#' @title Add Rmarkdown preregistration
#' @description Adds an Rmarkdown preregistration template to a 'worcs' project.
#' @param worcs_directory Character, indicating the directory
#' in which to create the manuscript files. Default: '.', which points to the
#' current working directory.
#' @param preregistration Character, indicating what template to use for the
#' preregistration. Default: \code{"cos_prereg"}; use \code{"None"} to omit a
#' preregistration. See Details for other available choices.
#' @param verbose Logical. Whether or not to print messages to the console
#' during project creation. Default: TRUE
#' @param ... Additional arguments passed to and from functions.
#' @return No return value. This function is called for its side effects.
#' @details Available choices include the templates from the
#' \code{\link[prereg:prereg]{prereg}} package, and several unique templates
#' included with \code{worcs}:
# p <- ls(asNamespace("prereg"))
# p <- p[endsWith(p, "_prereg")]
# out <- c(
# "\\itemize{",
# " \\item{\\code{'PSS'}}{Preregistration and Sharing Software (Krypotos,",
# " Klugkist, Mertens, & Engelhard, 2019)}",
# " \\item{\\code{'Secondary'}}{Preregistration for secondary analyses (Mertens &",
# " Krypotos, 2019)}",
# paste0(" \\item{\\code{'", p, "'}}{", gsub("_prereg", "", p, fixed = TRUE), " template from the \\code{prereg} package}"),
# "}"
# )
# out <- paste0("#' ", out)
# cat(out, sep = '\n', file = "clipboard")
## !!!ALSO ADD THEM TO THE PROJECT TEMPLATE FILE WORCS.DCF!!!!
# cat(p, sep = ', ', file = "clipboard")
#' \describe{
#' \item{\code{'PSS'}}{Preregistration and Sharing Software (Krypotos,
#' Klugkist, Mertens, & Engelhard, 2019)}
#' \item{\code{'Secondary'}}{Preregistration for secondary analyses (Mertens &
#' Krypotos, 2019)}
#' \item{\code{'aspredicted_prereg'}}{aspredicted template from the \code{prereg} package}
#' \item{\code{'brandt_prereg'}}{brandt template from the \code{prereg} package}
#' \item{\code{'cos_prereg'}}{cos template from the \code{prereg} package}
#' \item{\code{'fmri_prereg'}}{fmri template from the \code{prereg} package}
#' \item{\code{'prp_quant_prereg'}}{prp_quant template from the \code{prereg} package}
#' \item{\code{'psyquant_prereg'}}{psyquant template from the \code{prereg} package}
#' \item{\code{'rr_prereg'}}{rr template from the \code{prereg} package}
#' \item{\code{'vantveer_prereg'}}{vantveer template from the \code{prereg} package}
#' }
#' @examples
#' the_test <- "worcs_prereg"
#' old_wd <- getwd()
#' dir.create(file.path(tempdir(), the_test))
#' file.create(file.path(tempdir(), the_test, ".worcs"))
#' add_preregistration(file.path(tempdir(), the_test),
#' preregistration = "cos_prereg")
#' setwd(old_wd)
#' unlink(file.path(tempdir(), the_test))
#' @rdname add_preregistration
#' @export
#' @importFrom rmarkdown draft
#' @importFrom prereg vantveer_prereg
add_preregistration <- function(worcs_directory = ".",
preregistration = "cos_prereg",
verbose = TRUE,
...) {
# collect inputs
dn_worcs <- dirname(check_recursive(file.path(normalizePath(worcs_directory), ".worcs")))
#fn_worcs <- file.path(dn_worcs, ".worcs")
worcs_directory <- normalizePath(dn_worcs)
preregistration <- tolower(preregistration)
#dots <- list(...)
# Begin preregistration
tryCatch({
# Different handling for prereg preregistrations and those included in worcs
if(!preregistration %in% c("pss", "secondary")){
p <- ls(asNamespace("prereg"))
if(paste0(preregistration, "_prereg") %in% p) preregistration <- paste0(preregistration, "_prereg")
if(endsWith(preregistration, "_prereg")){
draft(
file.path(worcs_directory, "preregistration.Rmd"),
preregistration,
package = "prereg",
create_dir = FALSE,
edit = FALSE
)
}
} else {
if(file.exists(paste0(preregistration, ".Rmd"))|file.exists("preregistration.Rmd")){
stop("Preregistration already exists.")
} else {
copy_resources(paste0(preregistration, ".Rmd"), worcs_directory)
file.rename(file.path(worcs_directory, paste0(preregistration, ".Rmd")),
            file.path(worcs_directory, "preregistration.Rmd"))
}
}
col_message("Creating preregistration files.", verbose = verbose)
}, error = function(e){
col_message("Creating preregistration files.", success = FALSE)
})
# End preregistration
}
/scratch/gouwar.j/cran-all/cranData/worcs/R/worcs_project.R
# @importFrom rstudioapi versionInfo
.onAttach <- function(libname, pkgname) {
#correct_version <- versionInfo()$version >= "1.1.28"
print_message <- "\033[0;34mWelcome to WORCS: Workflow for Open Reproducible Code in Science. Please read the tutorial before using this package, and consider citing it:\n Van Lissa and colleagues (2020) <doi:10.3233/DS-210031>\033[0m"
if(!has_git()){
print_message <- paste0(print_message, "\n\033[0;31mCould not find a working installation of 'Git', which is required to safeguard the transparency and reproducibility of your project. Please connect 'Git' by following the steps described in this vignette:\n vignette('setup', package = 'worcs')\033[0m")
}
#if(!correct_version){ "RStudio version 1.1.28 or higher is required to use the worcs package."))
packageStartupMessage(print_message)
}
/scratch/gouwar.j/cran-all/cranData/worcs/R/zzz.R
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(worcs)
## ---- scholarbib, echo = FALSE, fig.cap="Exporting a BibTex reference from Google Scholar"----
knitr::include_graphics("scholar_bib.png")
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/citation.R
---
title: "Citing references in worcs"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Citing references in worcs}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(worcs)
```
Comprehensive citation of literature, data, materials, methods, and software
is one of the hallmarks of open science. When using the R-implementation of
WORCS, you will most likely be writing your manuscript in `RMarkdown` format.
This means that you will use Markdown `citekey`s to refer to references, and
these references will be stored in a separate text file known as a `.bib` file.
To ease this process, we recommend following this procedure for citation:
1. During writing, maintain a plain-text `.bib` file with the BibTeX references
for all citations.
+ You can export a `.bib` file from most reference manager
programs; the free, open-source reference manager
[Zotero](https://www.zotero.org/download/) is excellent and user-friendly,
and highly interoperable with other commercial reference managers. [Here](https://christopherjunk.netlify.com/blog/2019/02/25/zotero-RMarkdown/) is a tutorial for using Zotero with RMarkdown.
+ Alternatively, it is possible to make this file by hand, copy and pasting
each new reference below
the previous one; e.g., Figure \@ref(fig:scholarbib) shows how to obtain a
BibTeX reference from Google Scholar; simply copy-paste each reference into
the `.bib` file
2. To cite a reference, use the `citekey` - the first word in the BibTeX entry
for that reference. Insert it in the RMarkdown file like so: `@yourcitekey2020`.
For a parenthesized reference, use `[@citekeyone2020; @citekeytwo2020]`. For
more options, see the [RMarkdown cookbook](https://bookdown.org/yihui/rmarkdown-cookbook/bibliography.html).
3. To indicate a *non-essential* citation, mark it with a double at-symbol: `@@nonessential2020`.
4. When Knitting the document, adapt the `knit` command in the YAML header.
`knit: worcs::cite_all` renders all citations, and
`knit: worcs::cite_essential` removes all *non-essential* citations.
5. Optional: To be extremely thorough, you could make a "branch" of the GitHub repository for the print version of the manuscript. Only in this branch, you set `knit: worcs::cite_essential`. The procedure is documented in [this tutorial](http://rstudio-pubs-static.s3.amazonaws.com/142364_3b344a38149b465c8ebc9a8cd2eee3aa.html).
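Putting steps 2-4 together, a minimal RMarkdown document might look as follows (the title and citekeys are placeholders):

```markdown
---
title: "Example manuscript"
output: html_document
bibliography: references.bib
knit: worcs::cite_all
---

An essential claim [@yourcitekey2020], followed by a non-essential
supporting citation [@@nonessential2020] that is removed when the
`knit` line is changed to `knit: worcs::cite_essential`.
```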
```{r, scholarbib, echo = FALSE, fig.cap="Exporting a BibTex reference from Google Scholar"}
knitr::include_graphics("scholar_bib.png")
```
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/citation.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
## ----setup--------------------------------------------------------------------
library(worcs)
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/endpoints.R
---
title: "Using Endpoints to Check Reproducibility"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Using Endpoints to Check Reproducibility}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```
```{r setup}
library(worcs)
```
This vignette describes the `worcs` package's functionality for automating reproducibility.
The basic idea is that the entry point, the endpoint (or endpoints), and the recipe for getting from the entry point to the endpoint are all well-defined.
In a typical `worcs` project, the entry point will be a dynamic document (e.g., `manuscript.Rmd`), and the endpoint will be the rendered manuscript (e.g., `manuscript.pdf`). The recipe by which to get from the entry point to the endpoint is often a simple call to `rmarkdown::render("manuscript.Rmd")`.
By default, the entry point and recipe are documented in the `.worcs` project file when the project is created, if an R-script or Rmarkdown file is selected as the manuscript.
Endpoints are not created by default, as it only makes sense to define them when the analyses are complete.
Custom recipes can be added to a project using `add_recipe()`.
## Adding endpoints
Users can add endpoints using the function `add_endpoint("filename")`. When running this function, `filename` is added to the `.worcs` project file, and its checksum is computed so that any changes to the contents of the file can be detected.
It is also possible to specify multiple endpoints. For example, maybe the user has finalized the analyses, and wants to track reproducibility for the analysis results - but still wants to make changes to the text of the manuscript without breaking reproducibility checks.
In this case, it is useful to track files that contain analysis results instead of the rendered manuscript. Imagine these are intermediary files with analysis results:
* `descriptives.csv`: A file with the descriptive statistics of study variables
* `model_fit.csv`: A table with model fit indices for several models
* `finalmodel.RData`: An RData file with the results of the final model
These three files could be tracked as endpoints by calling `add_endpoint("descriptives.csv"); add_endpoint("model_fit.csv"); add_endpoint("finalmodel.RData")`.
## Reproducing a Project
A WORCS project can be reproduced by evaluating the function `reproduce()`.
This function evaluates the recipe defined in the `.worcs` project file.
If no recipe is specified (e.g., when a project was created with an older version of the package), but an entry point is defined, `reproduce()` will try to evaluate the entry point if it is an Rmarkdown or R source file.
## Checking reproducibility
Users can verify that the endpoint remains unchanged after reproducing the project by calling the function `check_endpoints()`. If any endpoint has changed relative to the version stored in the `.worcs` project file, this will result in a warning message.
## Updating endpoints
To update the endpoints in the `.worcs` file, call `snapshot_endpoints()`. Always call this function to log changes to the code that should result in a different end result.
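Taken together, the functions from the preceding sections form a simple lifecycle. A minimal sketch (this assumes a `worcs` project in the working directory; the filenames are illustrative):

```r
library(worcs)

# Once the analyses are final, register the files whose contents must not change
add_endpoint("descriptives.csv")
add_endpoint("model_fit.csv")

# Later, or on another machine: re-run the recipe stored in the .worcs file ...
reproduce()
# ... and warn if any tracked endpoint's checksum has changed
check_endpoints()

# After an intentional change to the analysis code, re-record the checksums
snapshot_endpoints()
```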
## Automating Reproducibility
If a project is connected to a remote repository on GitHub, it is possible to use GitHub actions to automatically check a project's reproducibility and signal the result of this reproducibility check by displaying a badge on the project's readme page (which is the welcome page visitors of the GitHub repository first see).
To do so, follow these steps:
1. Add an endpoint using `add_endpoint()`; for example, if the endpoint of your analyses is a file called `'manuscript/manuscript.md'`, then you would call `add_endpoint('manuscript/manuscript.md')`
1. Run `github_action_reproduce()`
1. You should see a message asking you to copy-paste code for a status badge to your `readme.md`. If you do not see this message, add the following code to your readme.md manually:
+ `[](https://github.com/YOUR_ACCOUNT/PROJECT_REPOSITORY/actions/worcs_reproduce.yaml/worcs_endpoints.yaml)`
1. Commit these changes to GitHub using `git_update()`
Visit your project page on GitHub and select the `Actions` tab to see that your reproducibility check is running; visit the main project page to see the new badge in your readme.md file.
## Automating Endpoint Checks
Sometimes, you may wish to verify that the endpoints of a project remain the same but without reproducing all analyses on GitHub's servers. This may be the case when the project has closed data that are not available on GitHub, or if the analyses take a long time to compute and you want to prevent using unnecessary compute power (e.g., for environmental reasons).
In these cases, you can still use GitHub actions to automatically check whether the endpoints have remained unchanged. If your local changes to the project introduce deviations from the endpoint snapshots, these tests will fail.
If you make intentional changes to the endpoints, you should of course run `snapshot_endpoints()`.
You can display a badge on the project's readme page to signal that the endpoints remain unchanged.
To do so, follow these steps:
1. Add an endpoint using `add_endpoint()`; for example, if the endpoint of your analyses is a file called `'manuscript/manuscript.md'`, then you would call `add_endpoint('manuscript/manuscript.md')`
1. Run `github_action_check_endpoints()`
1. You should see a message asking you to copy-paste code for a status badge to your `readme.md`. If you do not see this message, add the following code to your readme.md manually:
+ `[](https://github.com/YOUR_ACCOUNT/PROJECT_REPOSITORY/actions/workflows/worcs_endpoints.yaml)`
1. Commit these changes to GitHub using `git_update()`
Visit your project page on GitHub and select the `Actions` tab to see that your reproducibility check is running; visit the main project page to see the new badge in your readme.md file.
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/endpoints.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(worcs)
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/git_cloud.R
---
title: "Connecting to 'Git' remote repositories"
output: rmarkdown::html_vignette
vignette: >
%\VignetteIndexEntry{Connecting to 'Git' remote repositories}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
library(worcs)
```
The WORCS paper describes a workflow centered around 'GitHub', but there are several other cloud hosting services that offer similar functionality. This vignette describes the process of connecting a `worcs` project to these other cloud hosting services. If you are missing your preferred cloud hosting service, please submit a pull request with a step-by-step tutorial for that service [here](https://github.com/cjvanlissa/worcs/pulls).
## GitLab
### Setup steps (do this only once)
The 'GitLab' website looks and feels almost identical to 'GitHub'. Steps 4 and 5 of the `setup` vignette can be applied nearly without alterations. To connect `worcs` to 'GitLab', I proceeded as follows:
4. Register on GitLab
+ Go to [gitlab.com](https://about.gitlab.com/) and click *Register now*. Choose an "Individual", "Free" plan.
+ Request a [free academic upgrade](https://about.gitlab.com/solutions/education/).
5. Connect 'RStudio' to Git and GitLab (for more support, see [Happy Git with R](https://happygitwithr.com/))
a. Open 'RStudio', open the Tools menu, click *Global Options*, and click *Git/SVN*
b. Verify that *Enable version control interface for RStudio projects* is selected
c. Verify that *Git executable:* shows the location of git.exe. If it is missing, manually fix the location of the file.
d. Click *Create RSA Key*. Do not enter a passphrase. Press *Create*. A window with some information will open, which you can close.
e. Click *View public key*, and copy the entire text to the clipboard.
f. Close 'RStudio' (it might offer to restart by itself; this is fine)
g. Go to [gitlab.com](https://about.gitlab.com/)
h. Click your user icon in the top right of the screen, click *Settings*
i. On the settings page, click *SSH Keys* in the left sidebar
j. Copy-paste the public key from your clipboard into the box labeled *Key*.
k. Click *Add key*.
l. Open 'RStudio' again (unless it restarted by itself)
### Connect new `worcs` project to 'GitLab'
To create a new project on 'GitLab', go to your account page, and click the *Create a project* tile in the middle of the screen.
* Fill in a *Project name*; do not change anything else. Click the green *Create project* button.
* You will see a page titled *"The repository for this project is empty"*. Under the header *"Create a new repository"*, you can see a web address starting with https, like so:
`git clone https://gitlab.com/yourname/yourrepo.git`
* Copy only this address, from `https://` to `.git`.
* Paste this address into the New project dialog window.
## Bitbucket
### Setup steps (do this only once)
The 'Bitbucket' website has cosmetic differences from 'GitHub', but works similarly. Steps 4 and 5 of the `setup` vignette can be applied nearly without alterations. To connect `worcs` to 'Bitbucket', I proceeded as follows:
4. Register on Bitbucket
+ Go to the Bitbucket website and click *Get started for free*. Follow the steps to create your account. Sign in.
+ Bitbucket has largely automated the process of awarding free academic upgrades. If your email address is not recognized as belonging to an academic institution, you can fill out a form to request this upgrade manually.
5. Connect 'RStudio' to Git and Bitbucket (for more support, see [Happy Git with R](https://happygitwithr.com/))
a. Open 'RStudio', open the Tools menu, click *Global Options*, and click *Git/SVN*
b. Verify that *Enable version control interface for RStudio projects* is selected
c. Verify that *Git executable:* shows the location of git.exe. If it is missing, manually fix the location of the file.
d. Click *Create RSA Key*. Do not enter a passphrase. Press *Create*. A window with some information will open, which you can close.
e. Click *View public key*, and copy the entire text to the clipboard.
f. Close 'RStudio' (it might offer to restart by itself; this is fine)
g. Go to the Bitbucket website
h. In the bottom left of the screen, click the circular icon with your initials. Select *Personal settings*
i. On the settings page, click *SSH Keys* in the left sidebar
j. Click *Add key*
k. Copy-paste the public key from your clipboard into the box labeled *Key*, and give it a label. Click the *Add key* button.
l. Open 'RStudio' again (unless it restarted by itself)
### Connect new `worcs` project to 'Bitbucket'
To create a new project on 'Bitbucket', go to your account page, and click *Create repository* in the middle of the page. These steps differ somewhat from the procedure for 'GitHub':
* Enter a *Project name* and a *Repository name*. The latter will be used to connect your `worcs` project.
* __Important:__ Change the setting *Include a README?* to *No*.
* Click "Create repository"
* When the project page opens, you will see the tagline "Let's put some bits in your bucket". Change the dropdown menu just below this tagline from *SSH* to *https*. It will show a web address starting with https, like this:
`git clone https://[email protected]/yourrepo.git`
* Copy only this address, from `https://` to `.git`.
* Paste this address into the New project dialog window.
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/git_cloud.Rmd
## ---- include = FALSE---------------------------------------------------------
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
/scratch/gouwar.j/cran-all/cranData/worcs/inst/doc/reproduce.R