

metadata language: English

Foreign Language Proficiency Test Data from Three American Universities, [United States], 2014-2017

Version
v1
Resource Type
Dataset: administrative records data, survey data
Creator
  • Winke, Paula Marie
  • Gass, Susan M.
  • Soneson, Dan
  • Rubio, Fernando
  • Hacking, Jane F.
Other Title
  • Version 1 (Subtitle)
Publication Date
2020-03-10
Publication Place
Ann Arbor, Michigan
Publisher
  • Inter-University Consortium for Political and Social Research
Funding Reference
  • United States Department of Defense
Language
English
Free Keywords
Schema: ICPSR
foreign languages; higher education
Description
  • Abstract

    In the years 2014 through 2019, three U.S. universities (Michigan State University; the University of Minnesota, Twin Cities; and The University of Utah) received Language Proficiency Flagship Initiative grants as part of the larger Language Flagship, a National Security Education Program (NSEP) and Defense Language and National Security Education Office (DLNSEO) initiative to improve language learning in the United States. The goal of the three universities' grants was to document language proficiency in regular tertiary foreign language programs so that the programs, and ones like them at other universities, could use the proficiency-achievement data to set programmatic learning benchmarks and recommendations, as called for by the Modern Language Association in 2007 and reiterated by the National Standards Collaborative Board in 2015.

    During the first three years of the three university-specific, five-year grants (Fall 2014 through Spring 2017), each university collected language proficiency data during academic years 2014-2015, 2015-2016, and 2016-2017 from language learners in selected regular language programs to document the students' proficiency achievements. University A tested Chinese, French, Russian, and Spanish with the NSEP grant funding, and German, Italian, Japanese, Korean, and Portuguese with additional (in-kind) financial support from within University A. University B tested Arabic, French, Portuguese, Russian, and Spanish with the NSEP grant funding, and German and Korean with additional (in-kind) financial support from University B. University C tested Arabic, Chinese, Portuguese, and Russian with the NSEP grant funding, and Korean with additional (in-kind) financial support from University C. Each university also administered background questionnaires to the students at the time of testing.

    As stipulated by the grant terms, students at the universities were offered up to three proficiency tests each semester: speaking, listening, and reading. Writing was not assessed because the grants did not cover the costs of writing assessments. The grant terms required the universities to use official, nationally recognized, standardized language tests that reported scores on one of two standardized proficiency scales: the American Council on the Teaching of Foreign Languages (ACTFL, 2012) proficiency scale or the Interagency Language Roundtable (ILR; Hertzog, n.d.) proficiency scale. The three universities thus contracted mostly with Language Testing International, ACTFL's official testing subsidiary, to purchase and administer to students the Oral Proficiency Interview - computer (OPIc) for speaking, the Listening Proficiency Test (LPT) for listening, and the Reading Proficiency Test (RPT) for reading. However, earlier in the grant cycle, because ACTFL did not yet have tests in all of the languages to be tested, some of the earlier testing was contracted with American Councils and Avant STAMP, even though those tests are not specifically geared toward the populations of learners in this project.

    Students were able to opt out of testing in certain cases, which varied from university to university. The speaking tests normally occurred within intact classes that came into computer labs to take the tests. Students were often asked to take the listening and reading tests outside of class time in proctored language labs on the campuses on a walk-in basis, or they took the listening and reading tests in a language lab during a regular class session. These decisions were often made by the language instructors and/or the language programs.

    The data are cross-sectional, but certain individuals took the tests repeatedly; thus, longitudinal data sets are nested within the cross-sectional data. The three universities worked mostly independently during the initial year of data collection because the identities of the grant recipients were not announced until weeks before testing was to begin at all three campuses. Each university therefore independently designed its background questionnaire. However, because all three were guided by the same grant rules requiring nationally recognized standardized tests, combining the three universities' test data was straightforward. During year two of data collection, the three universities organized to produce a more unified background questionnaire that posed many of the same questions to students during the third and final (2017) year of testing. Thus, beyond the test scores and simple background data from all three years of testing, this data deposit also contains data from the 2017 background questionnaire questions that were common across all three university questionnaires.

    Acknowledgements: The projects benefited over the years from the help of the following individuals. Daniel R. Isbell, Xiaowan Zhang, Elizabeth Webster, Angelika Kraemer, Shinhye Lee, Jessica Fox, Melody Wenyue Ma, Amaresh Joshi, Bill VanPatten, Charlene Polio, Daniel Reed, Koen Van Gorp, Steven Ross, and Steven Pierce aided the project at Michigan State University. Elaine Tarone, Stephanie Treat, Monica Frahm, Kate Paesani, Carter Griffith, Ellen Wormwood, Anna Hubbard, Diane Rackowski, Gabriela Sweet, Anna Olivero-Agney, Adolfo Carrillo Cabello, Caroline Vang, Beth Dillard, Andrew Wilson, and Colin Delong aided the project at the University of Minnesota, Twin Cities. Catherine Scott, Elvis Ryan, Lissie Ah Yen, Paul Allen, and Jeanine Alesch contributed to The University of Utah project. Special thanks from the three university PIs are extended to Erwin Tschirner, Margaret E. Malone, and Helen Hamlyn for their valuable assistance over the years with data collection, data information, and testing, and to Judith E. Liskin-Gasparro for her assistance with the advanced speaking project that occurred during years 4 and 5 of the project. The PIs at the three universities extend their sincere appreciation to Samuel D. Eisen and Kaveri Advani at DLNSEO and to Carrie Reynolds and Chelsea Sypher at IIE for their grant guidance and overall project support.

    References:
    ACTFL. (2012). ACTFL proficiency guidelines 2012. http://www.actfl.org/publications/guidelines-and-manuals/actfl-proficiency-guidelines-2012
    Hertzog, M. (n.d.). An overview of the history of the ILR language proficiency skill level descriptions and scale. https://www.govtilr.org/Skills/
    Modern Language Association. (2007). Foreign languages and higher education: New structures for a changed world. https://www.mla.org/Resources/Research/Surveys-Reports-and-Other-Documents/Teaching-Enrollments-and-Programs/Foreign-Languages-and-Higher-Education-New-Structures-for-a-Changed-World
    National Standards Collaborative Board. (2015). World-readiness standards for learning languages (4th ed.). Alexandria, VA: ACTFL.
  • Methods

    The dataset 2014-2017 Test Score Qualifiers includes the variables LANG_MAJOR and LANG_MINOR, regarding the foreign language(s) the participants study; LANGUAGE, the target language in which the participant was tested; and RATING, the participant's score. It also includes the demographic variables GENDER (self-reported gender) and ACADEMIC_LEVEL (the participant's year at their university). The dataset 2017 Student Background Info includes the variables LANGUAGE_SPOKEN_AT_HOME, FAMILY_SPEAKS_LANGUAGE, and SPEAKING_GROWINGUP, which refer to whether the participant spoke the target language at home or had exposure to it in the past. The dataset 2017 Student External L2 Learning includes variables on the extent to which participants interacted with their target language outside of class, such as OUTOFCLASS_COMPLETEHW, OUTOFCLASS_WRITEEMAILS, and OUTOFCLASS_LISTENTOMUSIC.
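
    As a minimal sketch of how records with the variables named above might be tallied, assuming invented example values (the actual restricted-use files must be obtained through ICPSR, and their exact formats may differ):

    ```python
    from collections import Counter

    # Illustrative records only: the variable names follow the dataset
    # description above, but the values here are invented for this sketch.
    records = [
        {"LANGUAGE": "Spanish", "RATING": "Intermediate Mid",
         "GENDER": "F", "ACADEMIC_LEVEL": "2"},
        {"LANGUAGE": "Spanish", "RATING": "Intermediate High",
         "GENDER": "M", "ACADEMIC_LEVEL": "3"},
        {"LANGUAGE": "Russian", "RATING": "Novice High",
         "GENDER": "F", "ACADEMIC_LEVEL": "1"},
    ]

    # Count tested participants per target language.
    by_language = Counter(r["LANGUAGE"] for r in records)
    print(by_language["Spanish"])  # 2
    ```

    The same grouping pattern extends to RATING or ACADEMIC_LEVEL, e.g. to cross-tabulate scores by year of study.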
  • Methods

    Presence of Common Scales: For the test data, two scales were used:
    (1) the ACTFL proficiency scale; see the ACTFL (2012) Proficiency Guidelines; and
    (2) the ILR proficiency scale; see Hertzog (n.d.), An overview of the history of the ILR language proficiency skill level descriptions and scale.
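
    For analyses that need the two scales on a common footing, a commonly published approximate crosswalk maps ACTFL major/sub-levels onto ILR levels. The mapping below is a sketch of that crosswalk, not the official scoring rule used in this study, and should be checked against the ACTFL (2012) and ILR documentation before use:

    ```python
    # Approximate ACTFL-to-ILR correspondence, as commonly cited alongside
    # the ACTFL (2012) guidelines; illustrative only.
    ACTFL_TO_ILR = {
        "Novice Low": "0", "Novice Mid": "0", "Novice High": "0+",
        "Intermediate Low": "1", "Intermediate Mid": "1", "Intermediate High": "1+",
        "Advanced Low": "2", "Advanced Mid": "2", "Advanced High": "2+",
        "Superior": "3",
    }

    def to_ilr(actfl_rating: str) -> str:
        """Map an ACTFL proficiency rating onto the ILR scale."""
        return ACTFL_TO_ILR.get(actfl_rating, "unknown")

    print(to_ilr("Advanced Mid"))  # 2
    ```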
  • Methods

    Response Rates: Response rates were not calculated. Intact classes were brought to the computer labs to take the tests. Individual students could opt out of testing by, for example, being absent on the day of testing, or could decide not to come in for additional (normally listening and reading) assessments that they were supposed to take in proctored language laboratories on a walk-in basis outside of class or with their intact class on another day. (Each student was offered the option to take three exams: speaking, listening, and reading.) Students were presented with a background questionnaire at the time of the speaking assessment, but they were not required to answer every question. Some opted not to take the background questionnaire at all.
  • Abstract

    Datasets:

    • DS0: Study-Level Files
    • DS1: Restricted-Use 2014-2017 Test Scores Qualifiers Data
    • DS2: Restricted-Use 2017 Student Background Info Data
    • DS3: Restricted-Use 2017 Student External L2 Learning Data
Temporal Coverage
  • Time period: 2014-08-15--2017-06-15
  • 2014-08-15 / 2017-06-15
  • Collection date: 2014-10-01--2017-06-15
  • 2014-10-01 / 2017-06-15
Geographic Coverage
  • Michigan
  • Minnesota
  • Utah
Sampled Universe
College students enrolled in undergraduate foreign language programs at the three participating universities.
Sampling
The students in the sample were university students enrolled in first- through fourth-year foreign language classes at the three universities. (Please see "Sampled Universe" above for more information about the sample.) The language programs that participated were those that were neither supplemented nor augmented through federal funding; foreign language programs that received additional, program-enhancement funding through the federal government or through the U.S. Department of Defense's Language Flagship were excluded. This study thus included students in language programs considered "regular" university-level language programs. In most cases, intact foreign language classes were selected pseudo-randomly to participate by having their students assessed in speaking, reading, and listening. The method for intact-class selection varied from university to university and from year to year. In some cases, classes with professors or instructors willing or wanting to participate were selected. In other cases, when, for example, two Russian classes were available for testing but only one language lab was available at the time of testing, one of the two Russian classes was selected at random. At times, students self-selected for testing, or students associated with a specific study abroad program or a certificate program were selected for testing (selection information is in "course_note2" in the data set "2014-2017TestScores-Qualifiers"). Most of the testing occurred during the spring semesters of 2015, 2016, and 2017, although additional testing occurred during the fall semesters of 2014, 2015, and 2016. Some of the smaller language programs participated by having some of their classes take a smaller subset of tests, such as just reading and listening.
Collection Mode
  • cognitive assessment test
  • on-site questionnaire
Note
Funding institution(s): United States Department of Defense (2340-MSU-7-PI-093-PO1, 0054-MSU-22-PI-280-PO2, 2340-UMN-4-PI-093-PO2, 0054-UMN-14-PI-280-P03, 2340UTAH9P1093P01, 0054UTAH23PI280P01).
Availability
Delivery
One or more files in this study are not available for download due to special restrictions; consult the study documentation to learn how to obtain the data.
Alternative Identifiers
  • 37499 (Type: ICPSR Study Number)

Update Metadata: 2020-03-10 | Issue Number: 2 | Registration Date: 2020-03-10

Winke, Paula Marie; Gass, Susan M.; Soneson, Dan; Rubio, Fernando; Hacking, Jane F. (2020): Foreign Language Proficiency Test Data from Three American Universities, [United States], 2014-2017. Version 1. Version: v1. ICPSR - Interuniversity Consortium for Political and Social Research. Dataset. https://doi.org/10.3886/ICPSR37499.v1