Kvantiteter i kvalitativt bedömda elevtexter – framtida verktyg för rättvis bedömning?
DOI: https://doi.org/10.5617/adno.6357
Keywords: assessing writing, automatic text analysis, national tests
Abstract
One problem in the assessment of writing tests is the lack of inter-rater consistency. Having more than one person assess the same writing test is a common way to increase consistency, but this method is time-consuming and costly. An automated assessment tool that supports the rater in an efficient and predictable way would therefore be a useful aid. This article presents a pilot study for future work on automated assessment of writing tests, investigating the usefulness of four text measures in automated assessment: text length, word length, word variation index, and nominal ratio. The data consists of two corpora of benchmark texts from national tests, Np 1 and Np 3. Each test is given in two school subjects, Swedish and Swedish as a second language. In the analysis, mean and median values are calculated, as well as correlations between the text measures and the grades of the texts. The results show that the differences between the values of the text measures in the two corpora are relatively large. In Np 3, generally higher values are measured for the four text measures than in Np 1. Furthermore, all of the investigated text measures correlate with grades in Np 1, whereas the measures showing the strongest correlation in Np 1 do not correlate significantly with grades in Np 3. The analysis also shows that texts from one of the tests that received the same grade but were produced in different Swedish subjects lie close to each other according to the objective text measures. In future work towards automated assessment of writing tests, the use of the text measures must be adapted to the specific test. Moreover, an automated assessment should cover many more text properties than those captured by the measures investigated here.
Keywords: assessing writing, automated assessment, national tests, student texts, assessment in Swedish and Swedish as a second language
Analyzing Quantity in Qualitatively Assessed Student Texts – a Future Tool for Fair Assessment?
Abstract
In assessing writing, one problem is the lack of rater consistency. Letting more than one rater take part in the assessment of the same test is one way of improving consistency, but this method is time-consuming and expensive. A tool for automated assessment, giving the human rater support in an effective and predictable way, would therefore be useful. This article presents a pilot study in which the usefulness of four automatic text measures in the assessment of writing tests is investigated: text length, word length, word variation and nominal ratio. The data consists of two corpora with benchmark texts from two national tests, Np 1 and Np 3. Each test is given in both Swedish and Swedish as a second language. Mean and median values are calculated, as well as correlations between the text measures and the assessment grades of the texts. The results show notable differences between the values of the text measures from the two tests. In Np 3, the values for the text measures are generally higher than in Np 1. Further, all four text measures correlate significantly with grades in Np 1, but the measures correlating most strongly in Np 1 do not show a significant correlation in Np 3. In one of the tests, texts with the same assessment grade but from different school subjects are very similar according to the text measures. The conclusion is that a tool for automated assessment must be adapted to a specific writing test. Furthermore, an automated assessment should include the analysis of a greater number of text qualities than those that have been the focus of this study.
Keywords: assessing writing, automated assessment, national tests, student texts, assessment in Swedish and Swedish as a second language
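Three of the four surface measures named in the abstract can be computed directly from a tokenized text; the sketch below is illustrative only and is not the authors' implementation. It uses one common formulation of the word variation index (OVIX), and it omits nominal ratio, since that measure requires part-of-speech tags. The sample word list is a made-up example.

```python
import math

def text_measures(tokens):
    """Compute text length, mean word length, and OVIX for a list of word tokens.

    OVIX is taken here as log(N) / log(2 - log(V)/log(N)), where N is the
    number of tokens and V the number of distinct word forms (types); this
    length-compensated type/token measure is one common formulation.
    """
    n = len(tokens)                                  # text length in tokens
    types = len({t.lower() for t in tokens})         # distinct word forms
    mean_word_len = sum(len(t) for t in tokens) / n  # mean word length
    ovix = math.log(n) / math.log(2 - math.log(types) / math.log(n))
    return {"text_length": n, "word_length": mean_word_len, "ovix": ovix}

# Illustrative sample text (nine tokens, five types):
sample = "en text en kort text om en annan text".split()
print(text_measures(sample))
```

A real pipeline for this kind of analysis would also need a tokenizer and, for nominal ratio, a part-of-speech tagger appropriate for Swedish student texts.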
License
Content published in Acta Didactica is, unless otherwise stated, licensed under the Creative Commons BY-NC-ND 4.0 license. Content can be copied, distributed and disseminated in any medium or format under the following terms:
Attribution: You must give appropriate credit and provide a link to the license.
Non-Commercial: You may not use the material for commercial purposes.
No derivatives: If you remix, transform, or build upon the material, you may not distribute the modified material.
No additional restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notice: No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
Authors who publish in Acta Didactica accept the following conditions:
Authors retain copyright to the article and give Acta Didactica the right of first publication, while the article is licensed under Creative Commons CC BY-NC-ND 4.0. This license allows sharing of the article for non-commercial purposes, as long as the author and Acta Didactica, as first place of publication, are credited.
The author is free to publish and distribute the article after publication in Acta Didactica, as long as the journal is credited as the first place of publication. Submissions that are under consideration or accepted for publication in Acta Didactica cannot simultaneously be under consideration for publication in other journals, anthologies, monographs or the like. By submitting a contribution, the author accepts that it is published online in Acta Didactica.