
Selection of students is a risky experiment

Higher education programmes must openly justify the methods they use to select their students. That is not happening now, writes former education director Klaas Visser, and that is unacceptable.

Text: Klaas Visser - 5 minutes reading time


Illustration: Type tank

A number of programmes in higher professional education and at universities have more applicants than places, so that a numerus fixus is required. The weighted lottery was abolished as of 2016/2017 and all students are now selected locally: programmes themselves determine which students are most suitable. The legislator does impose requirements: the selection must contribute to study success and must be comprehensible, feasible and affordable, both for the prospective student and for the institution. The selection must consist of at least two criteria (not just final exam grades).

The best predictor of study success, the average final exam grade, is unfortunately unusable

It seems like a good idea: no longer the arbitrariness of the lottery, but admitting the best students to programmes with limited places. But how does it work out in practice?
An initial report from the Education Inspectorate shows that students with a non-Western migration background, students with a lower average grade in secondary education, and men are underrepresented in selective programmes. The same applies to students with less highly educated parents and students from lower-income groups. Worrying.

Free hand

This concern is echoed in the recently concluded coalition agreement, which states that selection must be transparent and fair. But are the selection procedures actually fair and transparent? And how do we find out?
Programmes have been given a free hand. As long as at least two criteria are used, everyone may select in their own way. The Inspectorate reports that programmes apply many different, and almost always multiple, selection criteria, but that their weighting is unclear. Selection interviews are held, references are requested, and tests and questionnaires are administered. Knowledge is measured and it is checked whether students have certain skills. Furthermore, measuring instruments are used to determine whether the student is adventurous, goal-oriented, motivated, careful, creative, or whatever else.

The legislator is taking a great risk with this social experiment. Selection has been researched for years, and the recurring finding is that there is a lot of wishful thinking but little evidence. Many selection instruments turn out to predict little, and effects disappear in the longer term. The relationship with dropout and completion is limited or absent, and the methods used often prove to be unreliable and invalid. Interviews, references and questionnaires are believed to predict well, but that is not the case. And yet everyone has been given a free hand, with the danger that in a few years' time we will conclude that the methods used turned out not to be transparent and fair, and that certain groups were excluded.

Unreliable

What is going wrong? Meta-analyses of selection research show that reference letters, individual interviews such as intake interviews, emotional intelligence tests, motivation letters and personal statements predict little or nothing. Yet they are widely used. Self-report is relied on, for example in the form of motivation letters and CVs. The danger of unreliable data is great: with the material supplied, it is not known who actually produced it. In addition, applicants will give the most socially desirable answer. Self-report is therefore effectively useless as a selection tool. Even if a questionnaire predicts something under normal circumstances, we know that students will fake their answers when something is at stake. That is why many questionnaires are useless as a selection tool. There are also training agencies that do everything they can to find out what the selection consists of so that they can coach applicants. Moreover, keeping the selection method secret is very difficult, because the legislator demands transparency.

Another major problem is self-selection: certain groups of pupils will not even apply because they think 'I would never make it anyway', and that is probably partly determined by their background.

Exam grades

The best predictor is still the average final exam grade, which in the past also counted in the weighted lottery. Unfortunately, it is unusable, because selection must take place before the final exam grades are known. Another proven predictor is an admission test based on a curriculum sample or trial study: candidates take the test after following instruction and studying literature that is representative of the programme they want to enter. The nice thing about this method is that the student also gets a good impression of the programme, and a certain willingness to put in effort, which is related to motivation, is measured as well. The disadvantage is that there is a good chance that the test cannot be kept sufficiently secret, and that we do not know how many resources candidates will draw on (training agencies, graduates).

It is conceivable that some programmes require specific skills and that these can be measured reliably and validly. In a letter to the House of Representatives last summer, former minister Jet Bussemaker mentioned two programmes (oral health care and nursing) that report spectacular differences between selected and lottery-admitted students. These data should be scientifically checked and published so that they can be shared with others.

Unacceptable

Many programmes will choose their instruments with the best of intentions. But we see that under the banner of selection everything is being tried, and that is unacceptable when so much knowledge and research is already available. That is why I argue that every programme that selects should be obliged to state the empirical evidence on which its selection procedure is based. In addition, everyone must be open about the criteria used and the weighting applied. If it turns out that this evidence cannot be provided, we must reconsider whether selection really is better than a weighted lottery. Otherwise we will have traded in a system (the lottery) that was random but at least blind to colour and background for a system that is neither fair nor transparent.

Klaas Visser was director of the Psychology programme at the University of Amsterdam from 2007 to 2015, has published articles on study success and selection, and now works as an independent adviser in higher education.
