Student Performance
EECS2030, F19, 6 weeks in (BM)
45 students (a mix of CS/SE/CE) wrote a labtest involving a variation of fizzbuzz (among other things). They had to implement a class that represents a range of integer values (from a specified minimum to a specified maximum). The fizzbuzz method is supposed to return a comma+space separated list of the values in the range, where values divisible by 3 are replaced by fizz, values divisible by 5 are replaced by buzz, and values divisible by both 3 and 5 are replaced by fizzbuzz.
15 of 45 failed to write a correct version of fizzbuzz
An additional 9 of 45 failed to write a version of the class that compiled; whether their version of fizzbuzz was correct or not I do not know.
Summary: In total 24 / 45 (53%) could not write a class that produced the correct string as required by fizzbuzz.
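For concreteness, here is a sketch of the kind of solution the labtest expected. The class name IntRange and its constructor are illustrative assumptions; the source does not give the exact required names.

```java
// Illustrative sketch only: the class and method names are
// assumptions, not taken from the actual labtest.
public class IntRange {
    private final int min;
    private final int max;

    public IntRange(int min, int max) {
        this.min = min;
        this.max = max;
    }

    // Returns the values from min to max (inclusive) as a comma+space
    // separated list, replacing multiples of both 3 and 5 with
    // "fizzbuzz", multiples of 3 with "fizz", and multiples of 5
    // with "buzz".
    public String fizzbuzz() {
        StringBuilder sb = new StringBuilder();
        for (int v = min; v <= max; v++) {
            if (v > min) {
                sb.append(", ");
            }
            if (v % 15 == 0) {
                sb.append("fizzbuzz");
            } else if (v % 3 == 0) {
                sb.append("fizz");
            } else if (v % 5 == 0) {
                sb.append("buzz");
            } else {
                sb.append(v);
            }
        }
        return sb.toString();
    }
}
```

For example, new IntRange(1, 15).fizzbuzz() yields "1, 2, fizz, 4, buzz, fizz, 7, 8, fizz, buzz, 11, fizz, 13, 14, fizzbuzz".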
EECS2030, CS students (JW)
Here is a sequence of essentially the same 80-minute Labtest given in different years (about 6 weeks into EECS2030). The questions in the Labtests are identical; 4 versions were run in labtest mode.
- 2030 F17 Labtest 2: average 26.17%
- 2030 F18 Labtest 1: average 58.4% (their prior EECS1022 was taught by Jackie; 33% failed this Labtest and 33% obtained an A or higher).
- 2030 F19 Labtest 1: average 52% (27% received an A or higher; about 60% failed). 144 CS students wrote in total.
Students were given 3 example tests and a scheduled review session to go over them, so they would have been prepared for the test format.
For those students whose code did not compile (in F19), TAs manually fixed the code and regraded with a 30% penalty. Students seem to have trouble submitting code that compiles!
Students were only given Tests.java to start with, which contains a list of JUnit tests; they were required to infer and implement the necessary classes and methods so that everything compiles and all tests pass. The solution requires multiple classes with associations (single-valued and array-valued) and iteration.
The main difference between the two cohorts is that in F18, a large number of students had taken EECS1022 with me in W18, where I ran the lab sessions in a way close to how we proposed EECS1015: students got to write classes and methods from scratch for each lab exercise, and they were assessed via three network-secure labtests (the first on conditionals, the second on loops, and the third on classes and objects).
Therefore, I believe that the substantially higher average in F18 is due to those students having been taught "more properly" in their EECS1022, consistent with the ideal we are proposing for EECS1015.
Summary: In the Java solution below, getAmountToPay is an example of the hardest loop that students had to write. It involves iterating over an array of orders and summing up the price of each order, so this is an example of a simple loop with simple OO.
public class Member {
    private double balance;
    // Each order has its price and quantity.
    private Order[] orders;
    private int noo; // number of orders stored so far
    Member(double balance) {
        this.balance = balance;
        orders = new Order[30];
    }
    // From the current list of stored orders,
    // sum up the amount to pay.
    double getAmountToPay() {
        double amount = 0;
        for (int i = 0; i < noo; i++) {
            Order o = orders[i];
            amount += o.getPrice() * o.getQuantity();
        }
        return amount;
    }
    // Add a new order to the array of orders.
    void addOrder(Order o) {
        orders[noo] = o;
        noo++;
    }
}
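The Order class referenced above is not shown in the source; a minimal version consistent with the calls made in getAmountToPay would be (field and constructor shapes are inferred, not given):

```java
// Hypothetical companion class: its shape is inferred from the
// calls in Member (getPrice, getQuantity), not from the source.
public class Order {
    private final double price;
    private final int quantity;

    public Order(double price, int quantity) {
        this.price = price;
        this.quantity = quantity;
    }

    public double getPrice() { return price; }

    public int getQuantity() { return quantity; }
}
```

With this class, a Member holding a single Order with price 2.5 and quantity 4 would report an amount to pay of 10.0.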
EECS2030Z, Feb 2019 (HT) 75 students
Section Z data for Winter 2019 is as follows. Number of students attended: 75; average class grade: 58.6%.
- 20/75 (26.6%) students got 100%
- 9/75 (12%) students between 80% and 95%
- 11/75 (14.6%) students between 60% and 80%
- 9/75 (12%) students between 40% and 60%
- 7/75 (9.3%) students between 20% and 40%
- 5/75 (6.6%) students between 1% and 20%
- 14/75 (18.6%) students got zero
Summary: about 35% obtained 40/100 or less on computing with simple loops (computing the maximum of an array, reversing an array, etc.).
Not surprisingly, 18% managed to obtain zero; yet over 26% got full marks! 46% are at the D+ level or below. Over one third are failing quite badly (40 or below).
/*
 4 questions, 80 minutes. An example:
 Your task: Given an integer array,
 calculate the average of its elements.
 Implement the body of this method,
 so that its return value is as expected.
*/
public static double averageOf(int[] ia) {
    double sum = 0;
    for (int i = 0; i < ia.length; i++) {
        sum += ia[i];
    }
    double average = sum / ia.length;
    return average;
}
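The other loop tasks mentioned in the summary above (maximum of an array, reversing an array) admit similarly short solutions. Here is a sketch in the same static-method style as averageOf; the exact question wordings and method names are not in the source:

```java
// Sketches of the other loop tasks; method names are illustrative.
public class ArrayTasks {
    // Returns the largest element of a non-empty integer array.
    public static int maximumOf(int[] ia) {
        int max = ia[0];
        for (int i = 1; i < ia.length; i++) {
            if (ia[i] > max) {
                max = ia[i];
            }
        }
        return max;
    }

    // Reverses the array in place by swapping symmetric pairs.
    public static void reverse(int[] ia) {
        for (int i = 0, j = ia.length - 1; i < j; i++, j--) {
            int tmp = ia[i];
            ia[i] = ia[j];
            ia[j] = tmp;
        }
    }
}
```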
EECS1022, August/Oct 2018, 202 CS students (JW)
202 of our EECS1022 students wrote a Labtest of 4 questions, given 80 minutes, towards the end of the course. This Labtest is similar to the one above given by Hina in the subsequent course EECS2030.
This Labtest would normally have been written in March, but in this term there was a strike, so it was written in August 2018 (and again in October 2018, with similar results).
Students attended extra lectures and scheduled labs before the Labtest. They were also given a number of loop examples to work on in preparation for the Labtest. In the Labtest, they were given 4 loops to write in Java, ranging from "easier" to "harder": e.g. calculate the average of an integer array (easier) or shift an integer array to the right (harder). They were provided with expected outputs so that they knew what to print to the terminal.
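A sketch of the "harder" shift task. The exact specification is not given in the source, so a circular right shift (the last element wraps to the front) is assumed here:

```java
// Assumed reading of the task: circular shift right by one position.
public class ShiftRight {
    public static void shiftRight(int[] ia) {
        if (ia.length == 0) {
            return;
        }
        int last = ia[ia.length - 1];
        // Move each element one slot to the right, from the end down,
        // so no element is overwritten before it is copied.
        for (int i = ia.length - 1; i > 0; i--) {
            ia[i] = ia[i - 1];
        }
        ia[0] = last;
    }
}
```

For instance, {1, 2, 3, 4} becomes {4, 1, 2, 3}.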
The results are as follows:
- The majority (about 55%) were at the D+ level or less.
- About 35% failed.
- Some students did really well [100/100]
Summary: This group did not do well on basic computational thinking. Similar results were obtained in the deferred Labtest in October. This means that after doing the introductory programming course EECS1012, and then more than half way through EECS1022, they were still having difficulty with simple loops using integer arrays. Although some might attribute the poor performance to the strike, I don’t believe that is the full story.
EECS2011 (NV, comments Feb 2019)
Let me start 'my 2 cents' by saying that I taught 2011 last fall (2018), for a class of about 110 students, so I am reasonably fresh on the topic. And, important to point out, approximately 30% of the students were 'retakes' from previous terms. …
The reality 'on the ground' is far gloomier, and in my opinion we should first make sure that in 2011 we fill in all the 'fundamental holes' left from the first-year courses (or, you could also say, help the students 'connect the dots') before worrying about how to add more advanced topics. And yes, while in an ideal world you would get a third-year student very proficient in invariants and correctness proofs, you definitely WOULD NOT WANT TO GET a student who struggles with the primitive data types and basic linked lists! As an illustration of how alarming things are, let me point out: 1) When, in the 3rd week of lectures(!), I asked the students to give me a practical example of when they would choose to declare a (Java) variable as 'short' instead of 'long', there was complete silence. Then, when I suggested that they think along the lines of arrays and memory management, there was an 'a-ha' moment, followed by: "But they never taught us to think that way." This was just one of many, many cases where I had to go 'back to the basics' of programming and the logic behind ...
2) As for linked lists (which are the key to understanding any advanced data structure and most of the material from Goodrich), I do not even know where to start! It is not an exaggeration to say that 80% of students came to my 2011 with very little (if any) understanding of what linked lists are and how to code them - and this applies even to those who were retaking 2011! The best proof of this is the midterm score analysis that I did for the purposes of departmental CLO data collection. As you can see from the attached document, the class average for a VERY BASIC linked-list question was 3.6/10 (after my very generous grading). Realistically it should have been 2/10!
As you can sense from the above, I am still very frustrated with what I saw last term in 2011. And the situation was/is far worse than in any of the previous years I had taught the course. So, to conclude: from my 'on the ground' experience, I would urge the curriculum committee to base any proposal for a 2011 change on real observations and real students, versus an idealized upper-year perspective.
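For context, a "very basic" linked-list question at this level typically amounts to code like the following (illustrative only; the actual midterm question is not reproduced in the source):

```java
// A minimal singly linked list of ints, of the kind a basic
// midterm question would ask students to write or trace.
public class IntList {
    private static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    private Node head;

    // Inserts a value at the front of the list.
    public void addFirst(int value) {
        Node n = new Node(value);
        n.next = head;
        head = n;
    }

    // Counts the nodes by walking the chain of next references.
    public int size() {
        int count = 0;
        for (Node cur = head; cur != null; cur = cur.next) {
            count++;
        }
        return count;
    }
}
```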
Study: Academic Skill Deficiencies in 4 Ontario Universities
The recent study Academic Skill Deficiencies in Four Ontario Universities (Grayson et al.) has some useful data.
There is troubling evidence that the secondary-school system is failing to meet basic pedagogical objectives and that the problems persist in the universities. The study team, co-led by York University Department of Sociology Professor J. Paul Grayson and Western University Department of Sociology Professor James Côté, included associate professor of sociology Robert Kenedy of York University, and researchers Liang Hsuan Chen of the University of Toronto Scarborough and Sharon Roberts of the University of Waterloo.
To gauge the extent of these problems, the two professors first conducted a survey of all 22,000 students registered in the faculty of Liberal Arts and Professional Studies (LA&PS) at York University. A year later, students in similar areas of study at the University of Toronto Mississauga (UTM), the University of Toronto Scarborough (UTSC), Western University, and the University of Waterloo were sent the same survey.
The pooled surveys at York, Western, Waterloo and Toronto, which together enrol 41 per cent of Ontario undergrads, found that "only about 44 per cent of students felt they had the generic skills needed to do well in their academic studies, 41 per cent could be classified as at risk in academic settings because of limited levels of basic skills, and 16 per cent lacked almost all the skills needed for higher learning."
From the executive summary of the report:
It is generally assumed that certain academic competencies are essential for university graduation, occupational success, and democratic citizenship. Unfortunately, many university-based instructors find that a good number of their students are weak in terms of key academic skills, such as analysis and research. Instructors’ assessments are similar to those of some prominent Canadian employers. The latter have lamented the absence of important skills, such as writing ability, among the graduates they hire. This somewhat negative characterization of Canadian graduates has been verified by studies conducted by, among other agencies, Statistics Canada. It finds a surprisingly low level of literacy and numeracy among a sizable proportion of Canadians with degrees.
Because of a mounting frustration resulting from having to teach a growing number of students unprepared for their university studies, two of this report’s authors (Grayson and Kenedy) decided to investigate if undergraduate students in their faculty agreed that they lacked key academic skills. With this intent in mind, in late 2017, they surveyed students from all disciplines and levels of study enrolled in the faculty of Liberal Arts and Professional Studies at York University. In this faculty, students take courses in the humanities and social sciences and in some professional fields.
In the 2017 survey, 50 questions focusing on skills were asked of nearly 1,000 students. A number of other demographic and background questions were also included in the survey. The skill questions focused on abilities in writing, taking tests, analysis, time and group management, research, giving presentations, and elementary numeracy. Students were asked to rate their ability with these skills when completing tasks requiring them. The long-term goal of collecting this information was to provide foundational assessments of skills and knowledge that could eventually be used to make curricular changes, if a significant proportion of students reported serious academic skill deficiencies.
The results of the survey confirmed both of the instructors’ experiences with their own students, the perceptions of employers, and the results of recent studies such as those reported by Statistics Canada. Overall, based on an advanced statistical classification algorithm, on this suite of questions, only 51% of survey participants scored as “functional” and prepared for the rigors of university life, occupational success, and democratic citizenship. A further 27% were classified by this algorithm as “at-risk.” The remaining 22% were categorized as “dysfunctional.” Grayson and Kenedy wondered to what extent this was a York problem. In order to answer this question, they invited colleagues at Western University, the University of Waterloo, the University of Toronto Mississauga (UTM), and the University of Toronto Scarborough (UTSC) to replicate the study. Each agreed to conduct exactly the same survey in 2018. In order to minimize bias, the surveys were carried out at the same time of year as the one conducted at York. The surveys at the other universities also targeted students in faculties similar to Liberal Arts and Professional Studies at York.
When the results of these additional surveys were analyzed, it was clear that York is not an isolated case. After pooling the results of the five surveys, we found that only 44% of survey participants could be classified as functionally prepared to do well in their university studies. An almost equal percentage (41%) were identified as at-risk. The remaining 16% were classed as dysfunctional. Family background did not make a difference: neither first-generation university attenders nor international students were more likely to be dysfunctional or at-risk.
Differences among universities in terms of this skill-level classification system were minimal. Also, in each university and overall, the percentages of students in various skill groupings did not vary by year of study: apparently large numbers of students enter, and leave, university without having mastered some very basic academic skills.
For various reasons, universities are concerned with outcomes such as student learning, retention, and overall satisfaction with the university experience. But to what extent do the skill deficiencies we discovered have consequences for these outcomes? The answer is that they have a considerable impact. For instance, students in both the at-risk and dysfunctional skill-level groups were less likely than students in the functional group to achieve high grades. In addition, they were considerably more likely to consider leaving their campus prior to degree completion and to be dissatisfied with their university experience. These effects held independently of students having good grades in high school, of being a domestic or international student, of being the first in their families to attend university, of gender, and of having spoken English in their homes while growing up.
In broader terms, the data we collected have important implications. Although the provincial secondary educational system has clearly articulated and laudable objectives, these desiderata are not being met to the extent that most people assume. Our results suggest that large numbers of unprepared graduates of Ontario high schools enter the province’s universities. Moreover, their deficiencies are often not remedied over the course of their studies. As a result, it is likely that many employers end up with new employees who are unable to live up to expectations regarding their ability to process more abstract types of information.
What is to be done? Most importantly, steps need to be taken to ensure that, consistent with provincial objectives, graduates of Ontario's secondary schools possess the basic academic skills necessary for university success, future employment, and democratic citizenship. Once these skills are established, they need to be further honed at the university level.
At the same time, hopefully as an interim measure, universities themselves could consider ways of utilizing the curriculum to reduce their students' skill deficits. Consistent with this possibility, in our study, 69% of students felt the need for a "compulsory, first-year credit course that would cover subjects such as university standards, criteria, and procedures; critical thinking; effective studying; time management; improving writing; and jobs in the field in which you are majoring." Such a course could be based on, and be a remedy for, some of the skill deficiencies identified in this report.
Page 7: In 2017, a student wrote to Professors Grayson and Kenedy, saying, “IM IN FIRST YEAR AND IM DOING SO BAD AND IM SO SCARED BC IM FINDING IT REALLY HARD TO MANAGE MY TIME AND MY ANXIETY HAS GOTTEN SO BAD AND IDK WHAT TO DO AND IM SCARED OF GETTING KICKED OUT AND IM JUST SCARED.” Another student wrote, “Not enough time on tests. Have difficulty citing. We are not taught. Have difficulty with multiple-choice, short and long answer questions on tests. We should be taught how to cite properly!”
Page 29: Among math and science students, high school grades did not vary in a statistically significant way by skill category. Among the functional, 82% received grades greater than or equal to A. The figures for the at-risk and dysfunctional were 73% and 77% respectively. In other words, students in math or science could be dysfunctional and still have received very high grades in high school.
A University System in Crisis
One of the Grayson et al study’s authors, James Côté, co-authored the 2007 book, Ivory Tower Blues: A University System in Crisis, which reported the results of an in-depth analysis of the self-esteem movement’s consequences at Western University in the faculties of arts, social science and natural science. The authors wrote:
Students with high self-esteem based on false feedback are much more difficult to teach because many cannot take criticism and feedback without assuming that it is personal. Experimental research suggests that such people attempt to preserve their self-esteem, not by altering their behaviour so that it becomes more based in reality, but by attacking the source of the threat.
More than one-third of the professors they interviewed identified fewer than 10 per cent of their students as “fully engaged.” Over 80 per cent of professors said they had dumbed down their course work, and had reduced the frequency and difficulty of assignments. But even when course work is made easier, the students are not prepared.