Jan. 30th, 2018

 Varsavsky, C. (1995). The design of the mathematics curriculum for engineers: A joint venture of the mathematics department and the engineering faculty. European Journal of Engineering Education, 20(3), 341-345.
 
Varsavsky surveyed the convenors of 130 engineering courses at their institution (Monash, Caulfield campus), asking what maths topics were needed, by whom and when. The paper was interesting for its broad overlap with my local situation: the maths taught is similar, the challenges of diversity are similar, and so is the lack of communication between the parties involved. The survey found, as I have found in a similar survey of my own, that the overall maths needs are large, but most courses use very little, or only the basics. The paper closes with some sensible suggestions for setting up an engineering maths curriculum. I am interested to see that, once again, proofs don’t get a mention.
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Perkin, G., Pell, G. & Croft, T. (2007). The Mathematics Learning Support Centre at Loughborough University: staff and student perceptions of mathematical difficulties. Engineering Education, 2(1), 47-58.
 
Loughborough University has a Mathematics Learning Support Centre where students can go for help with their maths problems. The authors report on a survey run across staff and students looking at perceptions of difficulties and reasons for attending the centre. The paper reports on a few questions, with staff and student responses. The question of most interest to me was the one on maths difficulties, specifically basic manipulation: staff see it as a major source of student difficulty, while students see it as a minor issue.
 
“Regarding basic manipulation, there appears to be a huge chasm between staff and student perceptions. Staff perceive a fundamental weakness, whereas students see a problem with the question being posed, which again indicates that staff need a greater awareness of current school mathematics syllabi and the level at which topics are delivered.” (p. 56)
“An interesting point to emerge is that students do not seem to appreciate that it is often lack of understanding of basic and fundamental mathematics that is at the root of their problems. This has implications for any attempts by staff to encourage students to undertake remedial work since students do not believe that they need to.” (p. 56)
 
This echoes what I see in my classes as well. Students’ algebraic manipulative skills are very weak, yet this is not recognised as the big issue it is by the students. Encouraging students to attend to this weakness falls on deaf ears. In my case, I have run compulsory assessments on factorising and manipulating logs (for example) with accompanying worksheets. I insist on an 80% pass and students can rewrite as many times as necessary. We have sessions in class working on these topics. Even with all of this, the students do not value basic manipulation and spend as little time as possible developing these skills. These problems appear to be global.
 
 Redish, E. & Smith, K.A. (2008). Looking beyond content: Skill development for engineers. Journal of Engineering Education, 97(3), 295-307.
 
Redish and Smith summarise key findings in cognitive and neurological research on how learning occurs and is manifested in the brain. They cite Pellegrino’s three main threads of educational research: constructivism, knowledge organisation and metacognition, and his three components of educational practice: curriculum, instruction and assessment. They proceed to link Pellegrino’s summary of educational research with cognitive research into learning, giving what I found was a really useful summary of various key cognitive findings. Their theoretical framework for learning is based on the concepts of activation, association, compilation and control.
 
Activation refers to neurons becoming entrained and working together in clusters. Activation is related to the differences between long-term memory and working memory, and to how working memory can only handle about seven “chunks” of knowledge at a time. This essentially constructivist relationship of neurological activation to learning supports the idea that student misconceptions are not rigid but can be changed.
Association refers to entrained neurological pathways becoming associated with one another through repeated practice to create schemas, “skeletal representation[s] of knowledge abstracted from experience that organizes and guides the creation of particular representations in specific contexts” (p. 298). Associational paths link to Pellegrino’s knowledge organisation. Association is highly context dependent: change the context and different associations are made. This can be disadvantageous in educational contexts as correcting a student’s misconceptions in one arena can fail to transfer to another due to different associational paths: “students build up alternative associational paths; one set of knowledge is activated specifically for a physics class but the other intuitive knowledge is not erased but remains for activation in all other situations” (p. 298). The authors report that this idea links to work in cognitive science on “conditioning, attempts to eliminate conditioning (extinction), and its reemergence (relapse)” (p. 299).
Compilation is related to the idea that “items that are originally weakly associated may become very strongly tied together” (p. 299). This is the process by which chunks are created. Experts have many actions strongly tied together, reducing the cognitive load of certain types of problem solving, while novices experience greater cognitive load. Experts have to “reverse engineer” how certain types of problems are solved in order to teach solution processes to students. It can help to observe students solving problems, to see where their difficulties lie.
Control refers to executive control and metacognition. We are constantly (and usually unconsciously) deciding what input from the external world to pay attention to. These decisions are made by control schema which have developed over time and experience. “These control schemas have three important consequences: they create context dependence, they give us a variety of resources for building new knowledge and solving problems, and they control which of these resources we bring to bear in given circumstances.” (p. 299). Context dependence: We all have developed “expectation schemas” which consider available input and decide what is important and how to react.
Epistemological resources: we understand that knowledge can be transmitted and also created. Some knowledge is outside our control and other is within our control.
Epistemological games, or e-games, is the term given to “a reasonably coherent schema for creating new knowledge by using particular tools” (p. 300). Such e-games include an understanding of beginning and end, what information to use, what structure to impose, and so on, as in the typical process of solving a physics problem. Choice of e-game is crucial to successful problem solving.
 
Having provided a summary of cognitive research into learning, the authors take a look at the role of mathematics in engineering and how the differences between how maths is typically taught and how maths is used by engineers can create some serious obstacles to learning. In engineering, students are expected to learn the “syntax” of mathematics, but also what it means and how to use it effectively in engineering contexts. How a mathematician or an engineer interprets symbols can differ widely: to a mathematician the letters chosen to represent the variables might be arbitrary, but to an engineer they carry meaning. “we [engineers] tend to look at mathematics in a different way from the way mathematicians do. The mental resources that are associated (and even compiled) by the two groups are dramatically different. The epistemic games we want our students to choose in using math in science require the blending of distinct local coherences: our understanding of the rules of mathematics and our sense and intuitions of the physical world.” (p. 302). Differences between how maths is used in mathematics and in engineering include (p. 302):
  • Equations represent relationships among physical variables, which are often empirical measurements
  • Symbols in equations carry information about the nature of the measurement beyond simply its value. This information may affect the way the equation is interpreted and used.
  • Functions in science and engineering tend to stand for relations among physical variables, independent of the way those variables are represented
Students can fail to make the connection between symbols and their physical meaning and their relationship to empirical measurement.

The authors provide a schematic describing modelling: define your physical system, represent the system mathematically, process that mathematical system as appropriate, interpret your results. Later they report that one view of modelling is that it is inseparable from problem solving. Redish and Smith, however, feel that modelling is easier to teach than problem solving and that, in practising modelling, much problem solving is learned along the way. They support this view by framing problem solving in similar terms to their schematic of the modelling process. I am a proponent of the Polya framework for problem solving, which differs somewhat from the Redish and Smith framework. There are similarities, however, and a fruitful comparison of the two could no doubt be made.
 
While the role of proof in engineering mathematics is not a focus of this paper, the authors do have a few things to say which support a relatively high presence of proof.
“Often what our students learn in our classes about the practice of science and engineering is implicit and may not be what we want them to learn. For example, a student in an introductory engineering physics class may learn that memorizing equations is important but that it is not important to learn the derivation of those equations or the conditions under which those equations are valid. This metamessage may be sent unintentionally.” (p. 297)
“Often, in both engineering and physics classes, we tend to focus our instruction on process and results. When we teach algorithms without derivation, we send our students the message that “only the rule matters” and that the connection between the equation we use in practice and the assumptions and scientific principles that are responsible for the rule are irrelevant. Such practices may help students produce results quickly and efficiently, but at the cost of developing general and productive associations and epistemic games that help them know how their new knowledge relates to other things they know and when to use it. As narrow games get locked in and tied to particular contexts, students lose the opportunity to develop the flexibility and the general skills needing to develop adaptive expertise.” (p. 303)
 
There are some “bad habits” which can be learned in maths class. Giving algorithms without derivation gives the impression that the assumptions and scientific principles behind the rules are not important (see quote above). Another bad “maths” habit is substituting numbers early in the problem-solving process. This tends to hide associations between variables and inhibits reflecting back on the process. Another habit learned in maths class is to elevate the “processing” part of the modelling process above the others, thereby hindering transfer of the skills to other courses more based on the whole modelling package.
 
The authors close their paper with a discussion of using cooperative learning in a course teaching modelling. They give some interesting examples of relating physical reality to mathematical models and vice versa.
 
I thoroughly enjoyed reading this paper. Some papers I can summarise in a paragraph. This one took me two pages simply to summarise and I also have four pages of notes! I already knew a lot of the cognitive findings, but some was new to me, such as epistemic games and conditioning. This is the sort of paper one reads and rereads many times.
 
 Leppävirta, J. (2011). The impact of mathematics anxiety on the performance of students of electromagnetics. Journal of Engineering Education, 100(3), 424-443.
 
The author investigated the relationship between mathematics anxiety and performance in an electromagnetics course. There is a literature review of studies on mathematics anxiety showing, in general, a correlation between high anxiety and poor performance. The causal relationship tends to be less clear, although there is some evidence that poor prior performance leads to higher anxiety, which in turn impacts negatively on performance in procedural tasks. Two maths anxiety scales are discussed, the Fennema-Sherman scale and the MARS scale. These and others were adapted to make the Electromagnetics Mathematics Anxiety Rating Scale (EMARS), which was used in this study. The scale had several subscales measuring perceived usefulness of the course, confidence, interpretation anxiety, fear of asking for help, and persistence. The data and results are discussed in some detail. Conclusions include that high-anxiety students perform less well in procedural work than low-anxiety students, but that conceptual performance is less clearly aligned with anxiety. In addition, high-anxiety students felt less confident about their maths ability and also self-described as less persistent in solving mathematical problems. The author closes with the suggestion that assessment should be aligned more with conceptual understanding than with procedural processes.
 
 Engelbrecht, J., Bergsten, C. & Kågesten, O. (2012) Conceptual and procedural approaches to mathematics in the engineering curriculum: student conceptions and performance. Journal of Engineering Education, 101(1), 138-162.
 
The authors develop an instrument to measure performance, confidence and familiarity with both procedural and conceptual problems in mathematics. The students were second-year engineering students in two institutions in two countries – South Africa and Sweden. The authors provide definitions of the relevant terms and take issue with some education literature using terms like “conceptual” and “knowledge” too loosely and conflating them with other terms. The paper presents detailed data and analysis, finding differences and similarities across different groups (read the paper for details), concluding that “the use of mathematics in other subjects within engineering education can be experienced differently by students from different institutions indicating that the same type of education can handle the application of mathematics in different ways at different institutions.” (pp. 158-159)
 
 Pepper, R.E., Chasteen, S.V., Pollock, S.J. & Perkins, K.K. (2012) Observations on student difficulties with mathematics in upper-division electricity and magnetism. Physical Review Special Topics – Physics Education Research, 8(010111), 1-15.
 
The authors focus in this paper on the mathematical difficulties experienced by physics and engineering students in an “upper division” electricity and magnetism course. They categorise the difficulties as (p. 2):
  • “Students have difficulty combining mathematical calculations and physics ideas. This can be seen in student difficulty setting up an appropriate calculation and also in interpreting the results of the calculation in the context of a physics problem. (However, students can generally perform the required calculation.)
  • Students do not account for the underlying spatial situation when doing a mathematical calculation.
  • Students do not access an appropriate mathematical tool. Students may instead choose a mathematical tool that will not solve the relevant problem, or may choose a tool that makes the problem too complex for the student to solve.”
Using the troublesome concepts of Gauss’s Law, various vector calculus techniques, and electrical potential, the authors illustrate these three categories of difficulty with many examples of specific problems and student responses. Of particular interest to me were the vector calculus issues. They find that students struggle with the “vector nature” of a vector field, finding it hard to think of magnitude and direction simultaneously. Additionally, students can calculate gradient, divergence and curl easily, but struggle with the physical interpretation of these quantities. The authors hypothesise that, because of the way vector calculus is taught in mathematics class, students fail to see integrals as “sums of little bits of stuff”, which I would like to think is not true in my maths classes, where the sum nature of integrals is repeatedly emphasised. Another vector integration difficulty shows up when students struggle to express dA and dV in suitable coordinate systems.
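As a concrete illustration of the gap the authors describe, here are the standard definitions (my addition, not taken from the paper): the divergence students can compute mechanically, its coordinate-free meaning as flux per unit volume, and the volume and area elements in spherical coordinates that students struggled to write down.

```latex
% Divergence as usually computed in a maths class (Cartesian components):
\nabla \cdot \mathbf{F}
  = \frac{\partial F_x}{\partial x}
  + \frac{\partial F_y}{\partial y}
  + \frac{\partial F_z}{\partial z}

% Its physical interpretation: net outward flux per unit volume,
% in the limit of a region shrinking around the point:
\nabla \cdot \mathbf{F}(\mathbf{r})
  = \lim_{V \to 0} \frac{1}{V} \oint_{\partial V} \mathbf{F} \cdot d\mathbf{A}

% The elements dV and dA in spherical coordinates (r, \theta, \phi):
dV = r^2 \sin\theta \, dr \, d\theta \, d\phi ,
\qquad
dA = r^2 \sin\theta \, d\theta \, d\phi \quad \text{(on a sphere of radius } r\text{)}
```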
 
They discuss methods they have used in classroom pedagogy, out of classroom assistance and transformed resources to address these difficulties. Even with all their changes, they find that certain problems remain challenging for the students. They argue that these concepts are hard to understand and that the instructors are not keeping this well enough in mind. They discuss ways of moving forward. I thoroughly enjoyed this paper, found it pertinent to current and future work of mine, and benefited from the thorough literature review.
 
 Huang, H., Wang, J., Chen, C. & Zhang, X. (2013). Teaching divergence and curl in an Electromagnetic Field course. International Journal of Electrical Engineering Education, 50(4), 351-357.
 
The authors teach an electromagnetic field course and recognise that physical interpretation of divergence and curl are a difficulty for students, even though the actual calculations are not. They suggest a teaching method which begins with capturing the students’ interest through fictional and theoretical invisibility cloaks. The maths behind the theory involves Maxwell’s equations and hence divergence and curl. The authors suggest ways of teaching divergence and curl through flux and circulation, beginning with the macro and moving to the micro in logical ways.
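For reference, here are Maxwell’s equations in differential form (the standard statements, my addition rather than anything reproduced from the paper), which is where divergence and curl enter such a course:

```latex
% Gauss's law and the absence of magnetic monopoles (divergence):
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} ,
\qquad
\nabla \cdot \mathbf{B} = 0

% Faraday's law and the Ampere-Maxwell law (curl):
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} ,
\qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```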
 
 Thomas, C., Badger, M., Ventura-Medina, E. & Sangwin, C. (2013). Puzzle-based learning of mathematics in engineering. Engineering Education, 8(1), 122-134.
 
The authors argue for the role of puzzles in the teaching of mathematics. Puzzles are defined as “a problem that is perplexing and either has a solution requiring considerable ingenuity – perhaps a lateral thinking solution – or possibly results in an unexpected, even a counter-intuitive or apparently paradoxical solution.” (p. 122). They show how parallels can be drawn (in certain circumstances) to the better-known problem-based learning. I saw some old favourites here, such as the students-and-professors problem and the peach problem. They also cover the importance of estimation and of ill-founded problems. I found it interesting that the authors unproblematically accept that word problems are preparation for real-world problems, a point of view with which I disagree. This was a fun paper and had some interesting references to books of puzzles, specifically one by Badger, one of the authors of this paper.
 
Dray, T. & Manogue, C.A. (1999). The vector calculus gap: mathematics ≠ physics. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 9(1), 21-28.
 
Here we have another paper lamenting (justifiably) the difference in the way vector calculus is taught in maths and physics. The authors emphasise how practical applications and situational geometry are far more important in physics (or engineering) than in maths. For example, they discuss how vectors are defined as ordered triples in maths but as arrows in space in physics. Also, div and curl are defined as differential operations on vector fields in maths, but in physics are defined first in terms of their physical meaning as represented by Stokes’ Theorem and the Divergence Theorem. The coordinates used in maths are almost invariably rectangular, the authors argue, while physics situations frequently have circular or spherical symmetry and hence use spherical coordinates to simplify the maths. (Some of the paper’s criticisms could apply to my local course, but not all, I think.) The value of the mathematical methods lies in their general applicability; in physics, however, the types of cases are few, and there is an argument for sacrificing generality in favour of simplifying the common cases. The authors close with an insistence that the relevant departments collaborate closely.
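The two theorems the authors refer to, in their standard forms (my addition, not quoted from the paper); in a physics treatment these serve as the de facto definitions of div and curl:

```latex
% Divergence Theorem: flux out of a closed surface equals the
% volume integral of the divergence:
\oint_{\partial V} \mathbf{F} \cdot d\mathbf{A}
  = \int_V (\nabla \cdot \mathbf{F}) \, dV

% Stokes' Theorem: circulation around a closed curve equals the
% surface integral of the curl:
\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
  = \int_S (\nabla \times \mathbf{F}) \cdot d\mathbf{A}
```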
 
 Robertson, R.L. (2013). Early vector calculus: A path through multivariable calculus. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 23(2), 133-140.
 
The author argues for an ordering of topics in a multivariable calculus course which brings in the three big theorems as early as possible. The textbook he uses is a standard maths text, with the three big theorems coming last. He lists the topics to be covered before the Divergence Theorem can be treated, locating it (by my estimate) a bit less than halfway through the course. Thereafter he covers a few more topics and gets to Stokes’ Theorem (probably about two-thirds of the way through the course). Green’s Theorem is presented as a special case of Stokes’ Theorem. The benefits of this approach are argued convincingly and a few drawbacks are also covered (such as parametrised surfaces coming before parametrised curves). This is the second paper I have read which recommends Schey’s (2005) Div, Grad, Curl and All That: An Informal Text on Vector Calculus, so I really must track that text down. The practical interpretations of div and curl are emphasised, as in so many papers I’m reading. I found this paper intriguing and I also greatly appreciated that the author broke the course down into sufficient detail that I, or someone else, could easily structure a course as he has done.
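The reduction involved is standard (my sketch, not the paper’s notation): taking the surface in Stokes’ Theorem to be a flat region D in the xy-plane with \mathbf{F} = (P, Q, 0) recovers Green’s Theorem.

```latex
% With S = D a plane region (unit normal \hat{k}) and F = (P, Q, 0),
% the curl reduces to
% (\nabla \times \mathbf{F}) \cdot \hat{k}
%   = \partial Q / \partial x - \partial P / \partial y ,
% and Stokes' Theorem becomes Green's Theorem:
\oint_{\partial D} \left( P \, dx + Q \, dy \right)
  = \iint_D \left( \frac{\partial Q}{\partial x}
  - \frac{\partial P}{\partial y} \right) dA
```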
 
 Everett, L.J., Alexander, R.M. & Wienen, M. (1999). A grading method that promotes competency and values broadly talented students. Journal of Engineering Education, 88(4), 477-483.
 
The authors describe an assessment regime carried out in an engineering science course, designed to reward students for skills valued in engineers. The assessment regime was run three times in consecutive semesters and the authors feel that it worked well although there were a few challenges. Four assessment types were carried out.
Readiness assessment tests: These encourage being prepared for class by setting one or two questions based on a reading assignment completed before class. They seem to have been carried out twice a week, but I could see them being almost daily. It was tricky to set questions which genuinely rewarded reading and understanding. After the course, a significant correlation was found between preparation for class and success in the course.
Basic understanding tests: These were conceptual in nature, testing understanding of physical phenomena. They involved no mathematics. These were held once per week.
Major evening examinations: These were also conceptual in nature and bore the closest resemblance of all the assessments to traditional exams. The questions were tough and required engineering-type skills such as simplification. The problems were also frequently ill-founded and there could be a variety of solutions. There were 3 or 4 of these per semester.
Minimum skills tests: Again, these were once a week and were made up of simpler versions of the prior week’s homework. They were multiple choice, which means no partial credit.
 
The authors argue for these assessments as criterion-referenced, and argue against the use of norm-referenced assessment. I found their insistence on no partial credit interesting. The paper presents analysis of the results as well as comparison with what the grades would have looked like if only the major evening examinations (as closest to traditional exams) were used. Various challenges were discussed such as the trickiness of determining validity and reliability and also the students’ struggles with this new type of assessment and the expectations of them.
 
 Woods, D.R. (2000). An evidence-based strategy for problem solving. Journal of Engineering Education, 89(4), 443-459.
 
The author lists 150 published problem-solving strategies, although he complains that few are based in research. In an appendix after the (huge number of) references, he briefly gives each of those strategies. The author finds many similarities between the strategies, such as “understand the problem” and “verify your answer”. The strategies vary in number of stages, usually between two and seven. Some have mnemonics, some draw analogies. The author describes careful criteria for an effective problem-solving strategy, such as “If possible, none of the stages should include titles describing a skill or attitude … since that skill or attitude could be used in many different stages” (p. 444). He warns against encouraging a linear mindset towards the strategy by, for instance, numbering the steps or giving them in an obvious sequence. The author presents a strategy represented on a disc, rather than linearly, with six stages, each carefully described. The strategy was based on the literature, trialled on expert practitioners and refined over years of student use.
 
 Flynn, M.A., Everett, J.W. and Whittinghill, D. (2015). The impact of a living learning community on first-year engineering students. European Journal of Engineering Education (ahead of print) 1-11. DOI: 10.1080/03043797.2015.1059408. 
 
The authors define living learning communities (LLCs): “Most LLCs are communities in which students pursue their academic curriculum with a blended co-curriculum involving a theme, concept, or common subject matter while living together in a reserved part of a residence hall.” (p. 2) and “LLCs can be characterised by close working relationships among students and faculty; specialised course assignments; study groups; close relationships among student members; and specialised events, activities and workshops.” (pp. 2-3). They report on a survey carried out in an engineering living learning community (ELC). The authors argue that LLCs might be particularly beneficial to engineering students, who have to adjust not only to a new environment as entering students, but also to a heavy course load. In this ELC, the students lived together in the same residence hall, took two common courses per semester, and their classes encouraged cooperative learning. Students applied to join the ELC, which allowed the authors to compare two cohorts: the ELC students and the non-ELC students enrolled for the same courses. The students were surveyed on their perceptions of transition, student-student relationships, student-faculty relationships and levels of satisfaction with the institution. The findings show that the ELC students perceived their transition to college to be easier than the non-ELC students did. They also reported better student-student relationships and greater satisfaction with and connectedness to their institution. The two groups were about the same in their perceptions of student-faculty relationships. The authors conclude: “It is recommended that LLCs be used to foster positive student perceptions of transition to college, connectedness to the institution, peer relationships, and their overall satisfaction with the institution.” (p. 9)
 
This paper has many references to other studies on LLCs, reporting on many and varied benefits. For example “Literature suggests that peer interactions of this sort [friendships, networking, study groups] increase student involvement and participation, which in turn are positively linked to institutional retention” (p. 7) and “First-year students who are easily able to transition from high school to college are more likely to stay at the institution and graduate, positively impacting retention rates” (p. 5). I would really like to look up all of those references and see which ones are based on actual hard data. Altogether I enjoyed this paper, found the data interesting and plan to follow up on several of the references.
 
 van Dyke, F., Malloy, E.J. and Stallings, V. (2014). An activity to encourage writing in mathematics. Canadian Journal of Science, Mathematics and Technology Education, 14(4), 371-387.
 
The authors ran a very interesting study in three stages. The first stage was a short assessment of three multiple-choice questions involving the relationships between equations and their representative graphs, the first two only linear, the third linear and quadratic. The questions were quickly answered and easily graded. The second stage was giving summaries of student responses back to the class for discussion (not led by the lecturer). Directly after discussion, the students were asked to write about the questions. One group was asked to write about the underlying concepts necessary to answer the questions correctly. The second group was asked to write about why students might have given incorrect answers. These written responses were also evaluated. The third stage of the study was to ask the students to answer a survey (Likert-style, strongly agree to strongly disagree) testing hypotheses developed during the first two stages. The findings are interesting and point, yet again, to the student tendency to want to do calculations even if the question might not require them - “blind attraction to processes” (p. 379) - and also to the expectation that similar problems should have been encountered before. Interestingly, the students in the second writing group wrote more than those in the first, but did not make many references to actual underlying concepts. The authors stress that if you want students to write or talk about underlying concepts you need to make that explicit.
 
The authors present the design of this study as a way of using writing to encourage reflection without it taking a lot of time or being difficult to grade. I agree and would like to try this myself. Running effective writing assignments in a maths class can be very hard to get right. The authors make reference to cognitive conflict and how resolving a cognitive conflict can lead to cognitive growth. “It is not the intent of this article to explore the efficacy of using writing or conflict resolution in the mathematics classroom but to take that as given …” (p. 373).
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Zavala, G., & Barniol, P. (2013). Students’ understanding of dot product as a projection in no-context, work and electric flux problems. In AIP Conference Proceedings (Vol. 1513, No. 1). Melville, NY: American Institute of Physics.
 
Zavala and Barniol ran a project investigating students’ understanding of the projection role of the dot product. First they gave three isomorphic problems to a class of physics students, with approximately 140 students seeing each of the three problems. The following semester they interviewed 14 of these students, who solved the three problems (and two more) while thinking aloud. The three problems all involved the same arrangement of vectors requiring the same projection; however, one was presented without context and the other two were set in context, namely work and electric flux. The investigation found that the students had a weakly constructed concept of the projection role of the dot product. The students were more likely to answer correctly in the contextualised problems than in the no-context problem, but even at best only 39% of the students answered correctly.
 
The authors observe that a majority of the students chose one of the two responses which described scalar quantities, rather than one of the four other MCQ options which described vector quantities. However, in the no-context problem that majority is a disappointingly low 57%. Problematically, I use the term “projection” differently from Zavala and Barniol: in my lexicon the projection of a vector onto another vector is still a vector, not a scalar quantity. The projection of A onto B, for me, is the vector component of A in the direction of B. What Zavala and Barniol call projection is what we elsewhere (Craig and Cleote, 2015) have referred to as “the amount” of A in the direction of B (p. 22). So, under my definition, only one available MCQ option describes a scalar quantity (option a, a popular incorrect option). I have to assume, however, that the students participating in the study were familiar with the authors’ definition and would have read that MCQ option as describing a scalar quantity.
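To keep the two usages straight for myself, here is a minimal Python sketch; the function names are mine, not the authors’. scalar_projection computes Zavala and Barniol’s “projection” (the scalar amount of A in the direction of B), while vector_projection computes what I would call the projection (a vector quantity).

```python
import math

def dot(a, b):
    # componentwise dot product
    return sum(x * y for x, y in zip(a, b))

def scalar_projection(a, b):
    # Zavala and Barniol's "projection": the amount of a in the direction of b
    return dot(a, b) / math.sqrt(dot(b, b))

def vector_projection(a, b):
    # projection in my sense: the vector component of a along b
    k = dot(a, b) / dot(b, b)
    return [k * x for x in b]

a, b = (3.0, 4.0), (1.0, 0.0)
print(scalar_projection(a, b))   # a scalar: 3.0
print(vector_projection(a, b))   # still a vector: [3.0, 0.0]
```

The same pair of vectors thus yields a scalar under one definition and a vector under the other, which is exactly the ambiguity the MCQ options trade on.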
 
The authors cite other work reporting students’ difficulties in connecting concepts and formal representations. They see this dot product projection difficulty as part of that more general situation. “In this article we demonstrate that this failure to make connections is very serious with regard to dot product projection’s formal representation” (p. 4).
 
Not much has been written on students’ difficulties with the dot product. It is likely that the computational simplicity of the product masks the conceptual challenge of the geometric interpretation.
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Barniol, P., & Zavala, G. (2014). Test of understanding of vectors: A reliable multiple-choice vector concept test. Physical Review Special Topics-Physics Education Research, 10(1), 010121-1-010121-14.

 
Barniol and Zavala describe a really nicely designed investigative project. In the first phase they conducted several studies over a period of four years, using open-ended problems in order to develop a taxonomy of frequent errors students make when solving vector problems. At the same time, they sought references in the literature to frequent vector errors. In the second phase, they developed a test in multiple choice format, named the "test of understanding of vectors" (TUV). They administered this test to over 400 physics students and thereafter observed the categories of errors and the frequencies of errors in different classes of problems.
 
 
I really admire the TUV, the preliminary work that went into designing it and the detailed analysis of the errors made. I feel the authors left the “so what?” question up to the reader, making a few minor suggestions about other people using the test in similar ways, but not making any broad assertions about teaching or learning or cognitive concept formation. I hope Barniol and Zavala have written further on this topic, as the work laid out in this paper is admirable and provides much food for thought.
 
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
De Laet, T., & De Schutter, J. (2013). Teaching inspires research and vice versa: Case study on matrix/vector methods for 3D kinematics. In Proceedings of the 41st SEFI Conference (pp. 1-8).
 
De Laet and De Schutter are robotics researchers and lecturers of 3D kinematics and statics. They observed that students struggle with the concepts and the notation of the subject and that their struggles were related to challenges roboticists experience with non-standardised coordinate representations and related software. They developed a semantics underlying the geometric relationships in 3D kinematics and a notation designed to make relationships clearer and eliminate errors experienced while working across different coordinate representations.
 
Neither kinematics nor robotics is a speciality of mine, so I might be phrasing my summary badly. I hope I’m correctly representing the work discussed here. The authors claim that their students have benefited from the new notation, making fewer errors than before, and that roboticists have also welcomed the new notation. I particularly liked two bits of this paper. The first bit is the explicit admission that engineers and engineering students need to be aware of the different terminology and notation which can exist across even closely related disciplines – “it is important that students are aware of the lack of standardisation and the implications this might have when reading textbooks or consulting literature” (p. 2) – which relates to my concern about vector notation. The second bit is the attention the authors pay to threshold concepts, which has long been a theory I have tried to apply to vectors, with little luck so far. Reading this paper has given me some new ideas, not least that I would probably enjoy a SEFI conference!
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Barniol, P., & Zavala, G. (2016). A tutorial worksheet to help students develop the ability to interpret the dot product as a projection. Eurasia Journal of Mathematics, Science & Technology Education, 12(9), 2387-2398.
 
Following their earlier work (2013, 2014) on determining frequent errors in vector problems, the authors developed a tutorial carefully designed to address the conceptual difficulties students experience with vector projections. The tutorial is presented in an Appendix to the paper. It consists of six sections, requiring the students to determine projections geometrically as well as using the |A||B|cos(θ) definition, over a range of values of θ. The final section of the tutorial explicitly addresses the observed confusion students experience between the scalar product and vector addition. The paper closes with an open invitation to teachers to use their tutorial. I am tempted to do just that.
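To remind myself what working over a range of θ values buys the students, the sign behaviour of the scalar projection |A|cos(θ) can be sketched in a few lines. The specific angles and the magnitude |A| = 2 are my own choices for illustration, not the tutorial’s:

```python
import math

# Scalar projection of A onto B, |A| cos(theta), for |A| = 2:
# positive for acute angles, zero at 90 degrees, negative for obtuse.
for deg in (0, 60, 90, 120, 180):
    theta = math.radians(deg)
    proj = 2 * math.cos(theta)
    print(f"theta = {deg:3d} deg: projection = {proj:+.2f}")
```

Seeing the projection pass through zero and turn negative is, I suspect, where the geometric and the |A||B|cos(θ) views get tied together for the students.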
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Gray, G. L., Costanzo, F., Evans, D., Cornwell, P., Self, B., & Lane, J. L. (2005). The dynamics concept inventory assessment test: A progress report and some results. In Proceedings of the 2005 American Society for Engineering Education Annual Conference & Exposition.
 

The authors report on their progress in developing the Dynamics Concept Inventory, a MCQ format assessment of 30 questions on concepts used or needed by students in a mechanical engineering dynamics course. The process followed to achieve the final product was thorough, involving polling multiple lecturers of dynamics across several institutions, developing questions, piloting the instrument and going through various phases of refining the instrument. The DCI is available online by contacting the developers through the DCI website.
 

In this paper, the authors describe the process towards the development of the final version of the instrument and give a list of the concepts involved. They also provide much statistical evidence for the reliability and validity of the instrument. A few items on the test are pulled out for special scrutiny to illustrate clear evidence of misconceptions. The authors are clearly in favour of the test being used in pre-test/post-test format. Their website encourages this format and the DCI developers request that anyone using the test send them the raw data so that they can use the data to further verify the discriminatory power of the instrument.
 
 
It would be interesting to run the DCI on one of our cohorts of dynamics students and see if any of the results correlate with our vector assessment results.
 
 
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Miller-Young, J. (2010). How engineering students describe three-dimensional forces. In Proceedings of the Canadian Engineering Education Association.
Miller-Young, J.E. (2013). Calculations and expectations: How engineering students describe three-dimensional forces. The Canadian Journal for the Scholarship of Teaching and Learning 4(1), Article 4, 1-11.
 
I have grouped these two papers together since they are almost the same. The first is a 2010 conference paper and the second is a 2013 journal paper which includes all the 2010 work as well as some additional data. The study dug into the details of how students visualise three-dimensional statics problems when what they are presented with is a two-dimensional diagram. The data collected were students’ think-aloud protocols while answering two questions, one without context and the other in a real-world context. The 2013 paper also included data on a quiz question which was part of a standard course assignment. All three problems required that the students see the page as the given vertical coordinate plane (xy in the three problems) and the third axis (z) as extending out of the page in the positive direction. Points with a negative z-coordinate, in other words, are behind the plane of the page.
 
The students seemed to find the problems relatively difficult. The author found three main themes in student errors. (1) The students struggled to visualise points behind the plane of the page, or vectors which extended behind it. The two-dimensional drawing on the flat page had to be visualised as a three-dimensional collection of vectors, and the students found that particularly tricky for vectors extending backwards relative to their gaze. (2) The students did not always use the provided context to help them visualise the problems. One of the problems involved a pylon with guy ropes anchored to the ground, which was idealised as the flat xz-plane. All the ends of the guy ropes in this problem lay on the xz-plane and had a y-coordinate of zero, yet some students struggled to see that. (3) The students reached too quickly for equations, even when there was not enough information to answer the question that way. The tendency to calculate something using a formula is ubiquitous across maths and physics teaching and is no surprise. This final data point serves only to add to the depressing mountain of similar results.
 
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
