Kohn Rådberg, K., Lundqvist, U., Malmqvist, J., & Hagvall Svensson, O. (2020). From CDIO to challenge-based learning experiences–expanding student learning as well as societal impact?. European Journal of Engineering Education, 45(1), 22-37.

Challenge-based learning (CBL) is described as “a multidisciplinary approach that encourages students to work actively with peers, teachers and stakeholders in society to identify complex challenges, formulate relevant questions and take action for sustainable development” (p. 22). The authors cite Malmqvist et al. (2015) for the following definition: “Challenge-based learning takes places [sic] through the identification, analysis and design of a solution to a sociotechnical problem. The learning experience is typically multidisciplinary, involves different stakeholder perspectives, and aims to find a collaboratively developed solution, which is environmentally, socially and economically sustainable.” (p. 22)

The paper lists many potential benefits of CBL, such as authentic learning, multidisciplinary teamwork and addressing issues of future importance, and recognises problem-based learning (PBL) and the CDIO approach (Conceive, Design, Implement, Operate) as precursors to CBL. The authors suggest that CBL expands on CDIO in the areas of problem identification and formulation, the inclusion of business components, and consideration of societal impact, but caution that the multiple aims of CBL and its added value over PBL must not come at the cost of student learning.

The article discusses a longitudinal study carried out over three years with three student groups completing their master’s projects in the Chalmers Challenge Lab. Data were collected to determine students’ self-perceived fulfilment of learning outcomes (as well as additional learning outcomes), and to determine how far along the CDIO process (cycle?) the master’s projects went. Results show that, on the whole, students did perceive that their experiences had fulfilled the learning outcomes. Three in particular were given high scores, indicating significant learning; these related to problem formulation, sustainable development and working independently. Additional skills learned included working across disciplines and with stakeholders.

The stages of the projects were problem formulation, idea or model generation, concept development, testing/evaluating within an academic setting, and testing/evaluating by external stakeholders. 41% of students reached the second phase, 32% reached the third phase, 23% reached the fourth phase and only one reached the final phase. That being said, the progress made on the projects and the relationships built were expected to have ongoing impact nonetheless. The study concludes that the academic learning demands of a master’s degree were met by the projects within the Chalmers Challenge Lab. While there are similarities between CDIO projects and CBL experiences, the latter cannot replace the former, due to the likelihood of a CBL project not reaching the test/evaluation phase. The authors suggest that engagement in both types of learning experience would develop a more comprehensive skill set than either alone.

This was an interesting first paper to read on CBL. I shall follow up on some of the papers cited.

 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.

AlDahdouh, A., Osorio, A., & Caires, S. (2015). Understanding knowledge network, learning and connectivism. International Journal of Instructional Technology and Distance Learning, 12(10).

This paper provides an overview of the basic tenets of connectivism as a theory of learning. According to connectivism, “knowledge is a network and learning is a process of exploring this network” (p. 3). Connectivism argues that the information age brings emerging phenomena in how people learn, and that a new theory is necessary to explain them.

A network consists of nodes connected by relationships. Nodes can be (literally) neural, conceptual or external. I am interested in conceptual nodes, but I am more interested in external nodes, that is, “people, books, websites, programs and databases connected by internet, intranet or direct contact” (p. 4). The paper points out that using tech in the classroom is often ineffective because, by the time a given innovation has been incorporated into the classroom, it is already out of date. We as educators, trying to appropriate tech, cannot keep up. I am interested in how my teaching and learning environment can assist students in connecting with existing sources of knowledge without me having to know it all or be skilled in it all. I am interested in seeing whether viewing knowledge and learning through the lens of connectivism can help me create such an environment.

A relationship is a link between two nodes. Relationships can have special characteristics: they can be graded, directional or self-joining, or form patterns. Nodes themselves can be networks. The authors use the example of a school to show how complex networks and sub-networks can be. There are teachers, students and administrators. There are sub-networks such as a class and the connections within it. There are friend groups which may connect to families as well as the classroom, and so forth.
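As a concrete illustration of this vocabulary (my own sketch, not from the paper), such a network maps naturally onto a weighted directed graph. Here is a minimal example using the networkx library, where graded relationships become edge weights and directional relationships become edge directions; all nodes and weights are invented:

```python
# A minimal knowledge-network sketch (my own illustration, not the paper's).
# Graded relationship -> edge weight; directional -> edge direction.
import networkx as nx

net = nx.DiGraph()

# External nodes: people, books, websites, databases...
net.add_edge("student", "lecturer", weight=0.9)            # strong, graded link
net.add_edge("student", "textbook", weight=0.4)
net.add_edge("lecturer", "research database", weight=0.7)
net.add_edge("class forum", "class forum", weight=0.2)     # a self-joining link

# Learning as exploring the network: which sources can a student reach?
print(nx.descendants(net, "student"))  # lecturer, textbook, research database
```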

Connectivism builds, in part, on actor-network theory (ANT), in which nodes are actors: things with agency, for example humans, animals or certain machines. Connectivism puts more emphasis on technology than ANT does, and includes technology as both actor and connector. “So, according to Connectivism, technology has actors such as Artificial Intelligence (AI) agents, smart phone devices, electronic books and websites; and connectors such as social network, internet and intranet” (p. 8). “Technology makes both the connections and the flow of information more feasible” (p. 9); “The information needs a connection to reach the target and the connection needs the flow of information to stay alive. Therefore, no flow of information exists without connection and no connection remains without flow of information.” (p. 9)

Connectivism is more concerned with known knowledge and less with knowledge creation. Connectivism sees knowledge as abundant and easy to access; what is important is aggregating, discovering and exploring knowledge. Nodes can be unstable: for instance, knowledge may go out of date. The currency of a node is very important in connectivism. If knowledge is out of date, people move on and stop connecting to that node, which in turn weakens all nodes connected solely (or mostly) to that dying node.

The knowledge network is dynamic, exhibiting patterns that change. Learning is a continuous process of exploring the network and finding patterns. As a conceptual framework, connectivism has a “unique vision regarding the interaction between learners and content” (p. 16). Both the content and the learners are nodes in the network. Connectivism sees learning not as internalizing knowledge but as using and processing content and forming patterns.

It is not generally accepted that connectivism is a learning theory distinct from others, rather than just a version of, say, social constructivism. This paper includes several references to other papers, both supportive of and critical of connectivism. I found it a good introduction to connectivism.



Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Inglis, M., & Foster, C. (2018). Five decades of mathematics education research. Journal for Research in Mathematics Education, 49(4), 462-500.
http://www.foster77.co.uk/JRME2018-07-462.pdf

In this remarkable paper, the authors look at almost five decades of publications in Educational Studies in Mathematics and the Journal for Research in Mathematics Education to see whether the field of mathematics education research has changed over this period and, if so, how. They are particularly interested in looking at evidence of the “social turn” in maths education research noticed by Lerman in 2000; is this “social turn” still apparent?

In order to see what has changed, there first needs to be some way of clustering or categorizing papers, and then a way of seeing how those clusters have altered in prominence (measured how?) over the years. The authors decide to use Lakatos’s notion of scientific research programmes. Within this methodology, “the base descriptive unit of research” (p. 464) is a research programme. A programme is a connected set of theories sharing
  • a hard core (“a collection of key assumptions and beliefs”),
  • a protective belt (“a large collection of auxiliary hypotheses that supplement the hard core and can be used to protect it from being falsified”)
  • and a heuristic (“the collection of methods and problem-solving techniques [used] to make progress”) (pp. 464-465).
The paper later addresses critiques of Lakatos’s methodology and argues that its analysis avoids the weaknesses that Feyerabend (1981) criticized.

To illustrate this model of a research programme, the authors give constructivism as an example: radical constructivism and social constructivism can be seen as part of the protective belt around the hard core of constructivism, whereas sociocultural theory has a different hard core from any type of constructivism (although it shares parts of its heuristic with social constructivism) and is therefore a different research programme entirely. There are progressing programmes and degenerating programmes, distinguished by how they deal with and accommodate anomalies.

The paper’s methodology was to download every article ESM and JRME have published since their first issues, to remove words that are topic independent (such as “the” and “a”), and then to use the computational method called topic modeling to identify words that co-occur. A total of 35 topics was chosen as the best fit (the paper discusses reducing “perplexity” offset against maintaining interpretability). The analysis finally yielded 28 usable topics, the others being related to journal administration and to foreign-language articles. The authors examined each topic’s cluster of co-occurring words, as determined by the automated computational process, and thereafter assigned a descriptive label to each.
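To make the pipeline concrete, here is a minimal sketch of this kind of analysis using scikit-learn’s latent Dirichlet allocation. The stop-word removal and the choice of 35 topics follow the paper’s description; the toy corpus and all variable names are my own assumptions, and the authors may well have used different software:

```python
# A sketch of a topic-modeling pipeline in the spirit of the paper's method;
# the corpus below is a placeholder, not the actual ESM/JRME articles.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "students reasoning about proof and argumentation in geometry lessons",
    "teachers beliefs about curriculum reform and classroom practice",
    "assessment of problem solving in primary school mathematics",
]

# Remove topic-independent words ("the", "a", ...) before modeling.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(articles)

# The paper settled on 35 topics, trading perplexity against interpretability.
lda = LatentDirichletAllocation(n_components=35, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-article topic proportions

# Inspect each topic's most probable co-occurring words, then label by hand.
words = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top_words = [words[i] for i in component.argsort()[::-1][:10]]
    print(f"topic {k}: {top_words}")
```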

The 28 topics were, in alphabetical order: Addition and subtraction, Analysis, Constructivism, Curriculum (especially reform), Didactical theories, Discussions, reflections, and essays, Dynamic geometry and visualization, Equity, Euclidean geometry, Experimental designs, Formal analyses, Gender, History and obituaries, Mathematics education around the world, Multilingual learners, Novel assessment, Observations of classroom discussion, Problem solving, Proof and argumentation, Quantitative assessment of reasoning, Rational numbers, School algebra, Semiotics and embodied cognition, Sociocultural theory, Spatial reasoning, Statistics and probability, Teachers’ knowledge and beliefs, and Teaching approaches.

Once the topics were identified, the authors calculated “the mean proportion of words from each topic published by each journal in each year” (p. 478) and were then able to chart the extent to which each topic was covered across time.
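Continuing the sketch above (again, the variable names and metadata are my own assumptions), the trend calculation could look like this:

```python
# Mean topic proportions per journal per year, as the paper describes
# (p. 478). `doc_topics` comes from the previous sketch; `years` and
# `journals` are placeholder metadata aligned with `articles`.
import pandas as pd

years = [1970, 1995, 2015]
journals = ["ESM", "JRME", "JRME"]

df = pd.DataFrame(doc_topics, columns=[f"topic_{k}" for k in range(35)])
df["year"], df["journal"] = years, journals

trend = df.groupby(["journal", "year"]).mean()
print(trend["topic_0"])  # one topic's trajectory, ready to chart over time
```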

For a detailed analysis of all the trends, you should see the paper. Here I list items of particular interest to me:
  • Proof and argumentation have seen an increase in publications over recent decades; this trend is interesting given my interest in how one teaches proof to engineering students.
  • Problem solving has seen a decrease. What drew me into maths education research to begin with was problem solving, particularly Alan Schoenfeld’s work. My PhD was ultimately on problem solving, so I observe the decrease in problem solving interest rather glumly. Also with some confusion – how can problem solving ever not be interesting and complex?
  • Curriculum (especially reform) has seen a marked increase. I see the Twente Educational Model somehow fitting in under this topic.
  • Novel assessment has also seen an increase. The maths department at the University of Twente is taking digital assessment of linear algebra very seriously and I am peripherally involved with that.
  • Multilingual learners has seen a very slight increase, Equity has remained almost steady and Gender has seen a significant decrease. These trends worry me, as these are all issues I would consider of extreme importance.
  • Constructivism has seen a sharp decline while sociocultural theory is steeply increasing, supporting the hypothesis that there is a “social turn” in maths education research which is continuing to make itself felt.
My PhD was strongly rooted in constructivism and I struggled to publish that part of my work. I believe my use of Piaget’s theory of learning was strong, thorough work; it earned praise from Ed Dubinsky, one of my examiners. Yet publishing it was extremely hard. I am grateful to AJRMSTE for recognizing the worth of my work and publishing it in 2016. My point here is that the lack of publication on something does not necessarily mean that work is not happening. It might mean that journals are not accepting papers on that topic because they feel that conversation is over, that the topic is no longer sufficiently interesting. So see the data in this paper for what it is: a sign of what the journals (these two journals) are publishing, not necessarily a one-to-one representation of what researchers are doing or what they are interested in.

One trend the authors make a particular point about is the “experimental cliff” where studies involving randomized experimental designs, once popular, have fallen almost to nil despite US and UK agencies calling for this type of study and making funding available. Looking into publications in experimental psychology, the authors find this a rich area of research and publication, yet these studies are not being published in maths education journals. The authors point out the rich possibilities of information travelling both ways and encourage exposure to multiple research programmes.

The paper addresses the theoretical diversification apparent in the last 15-20 years and seems to fall on the side of Dreyfus (2006) in cautioning against the dangers of too much diversification. The authors recommend finding connections across theories and unifying them where possible.

Altogether I found this a fascinating picture of what ESM and JRME have been publishing over the last (approximately) 45-50 years. The trends in research are interesting and informative. I would love to see someone use the same methodology on a different set of journals. If journals such as the International Journal of Mathematical Education in Science and Technology (my favourite journal) were included I think we might see some different topics emerge.


Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Sisk, V.F., Burgoyne, A.P., Sun, J., Butler, J.L. and Macnamara, B.N. (2018). To what extent and under which circumstances are growth-mindsets important to academic achievement? Two meta-analyses. Psychological Science 29(4), 549-571. [The version I have is numbered 1-23 with no volume or issue number. Page numbers below will reflect this numbering.]

Mind-sets are somewhat trendy right now. Carol Dweck’s books and Jo Boaler’s book are pop culture phenomena. The paper discussed here mentions some things I was unfamiliar with, namely the push at a governmental level in the US for the development and support of interventions aimed at encouraging a growth mind-set, given a widely accepted view that growth mind-sets have a substantial positive effect on academic achievement. This paper discusses two large meta-analyses which, after careful and detailed investigation, conclude that, though there might be something there, mind-sets are overall hyped up a bit too much.

The authors begin by defining mind-sets: “Mind-sets (aka implicit theories) are beliefs about the nature of human attributes” (p. 1). The underlying premise is that people with a growth mind-set believe attributes are malleable (holding this mind-set can lead to positive outcomes) while people with a fixed mind-set believe attributes are stable (holding this mind-set can lead to negative outcomes).

Meta-analysis 1 (273 effect sizes, student population N = 365 915) addressed the research question “What is the magnitude of the relationship between mind-sets and academic achievement, and under what circumstances does the relationship strengthen or weaken?” The authors considered four potential moderating variables: academic risk status, student developmental stage, socioeconomic status and type of academic achievement measure. They also used several measures to check for publication bias. Of the 273 effect sizes, 157 were not significantly different from zero, 16 were negative and 100 were positive. Overall the effect was positive, but only slightly. Academic risk status and socioeconomic status did not affect the relationship between mind-set and academic achievement. Developmental stage had a slight effect: the relationship held for younger students but not for post-secondary students. There was also a suggestion that students with a growth mind-set choose more challenging courses when they have the opportunity. In conclusion, there is either no relationship between mind-set and academic achievement or a very slight one, mostly for children.

Meta-analysis 2 (k = 43, N = 57 155) addressed the research question “Do mind-set interventions positively impact academic achievement, and under what circumstances does the impact increase or decrease?” The authors considered six potential moderating factors: developmental stage, academic risk status, socioeconomic status, control group, and type and mode of intervention; once again they checked for publication bias. Of the 43 effect sizes, 37 were not significantly different from zero, 1 was negative and 5 were positive. Overall the effect was positive, but only just barely. The only moderating factor which had a significant effect was socioeconomic status: for students from low-SES households (for which there were 7 effect sizes in the data) “academic achievement was significantly higher for students who received growth-mind-set interventions relative to controls” (p. 17). Another significant effect was that “mind-set interventions administered via reading materials were significantly more effective than when administered via computer programs” (p. 19). A further significant effect came from administering the intervention outside regular classroom activities.
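For readers (like me) unfamiliar with the machinery, the core computation of a meta-analysis is an inverse-variance-weighted average of study effect sizes. The toy sketch below uses invented numbers and a simple fixed-effect pooling, not the authors’ actual models or data; it shows how many small, noisy effects pool into one overall estimate with a confidence interval:

```python
# Toy fixed-effect meta-analysis with invented numbers (the paper's own
# models are more sophisticated). Each study's effect size d is weighted
# by the inverse of its sampling variance.
import math

effects = [0.10, -0.02, 0.35, 0.05, 0.00]        # hypothetical effect sizes
variances = [0.004, 0.010, 0.020, 0.002, 0.008]  # hypothetical variances

weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))

# The overall effect is "significant" if the 95% CI excludes zero.
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled d = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```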

Two interesting points raised in the general discussion were (1) that not all mind-set research makes broad claims, but rather focuses on specific principles of mind-set theory, which is probably a good idea, and (2) that it is possible that “unmeasured factors are suppressing effects”. Certainly anecdotal stories in support of the mind-set effect abound, but then where is the hard evidence? The paper closes with this sobering paragraph:
However, from a practical perspective, resources might be better allocated elsewhere than mind-set interventions. Across a range of treatment types, Hattie, Biggs, and Purdie (1996) found that the meta-analytic average effect size for a typical educational intervention on academic performance is 0.57. All meta-analytic effects of mind-set interventions on academic performance were < 0.35, and most were null. The evidence suggests that the “mindset revolution” might not be the best avenue to reshape our education system. (p. 21)
 
The paper came out very recently, yet has already been cited three times (according to Google Scholar, 01/06/2018). I’ll watch this paper’s effect on the field with interest. My first takeaway from this paper is that I do not remotely have the skills to conduct a meta-analysis. I have enormous respect for the authors for carrying out this meticulous and well-described study. They even made their data open access. My second takeaway is that mind-sets are not closely correlated with academic achievement and mind-set interventions are unlikely to have much effect except in quite specific circumstances. That is quite a depressing conclusion, but it is important to know, given the amount of cherry picking and pseudoscience clustered around this theory.

Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Bollen, L., van der Meij, H., Leemkuil, H. and McKenney, S. (2015). In search of design principles for developing digital learning and performance support for a student design task. Australasian Journal of Educational Technology, 31(5), 500-520.

I found this paper by searching for papers on the Twente Educational Model (TEM, or TOM in Dutch). The paper is one of the articles in a special issue on educational design research (EDR), specifically one of the articles located in the early stages of EDR. The context of the paper is the design and use of a digital environment for second-year psychology students, within which the students could themselves design a learning environment as a project for the module they were enrolled in. That makes this paper something of a design turducken, given that the design of the environment was itself then studied as EDR.

The authors provide a substantial amount of detail on the methodology guiding their creation of the digital environment, that of Learning by Design (LBD), and also provide detail on what the environment looked like and how the students used it. I shall not summarise that here. (Can we take it as given that ethical approval was sought and granted for the quite intrusive mining and subsequent publication of student access data?) There are other characteristics of this paper that interest me more, namely design research and TOM.

Every teacher tries things in class to see if they help the students learn. What makes any such intervention stand out and be labelled “educational design research”? Does the intervention have to be built on theory whose printed publication the teacher can hold in her hand, or can it be based on years of experience and intuition? If the intervention is only tried once, does that make it not EDR, but if you go back and try again, does that iteration make it EDR? Is it only EDR if it gets published? Is it only EDR if it makes an impact on the theoretical landscape? The authors say “This [design and construction] phase involves rational, purposeful consideration of knowledge and concepts that can be used to address specific problems in practice. As potential solutions are generated and explored, the underlying theoretical and practical rationales are elaborated. This allows the design framework to be evaluated and critiqued.” (p. 500) This description suggests that any thoughtful teaching intervention can be described as EDR. The definition given in the editorial of this special issue, “EDR is an intervention and process-oriented approach that uses a variety of methods to examine the development and implementation of instructional solutions to current educational problems” (p. i), also suggests that thoughtful teaching interventions can all be defined as EDR. I struggle with this idea. I’ll need to read more, as this is not a topic with which I have much familiarity, but it seems to me that work only really starts to earn the name “design research” if there have been iterations and refinements. In which case the first instance of any intervention can only retroactively be framed as a first stage in EDR – when it first happens, it is just a teaching intervention like any other. I admit, this is not something I fully understand. I am cautious of slapping fancy labels on things which don’t deserve them, but we also shouldn’t be research snobs and fail to see the worth in what amounts to everyday research in the real classroom. Anyway, food for thought; I should come back to this.

It is 2018 and I am new to UT. The Twente Educational Model (TEM) is the flagship pedagogical model at UT, which many people are working hard to make a success. It is a forward-thinking educational model for a university trying to educate its students for an unknown, technological, global, entrepreneurial future. Part of my role here will be to see what is working and what is not, and how we can be better educators of our students in this rapidly changing world. The paper I am discussing here was published in 2015, probably about data gathered in 2014, so still very early in TEM’s existence. (September 2013 is when it was rolled out, I believe?) The authors describe TEM in terms familiar today, but then they refer to their own particular context as TEM, that is, the digital environment they designed (in Moodle) for their psychology students to use, e.g. “data logs that were recorded in TOM” (p. 509) or “accessing TOM through a VPN connection” (p. 501). Either I do not understand how the term “TOM” (or TEM) is to be used, or the way it is used has changed over the last few years; I suspect the latter. So, while the paper emerges in a literature search for papers on TEM, TEM actually features very little in the paper in its 2018 meaning. That being said, there were some intriguing snippets, for example “[r]ecently, the Board of Directors of the university gave the stimulus for an important renovation of the curriculum for the bachelor programs in all faculties. This led to a uniform roster that better facilitated students to choose from the courses offered throughout the university.” (p. 500). I am interested in the tension of providing mathematics courses which need to be the same every time they are taught, in every context, so that they can form part of this “uniform roster”, yet which can still blend into each engineering or science or medical module in order to be part of the TOM ideal of courses clustered around projects. I have not yet been here long enough to see how well this identity split is working, but I intend to find out!

I found that this paper gave me a lot to think about although not about its central topic, more about the educational design research framework and the local university context and discourse.

Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Kopcha, T.J., Schmidt, M.M. and McKenney, S. (2015). Editorial 31(5) Special issue on educational design research (EDR) in post-secondary learning environments. Australasian Journal of Educational Technology, 31(5), i-ix.

I am interested in reading about and around the Twente Educational Model (TEM/TOM). Searching for literature online, I found the first of the articles in this special issue as well as the editorial, which mentions it. While I am chiefly interested in the first article, reading the editorial has made me quite keen to read a few of the others.

The editors begin their editorial with a definition of EDR: “EDR is an intervention and process-oriented approach that uses a variety of methods to examine the development and implementation of instructional solutions to current educational problems” (p. i). They give three reasons for the need for this issue: (1) there is some justified concern about the quality of EDR research, so “transparent examples of EDR are needed to fuel the discourse about the practical and scientific value of this approach” (p. ii); (2) the nature of EDR projects, which span iterations and contexts and involve a great deal of data, can make it difficult to publish anything process-related in the traditional literature; and (3) the need for good examples has increased, particularly ones which show variation and testing across multiple iterations.

The editors impose a structure of three EDR phases into which the six papers in the special issue fit. The phases are analysis and exploration, design and construction, and evaluation and reflection. The authors of the papers situate their work within these phases, we are told. Below, I very briefly provide a few words on each of the articles, as described in the editorial:
  • 2 articles which discuss projects early in the EDR process
    • In search of design principles for developing digital learning and performance support for a student design task; second year psychology students; data-driven approach; Twente Educational Model (TOM/TEM); an outcome is three design principles which can inform further iterations
    • Re-designing university courses to support collaborative knowledge creation practices; literature-driven approach; design principles established and then integrated into carefully chosen courses.
  • 3 articles that span cycles and are located in the middle of the EDR implementation spectrum
    • Exploring college students’ online help-seeking behaviour in a flipped classroom with a web-based help-seeking tool; four design principles identified from the literature; three EDR cycles; mixed methods. The editors particularly praise this article for the clarity with which the authors connect theory and design; they recommend this article for anyone seeking to develop their own EDR manuscript.
    • Professional learning in higher education: Understanding how academics interpret student feedback and access resources to improve their teaching; three phases of design-based research on an interactive online environment providing resources and support to faculty based on student evaluations of teaching; concludes with a return to the theory on which the tool was built.
    • R-NEST: Design-based research for technology enhanced reflective practice in initial teacher education; digital storytelling to enhance reflective practice; 6 years and three cycles; scales from prototype to full-scale implementation.
  • 1 article that is to the mature end of the EDR spectrum
    • Conjecture mapping to optimize the educational design research process; “establishes a set of design principles from data collected across multiple iterations and across multiple contexts” (p. v). Multiple phases, two contexts, 5 years; adult learners transitioning to online learning; a conjecture map [sounds intriguing]; not only development of design principles, but example of how comparisons across contexts and iterations can enable analytic generalisation and inform theory.

The special issue ends with a closing article which looks at all six contributions and finds commonalities as well as unique contributions. Its author argues for EDR researchers to adopt a shared, long-term agenda in order to achieve significant impact.

I have a PhD student engaged in EDR. I shall manage the temptation to be distracted from my TEM/TOM focus by recommending this special issue to her attention (if she has not already found it; she probably has).

Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Rasmussen, C., Wawro, M. and Zandieh, M. (2015). Examining individual and collective level mathematical progress. Educational Studies in Mathematics, 88, 259-281.

The authors refer to Cobb and Yackel’s (1996) “emergent perspective and accompanying interpretive framework” (p. 259) as an early effort at integrating different theoretical perspectives to understand teaching and learning. The two theoretical perspectives referred to are constructivism (individual cognitive) and symbolic interactionism (sociocultural), citing von Glasersfeld and Blumer respectively. That framework already included a social perspective and an individual perspective, where classroom mathematical practices (social) were paired with mathematical conceptions and activity (individual). The authors split each of those two into two, resulting in four perspectives, two social and two individual.
  • Classroom social practices (social) become
    • Disciplinary practices and
    • Classroom mathematical practices
  • Mathematical conceptions and activity (individual) becomes
    • Participation in mathematical activity and
    • Mathematical conceptions

The questions which can be posed for these four constructs are (p. 261):
  • Disciplinary practices: What is the mathematical progress of the classroom community in terms of the disciplinary practices of mathematics?
  • Classroom mathematical practices: What are the normative ways of reasoning that emerge in a particular classroom?
  • Participation in mathematical activity: How do individual students contribute to mathematical progress that occurs across small group and whole class settings?
  • Mathematical conceptions: What conceptions do individual students bring to bear in their mathematical work?

The focus of analysis was video recordings of small group and whole class discussions in a linear algebra class of students in their second or third year of university in engineering or science degrees. In the paper there is particular focus on the concept of linear dependence. One group of five students was a primary focus group. For each of the four constructs, different modes of analysis were used. An argument of the paper is that applying different modes of analysis to the same body of data through these four lenses (for example) can lead to a rich, synthesized analysis.
The modes of analysis for the four constructs were:
  • Disciplinary practice: Recognise that the disciplinary practices of professional mathematicians take the form of defining, algorithmatizing, symbolizing and theoremizing [sic]. Employ a grounded approach in the analysis of student data to see whether categories of student activity relate to these professional categories. (When I return to doing stuff on proof I should follow this “theoremizing” strand to see where it leads.)
  • Classroom mathematical practices: Toulmin’s (1958) model of argumentation: the data, the claim, the warrant. I saw Toulmin’s work being cited quite a bit when I was reading about proof. Because of that I ended up citing it myself in my 2017 SASEE paper. Seeing in detail how this paper used Toulmin’s model will be a key reason to come back and read this paper again. [Toulmin: The uses of argument]
  • Participation in mathematical activity: Two frameworks of Krummheuer (2007, 2011) are used to identify roles in a conversation. The production design construct of mathematical conversation identifies authors, relayers, ghostees and spokesmen; the recipient design construct identifies conversation partners, co-hearers, over-hearers and eavesdroppers.
  • Mathematical conceptions: The topic being focused on was linear dependence; Wawro and Plaxco (2013) describe four ways in which students conceptualise this topic, and that framework is drawn upon here: travel, geometric, vector algebraic, matrix algebraic.
The paper then presents small group and whole class discussions on an exercise related to linear dependence, specifically the fact that a set of n vectors in a space of dimension m will be linearly dependent if n > m. The paper goes into a great deal of interesting analytic detail on all four constructs using the four different modes of analysis. That analysis takes 11 pages to discuss; better to go and read it all there.
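As an aside, the fact underlying the exercise is easy to check computationally; here is a tiny sketch of my own (not from the paper) using numpy’s matrix rank:

```python
# n = 4 vectors in R^3 (m = 3): since rank can never exceed m,
# any such set with n > m must be linearly dependent.
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.random((4, 3))  # four random vectors in 3-dimensional space

rank = np.linalg.matrix_rank(vectors)
print(rank)                   # at most 3
print(rank < len(vectors))    # True: some vector depends on the others
```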

The paper draws the analysis together by suggesting that analyzing the same greater body of data through four different modes of analysis, related to four different constructs of (individual and social) learning, at the very least provides nuance by looking at one thing in four ways. However, the four constructs are interrelated, and analysis through one lens can augment analysis through another. One can look for correlations (“are different participation patterns correlated with different mathematical growth trajectories?”) or consistency (“In what ways are particular classroom mathematical practices consistent … with various disciplinary practices?”) and so forth (pp. 278-279). They cite Prediger et al. (2008) as calling for connecting and coordinating theoretical approaches to gain “explanatory, descriptive, or prescriptive power” (p. 278) and offer their framework and associated modes of analysis as an answer to this call. The paper frames itself as a first step in trying out this approach of coordinating analyses and anticipates further work.

I found this paper quite hard to read the first time; it is very dense, with a lot of fine-grained theory and data analysis. Upon multiple rereads it all became clear, and I can see the power in this framework, where quite different analyses are carried out on the same body of data from quite different points of view, yet are tied together through the individual cognitive and sociocultural views of learning. In order for the data to be suitable for this kind of analysis it would have to involve a great deal of social interaction, such as these videotaped classroom and small group discussions. That makes me think about how I could (with my data, which is less likely to be of that form) create a similar framework of interrelated constructs analysed through different lenses, resulting in a coordinated and nuanced piece of research work.

This is one of those papers you have to come back to multiple times to get full value.

Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Stewart, S. & Thomas, M.O.J. (2009). A framework for mathematical thinking: the case of linear algebra. International Journal of Mathematical Education in Science and Technology, 40(7), 951-961.
 
This paper presents a framework combining two educational theories, both to assist teaching and to understand student learning. One theory is Dubinsky’s APOS (action – process – object – schema) theory and the other is Tall’s three worlds of mathematics (embodied – symbolic – formal). The authors present an array in which the segments of the two theories are set orthogonal to one another and, for illustration, two concepts are broken down across the array: adding two vectors, and multiplying a vector by a scalar.
 
The students whose work is analysed in the paper volunteered to participate in the study and attended supplemental tutorial classes. The topic of interest was linear algebra, specifically certain subtopics such as scalar multiplication, basis, linear independence, eigenvectors and so forth. The authors suggest that, as a teacher, one can use the array formed by the two theoretical categorisations to represent and explain a concept in a variety of ways, hopefully leading to deeper understanding. The students in the study completed six exercises designed to investigate their conceptual understanding (rather than procedural fluency). Overall, student responses revealed a primarily symbolic understanding of the concepts (at both action and process levels). Embodied understanding (Tall) was not particularly evident in the data, nor was understanding of the concept as object (Dubinsky). The authors point out the apparent contradiction with the assumption that the worlds in Tall’s theory are hierarchical: if they were, one would expect an embodied understanding to precede symbolic, which is not evident in the data. They suggest that, distinct from Tall’s ideal model, in the real world the instructor may teach entirely symbolically and leave embodied understanding to be constructed by the student without instruction. The authors close by suggesting that instructors teach concepts from an embodied point of view as well as symbolically, in order to enhance students’ understanding and enrich their representations.
 
In my studies of the historical development of vector analysis and its associated notation, it is interesting that vectors were seen from an embodied and object point of view long before they were well formed in symbolic notation or able to be manipulated in formal modes. The question of how to symbolically represent directed line segments in such a way that they could be added or scaled or multiplied was a sticky problem that occupied some great mathematical minds. The fact that, today, students can make the error discussed on page 955, that of scaling a vector incorrectly by misuse of the component form of the scalar product would be extremely strange to the early developers of vector analysis. It is almost as if the mathematical world has moved from embodied to symbolic to formal and the novice students of today are stranded at the end of a road they have not themselves travelled. Embodied did indeed precede symbolic, just not in the person of the individual student but in the historical development of the concept itself.
 
One final point of personal interest: I enjoyed the parallels between Tall’s three worlds and Piaget’s models of abstraction. The embodied world is kin to Piaget’s empirical abstraction and the other two worlds are kin to Piaget’s reflective abstraction, something which he in the original French broke down into different types of reflection (see von Glasersfeld, 1991).
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.

 Miller-Young, J. (2010). How engineering students describe three-dimensional forces. In Proceedings of the Canadian Engineering Education Association.
Miller-Young, J.E. (2013). Calculations and expectations: How engineering students describe three-dimensional forces. The Canadian Journal for the Scholarship of Teaching and Learning 4(1), Article 4, 1-11.
 
I have grouped these two papers together since they are almost the same. The first is a 2010 conference paper and the second is a 2013 journal paper which includes all the 2010 work as well as a bit more data. The study was interested in digging into the details of how students visualise three-dimensional statics problems when what they are presented with is a 2D diagram. The data collected were students’ think-aloud processes while answering two questions, one without context and the other in a real-world context. The 2013 paper also included data on a quiz question which was part of a standard course assignment. All three problems required that the students see the page as the given vertical coordinate plane (xy in the three problems) and the third axis (z) as extending out of the page in the positive direction. Points with a negative z-coordinate, in other words, are behind the plane of the page.
 
The students seemed to find the problems relatively difficult. The author found three main themes in student errors. (1) The students struggled to visualise points behind the plane of the page, or vectors which extended behind the plane of the page. The two-dimensional drawing on the flat page had to be visualised as a three-dimensional collection of vectors, and the students found that particularly tricky for vectors extending backwards relative to their gaze. (2) The students did not always use the provided context to help them visualise the problems. One of the problems involved a pylon with guy ropes attached to the ground, which was idealised as the flat xz-plane. All the ends of the guy ropes in this problem lay on the xz-plane and had a y-coordinate of zero, yet some students struggled to see that. (3) The students reached too quickly for equations to try and answer questions, even when there was not enough information to answer the question that way. The tendency to calculate something using a formula is ubiquitous across all maths and physics teaching and is no surprise. This final data point serves only to add to the depressing mountain of similar results.
 
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Gray, G. L., Costanzo, F., Evans, D., Cornwell, P., Self, B., & Lane, J. L. (2005). The dynamics concept inventory assessment test: A progress report and some results. In Proceedings of the 2005 American Society for Engineering Education Annual Conference & Exposition.
 

The authors report on their progress in developing the Dynamics Concept Inventory (DCI), an MCQ-format assessment of 30 questions on concepts used or needed by students in a mechanical engineering dynamics course. The process followed to achieve the final product was thorough, involving polling multiple lecturers of dynamics across several institutions, developing questions, piloting the instrument and going through various phases of refinement. The DCI is available online by contacting the developers through the DCI website.
 

In this paper, the authors describe the process towards the development of the final version of the instrument and give a list of the concepts involved. They also provide much statistical evidence for the reliability and validity of the instrument. A few items on the test are pulled out for special scrutiny to illustrate clear evidence of misconceptions. The authors are clearly in favour of the test being used in pre-test/post-test format. Their website encourages this format and the DCI developers request that anyone using the test send them the raw data so that they can use the data to further verify the discriminatory power of the instrument.
 
 
It would be interesting to run the DCI on one of our cohorts of dynamics students and see if any of the results correlate with our vector assessment results.
 
 
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Barniol, P., & Zavala, G. (2016). A tutorial worksheet to help students develop the ability to interpret the dot product as a projection. Eurasia Journal of Mathematics, Science & Technology Education, 12(9), 2387-2398.
 
Following their earlier work (2013, 2014) on determining frequent errors in vector problems, the authors developed a tutorial carefully designed to address the conceptual difficulties students experience with vector projections. The tutorial is presented in an appendix to the paper. It consists of six sections, requiring the students to determine projections geometrically as well as by using the |A||B|cos(θ) definition, across a range of θ values. The final section of the tutorial explicitly addresses the observed confusion students experience between the scalar product and vector addition. The paper closes with an open invitation to teachers to use the tutorial. I am tempted to do just that.
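For my own reference, here is a tiny numerical sketch (mine, not the tutorial’s) of the two routes to the same dot product, together with the scalar projection the worksheet is driving at:

```python
# The component form of the dot product versus the geometric
# |A||B|cos(theta) form, plus the scalar projection of A onto B.
# Vectors are invented for illustration.
import numpy as np

A = np.array([3.0, 4.0])
B = np.array([1.0, 0.0])

component_form = A @ B  # sum of A_i * B_i = 3.0

# Angle between A and B, computed independently of the dot product:
theta = np.arctan2(A[1], A[0]) - np.arctan2(B[1], B[0])
geometric_form = np.linalg.norm(A) * np.linalg.norm(B) * np.cos(theta)  # 3.0

scalar_projection = component_form / np.linalg.norm(B)  # "amount" of A along B
print(component_form, geometric_form, scalar_projection)
```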
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 De Laet, T., & De Schutter, J. (2013). Teaching inspires research and vice versa: case study on matrix/vector methods for 3D kinematics. In Proceedings of the 41st SEFI Conference (pp. 1-8).
 
De Laet and De Schutter are robotics researchers and lecturers of 3D kinematics and statics. They observed that students struggle with the concepts and the notation of the subject and that their struggles were related to challenges roboticists experience with non-standardised coordinate representations and related software. They developed a semantics underlying the geometric relationships in 3D kinematics and a notation designed to make relationships clearer and eliminate errors experienced while working across different coordinate representations.
 
Neither kinematics nor robotics is a speciality of mine, so I might be phrasing my summary badly. I hope I’m correctly representing the work discussed here. The authors claim that their students have benefited from the new notation, making fewer errors than before, and that roboticists have also welcomed the new notation. I particularly liked two bits of this paper. The first bit is the explicit admission that engineers and engineering students need to be aware of the different terminology and notation which can exist across even closely related disciplines – “it is important that students are aware of the lack of standardisation and the implications this might have when reading textbooks or consulting literature” (p. 2) – which relates to my concern about vector notation. The second bit is the attention the authors pay to threshold concepts, which has long been a theory I have tried to apply to vectors, with little luck so far. Reading this paper has given me some new ideas, not least that I would probably enjoy a SEFI conference!
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Barniol, P., & Zavala, G. (2014). Test of understanding of vectors: A reliable multiple-choice vector concept test. Physical Review Special Topics-Physics Education Research, 10(1), 010121-1-010121-14.

 
Barniol and Zavala describe a really nicely designed investigative project. In the first phase they conducted several studies over a period of four years, using open-ended problems in order to develop a taxonomy of frequent errors students make when solving vector problems. At the same time, they sought references in the literature to frequent vector errors. In the second phase, they developed a test in multiple choice format, named the "test of understanding of vectors" (TUV). They administered this test to over 400 physics students and thereafter observed the categories of errors and the frequencies of errors in different classes of problems.
 
 
I really admire the TUV, the preliminary work that went into designing it and the detailed analysis of the errors made. I feel the authors left the “so what?” question up to the reader, making a few minor suggestions about other people using the test in similar ways, but not making any broad assertions about teaching or learning or cognitive concept formation. I hope Barniol and Zavala have written further on this topic, as the work laid out in this paper is admirable and provides much food for thought.
 
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Zavala, G., & Barniol, P. (2013). Students’ understanding of dot product as a projection in no-context, work and electric flux problems. In AIP Conference Proceedings (Vol. 1513, No. 1). American Institute of Physics, Melville, NY, United States.
 
Zavala and Barniol ran a project investigating students’ understanding of the projection role of the dot product. First they gave three isomorphic problems to a class of physics students, with approximately 140 students seeing each of the three problems. The following semester they interviewed 14 of these students, who solved the three problems (and two more) while thinking aloud. The three problems all involved the same arrangement of vectors requiring the same projection, however one was no-context and the others were in context, namely work and electric flux. The investigation found that the students had a weakly constructed concept of the projection role of the dot product. The students were more likely to answer the question correctly in the contextualised problems than in the no-context problem, however even at best only 39% of the students answered correctly.
 
The authors observe that a majority of the students chose one of the two responses which described scalar quantities, rather than the four other MCQ options which described vector quantities. However, in the no-context problem that majority is a disappointingly low 57%. Problematically, I would use the term “projection” differently to Zavala and Barniol: for me, the projection of a vector onto another vector is still a vector, not a scalar quantity. The projection of A onto B in my lexicon is the vector component of A in the direction of B. Zavala and Barniol mean by projection what we elsewhere (Craig and Cleote, 2015) have referred to as “the amount” of A in the direction of B (p. 22). So, given my definition, there is only one available MCQ option which describes a scalar quantity (option a, a popular incorrect option). I have to assume that the students participating in the study were familiar with the authors’ definition, however, and would have seen that MCQ option as describing a scalar quantity.
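To pin the distinction down concretely, a small sketch of my own (not from the paper): the scalar projection, the “amount” of A in the direction of B, is a number, whereas what I would call the projection of A onto B is itself a vector.

```python
# Scalar projection (a number) versus vector projection (a vector);
# vectors invented for illustration.
import numpy as np

A = np.array([2.0, 2.0, 1.0])
B = np.array([0.0, 3.0, 0.0])

B_hat = B / np.linalg.norm(B)      # unit vector along B
scalar_proj = A @ B_hat            # the "amount" of A along B: 2.0
vector_proj = scalar_proj * B_hat  # the vector component of A along B: [0. 2. 0.]
print(scalar_proj, vector_proj)
```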
 
The authors cite other work reporting students’ difficulties in connecting concepts and formal representations. They see this dot product projection difficulty as part of that more general situation. “In this article we demonstrate that this failure to make connections is very serious with regard to dot product projection’s formal representation” (p. 4).
 
Not much has been written on students’ difficulties with the dot product. It is likely that the computational simplicity of the product masks the conceptual challenge of the geometric interpretation.
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 van Dyke, F., Malloy, E.J. and Stallings, V. (2014). An activity to encourage writing in mathematics. Canadian Journal of Science, Mathematics and Technology Education, 14(4), 371-387.
 
The authors ran a very interesting study in three stages. The first stage was a short assessment of three MC questions involving the relationships between equations and their representative graphs, the first two only linear, the third linear and quadratic. The questions were quickly answered and easily graded. The second stage was giving summaries of student responses back to the class for discussion (not led by the lecturer). Directly after discussion the students were asked to write about the questions: one group was asked to write about the underlying concepts necessary to answer the questions correctly, while the second group was asked to write about why students might have given incorrect answers. These written responses were also evaluated. The third stage was a survey (strongly agree/disagree Likert-style) testing hypotheses developed during the first two stages. The findings are interesting and point, yet again, to the student tendency to want to do calculations even if the question might not require them - “blind attraction to processes” (p. 379) - and also to the expectation that similar problems should have been encountered before. Interestingly, the students in the second writing group wrote more than those in the first, but did not make many references to actual underlying concepts. The authors stress that if you want students to write or talk about underlying concepts you need to make that explicit.
 
The authors present the design of this study as a way of using writing to encourage reflection without it taking a lot of time or being difficult to grade. I agree and would like to try this myself. Running effective writing assignments in a maths class can be very hard to get right. The authors make reference to cognitive conflict and how resolving a cognitive conflict can lead to cognitive growth. “It is not the intent of this article to explore the efficacy of using writing or conflict resolution in the mathematics classroom but to take that as given …” (p. 373).
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Flynn, M.A., Everett, J.W. and Whittinghill, D. (2015). The impact of a living learning community on first-year engineering students. European Journal of Engineering Education (ahead of print) 1-11. DOI: 10.1080/03043797.2015.1059408. 
 
The authors define living learning communities (LLCs) as follows: “Most LLCs are communities in which students pursue their academic curriculum with a blended co-curriculum involving a theme, concept, or common subject matter while living together in a reserved part of a residence hall” (p. 2) and “LLCs can be characterised by close working relationships among students and faculty; specialised course assignments; study groups; close relationships among student members; and specialised events, activities and workshops” (pp. 2-3). They report on a survey carried out in an engineering living learning community (ELC). The authors argue that LLCs might be particularly beneficial to engineering students, who must adjust not only to a new environment as entering students but also to a heavy course load. In this ELC, the students lived together in the same residence hall, took two common courses per semester, and their classes encouraged cooperative learning. Students applied to join the ELC, which allowed the authors to compare two cohorts – the ELC students and the non-ELC students enrolled in the same courses. The students were surveyed on their perceptions of transition, student-student relationships, student-faculty relationships and levels of satisfaction with the institution. The findings show that the ELC students perceived their transition to college to be easier than the non-ELC students did. They also reported better student-student relationships and greater satisfaction with and connectedness to their institution. The two groups were about the same in their perceptions of student-faculty relationships. The authors conclude: “It is recommended that LLCs be used to foster positive student perceptions of transition to college, connectedness to the institution, peer relationships, and their overall satisfaction with the institution” (p. 9).
 
This paper has many references to other studies on LLCs, reporting on many and varied benefits. For example “Literature suggests that peer interactions of this sort [friendships, networking, study groups] increase student involvement and participation, which in turn are positively linked to institutional retention” (p. 7) and “First-year students who are easily able to transition from high school to college are more likely to stay at the institution and graduate, positively impacting retention rates” (p. 5). I would really like to look up all of those references and see which ones are based on actual hard data. Altogether I enjoyed this paper, found the data interesting and plan to follow up on several of the references.
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Woods, D.R. (2000). An evidence-based strategy for problem solving. Journal of Engineering Education, 89(4), 443-459.
 
The author lists 150 published problem-solving strategies, although he complains that few are based in research. In an appendix after the (huge number of) references, he briefly gives each of those strategies. The author finds many similarities between the strategies, such as “understand the problem” and “verify your answer”. The strategies vary in number of stages, usually between two and seven. Some have mnemonics; some draw analogies. The author describes careful criteria for an effective problem-solving strategy, such as “If possible, none of the stages should include titles describing a skill or attitude … since that skill or attitude could be used in many different stages” (p. 444). He warns against encouraging a linear mindset towards the strategy by, for instance, numbering the steps or giving them in an obvious sequence. The author presents a strategy represented on a disc, rather than linearly, with six stages, each carefully described. The strategy was based on the literature, trialled on expert practitioners and refined over years of student use.
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Everett, L.J., Alexander, R.M. & Wienen, M. (1999). A grading method that promotes competency and values broadly talented students. Journal of Engineering Education, 88(4), 477-483.
 
The authors describe an assessment regime carried out in an engineering science course, designed to reward students for skills valued in engineers. The assessment regime was run three times in consecutive semesters and the authors feel that it worked well although there were a few challenges. Four assessment types were carried out.
Readiness assessment tests: These encourage preparation for class by setting one or two questions based on an assigned reading before class. They seem to have been carried out twice a week, but I could see them being almost daily. It was tricky to set questions which genuinely rewarded reading and understanding. After the course a significant correlation was found between preparation for class and success in the course.
Basic understanding tests: These were conceptual in nature, testing understanding of physical phenomena. They involved no mathematics. These were held once per week.
Major evening examinations: These were also conceptual in nature and bore the closest resemblance of all the assessments to traditional exams. The questions were tough and required engineering-type skills such as simplification. The problems were also frequently ill-defined, admitting a variety of solutions. There were 3 or 4 of these per semester.
Minimum skills tests: Again, these were held once a week and were made up of simpler versions of the prior week’s homework. They were multiple choice, which meant no partial credit.
 
The authors present these assessments as criterion-referenced and argue against the use of norm-referenced assessment. I found their insistence on no partial credit interesting. The paper presents analysis of the results, as well as a comparison with what the grades would have looked like if only the major evening examinations (as the closest to traditional exams) had been used. Various challenges are discussed, such as the difficulty of establishing validity and reliability, and the students’ struggles with this new type of assessment and with what was expected of them.
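To pin down the criterion- versus norm-referenced distinction for myself, here is a toy sketch of the general idea only – the thresholds and the curve below are invented for illustration and are not the authors’ scheme:

# Toy contrast between criterion- and norm-referenced grading.
# Purely illustrative: thresholds and curve are my own invention,
# not taken from Everett, Alexander & Wienen.

from statistics import mean, stdev

def criterion_grade(score):
    """Grade against fixed competency thresholds, independent of peers."""
    if score >= 80:
        return "A"
    if score >= 65:
        return "B"
    if score >= 50:
        return "C"
    return "F"

def norm_grade(score, cohort):
    """Grade relative to the cohort: a crude curve on z-scores."""
    z = (score - mean(cohort)) / stdev(cohort)
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "F"

cohort = [40.0, 45.0, 50.0, 55.0, 60.0]
print(criterion_grade(55.0))     # C: judged against the fixed standard
print(norm_grade(55.0, cohort))  # B: judged against a weak cohort

The same 55% earns a C against the standard but a B against a weak class, which is exactly the drift the authors’ criterion-referenced stance is meant to rule out.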
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
 Robertson, R.L. (2013). Early vector calculus: A path through multivariable calculus. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 23(2), 133-140.
 
The author argues for an ordering of topics in a multivariable calculus course which brings in the three big theorems as early as possible. The textbook he uses is a standard maths text, with the three big theorems coming last. He lists the topics that must be covered before the Divergence Theorem can be introduced, locating it (by my estimate) a bit less than halfway through the course. Thereafter he covers a few more topics and gets to Stokes’ Theorem (probably about 2/3 of the way through the course). Green’s Theorem is presented as a special case of Stokes’ Theorem. The benefits of this approach are argued convincingly and a few drawbacks are also covered (such as parametrised surfaces coming before parametrised curves). This is the second paper I have read which recommends Schey’s (2005) Div, Grad, Curl and All That: An Informal Text on Vector Calculus, so I really must track that text down. The practical interpretations of div and curl are emphasised, as in so many papers I’m reading. I found this paper intriguing and I also greatly appreciated that the author broke the course down into sufficient detail that I, or someone else, could easily structure a course as he has done.
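For reference (my addition; these are the standard results, not anything specific to the paper), the three big theorems, with Green’s Theorem as the planar special case of Stokes’ Theorem:

\[ \text{Divergence Theorem:}\quad \iiint_{V} (\nabla\cdot\mathbf{F})\,dV \;=\; \iint_{\partial V} \mathbf{F}\cdot d\mathbf{S} \]

\[ \text{Stokes' Theorem:}\quad \iint_{S} (\nabla\times\mathbf{F})\cdot d\mathbf{S} \;=\; \oint_{\partial S} \mathbf{F}\cdot d\mathbf{r} \]

Taking \( \mathbf{F} = (P, Q, 0) \) with \( S \) flat in the \( xy \)-plane, Stokes’ Theorem reduces to Green’s Theorem:

\[ \oint_{\partial S} P\,dx + Q\,dy \;=\; \iint_{S} \Bigl(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\Bigr)\,dA \]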
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
Dray, T. & Manogue, C.A. (1999). The vector calculus gap: mathematics ≠ physics. PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies, 9(1), 21-28.
 
Here we have another paper lamenting (justifiably) the difference in the way vector calculus is taught in maths and in physics. The authors emphasise how practical applications and situational geometry are far more important in physics (or engineering) than in maths. For example, they discuss how vectors are defined as ordered triples in maths but as arrows in space in physics. Similarly, div and curl are defined as differential operations on vector fields in maths, whereas in physics they are defined first in terms of their physical meaning, as represented by Stokes’ Theorem and the Divergence Theorem. The coordinates used in maths are almost invariably rectangular, the authors argue, while physics situations frequently have circular or spherical symmetry and hence use cylindrical or spherical coordinates to simplify the maths. (Some of the paper’s criticisms could apply to my local course, but not all, I think.) The value of the mathematical methods lies in their general applicability; in physics, however, the types of cases are few, and there is an argument for giving up generality in favour of simplifying the common cases. The authors close with an insistence that the relevant departments collaborate closely.
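To make the contrast concrete (my own summary of the standard definitions, not quotations from the paper): the maths-style definitions are differential formulas in rectangular coordinates, while the physics-style definitions are coordinate-free limits of flux and circulation – precisely the content of the integral theorems.

\[ \nabla\cdot\mathbf{F} \;=\; \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z} \qquad\text{vs.}\qquad \nabla\cdot\mathbf{F}\,(p) \;=\; \lim_{V \to p} \frac{1}{\operatorname{vol}(V)} \iint_{\partial V} \mathbf{F}\cdot d\mathbf{S} \]

\[ (\nabla\times\mathbf{F})\cdot\hat{\mathbf{n}}\,(p) \;=\; \lim_{A \to p} \frac{1}{\operatorname{area}(A)} \oint_{\partial A} \mathbf{F}\cdot d\mathbf{r} \]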
 
Do not treat this blog entry as a replacement for reading the paper. This blog post represents the understanding and opinions of Torquetum only and could contain errors, misunderstandings or subjective views.
