Tuesday, September 8, 2015

Lessons from the Modeling Lab

The three Nersessian articles for this week provide several examples of modeling as a scientific practice. They offer case studies of laboratory engineers – often with no background in the particular branch of science they are modeling – who, through modeling, ultimately produce novel scientific concepts.

Nersessian et al.’s work sheds light on the various ways these researchers use modeling in their practices and the ways modeling distributes cognition across the laboratory’s system. In “Building Simulations from the Ground Up,” MacLeod and Nersessian discuss how researchers begin to build models of systems that lack articulated theoretical frameworks. These researchers instead piece together relevant bits of information about the system and assemble them in a “nest-like fashion” to create a coherent initial model that can be refined later (540). In the “Coupling Simulation and Experiment” article, MacLeod and Nersessian follow researchers who take a bimodal approach, modeling and experimenting simultaneously (unlike those modelers whose experimental data comes from outside collaborators). They then describe how these researchers iteratively tune their models toward their experimental results, in a process not unlike Pickering’s dance of agency. Finally, in the “Building Cognition” paper, Chandrasekharan and Nersessian explore how the model representation becomes coupled to the modeler’s imagination through the construction process, and in turn how this coupling cultivates the modeler’s cognitive powers and enables discovery.
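To make that tuning idea a little more concrete, here is a minimal sketch of an iterative tuning loop in Python. It is my own toy illustration rather than the researchers’ actual workflow: the “experimental” measurements, the one-parameter decay model, and the search strategy are all invented for the example.

```python
# Toy illustration (not the researchers' actual workflow) of iteratively
# tuning one model parameter toward experimental measurements.
import math

# Hypothetical experimental data: a concentration measured at several times.
times = [0.0, 1.0, 2.0, 3.0, 4.0]
observed = [1.00, 0.62, 0.37, 0.22, 0.14]

def simulate(rate, t):
    """Simple one-parameter model: exponential decay of a concentration."""
    return math.exp(-rate * t)

def error(rate):
    """Sum of squared differences between simulation and experiment."""
    return sum((simulate(rate, t) - y) ** 2 for t, y in zip(times, observed))

# Iterative tuning: nudge the parameter in whichever direction reduces the error.
rate, step = 0.1, 0.05
for _ in range(200):
    rate = min([rate - step, rate, rate + step], key=error)
    step *= 0.95  # shrink the step as the fit improves

print(f"tuned rate = {rate:.3f}, error = {error(rate):.4f}")
```

The point is only the shape of the loop: simulate, compare against the data, nudge the model, and repeat.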

I found the “Building Cognition” piece to be the most interesting, because it was motivated by a desire to understand how novices could make elusive scientific discoveries while working with digital media, such as FoldIt. Chandrasekharan and Nersessian attempt to answer this question by drawing a parallel to modelers without knowledge of the biological pathway they have been tasked to model, and by following their discovery process. I’m not sure that parallel is fully justified; after all, these researchers are expert modelers, even though they’re not expert biologists. So, when the authors describe the cognitive powers that modeling activities helped develop, I couldn’t help but wonder how much experience was actually needed to develop those powers. I suppose it’s the same question we had while reading the Wilensky pieces, though mildly adapted: what are we missing that would be needed to make students this successful with modeling?

“Building Simulations from the Ground Up” resonates with Wilensky’s “Thinking Like a Wolf, a Sheep, or a Firefly.” Students, like the engineers, started modeling their systems without an underlying theory (beyond their own embodied observations) and later refined their models by synthesizing information from relevant literature. I did not feel like the bimodal article had a lot of overlap with the other pieces we’ve read so far, in that we haven’t read an account of students who created a model of a system and experimented on that system in tandem. However, it is reminiscent of Ashlyn’s anecdote of asking her students to observe ant behavior before they modeled the ant activity computationally; I also think of my time in an undergraduate physics lab where we had to infer a model of a circuit hidden inside a black box by performing experiments on the circuit and taking measurements.

However, I don’t remember doing much of that in my K-12 education. In high school, my labs were mostly procedural, with instructions about what information to graph and a handful of questions at the end that required declarative answers. I wonder what (almost) simultaneous modeling and experimentation would look like in the classroom. I imagine that it would need a lot of scaffolding activities to foster the development of the cognitive powers outlined in the “Building Cognition” piece.

Monday, September 7, 2015

Week 3 - Nersessian x3

In both articles by Nersessian and MacLeod, the authors detail the processes and advantages of modeling in integrative systems biology (ISB). In their article “Coupling Simulation and Experiment,” Nersessian and MacLeod explore the practice of pairing simulation/modeling with experimentation. This process reminded me of the “dance of agency” that Pickering described, especially as detailed in Figure 5. The researcher, C9 in this case, was continually exploring new ideas with her model and testing them through experimentation. She said, “I like the idea that I’m building my model things are popping up in my head oh wow this would be a good experiment. I plan out the experiment myself and then go into the lab and I do it.” The coupling of these two practices can also reduce the complexity of the entire process of investigation. There is less room for error in communication and no lag time with data and ideas being shared between specialists. Again, C9 said, “I personally think [my approach] is better only because...I could tell someone what I needed. But, if they, I think not really understanding the modeling aspect, they can’t accurately come up with a good enough experimental protocol to get what it is I need.” I hesitate, however, to make a one-to-one relation between the dance of agency and the coupling of simulation and experiment. While I did not distinguish between the two here, I wonder whether others have a more in-depth explanation of the simulation-experiment relationship.

The Chandrasekharan-Nersessian article focused on the cognitive processes and benefits of modeling complex systems, specifically in ISB. One idea that struck me was how effective exploring a topic through modeling is as a learning tool. In all three articles, actually, Nersessian was quick to point out that the modeler had a background in engineering, not biology. However, with some data and a bit of reading, this electrical engineer was able to make a significant breakthrough in biology. Nersessian writes, “This is a basic biological science discovery, generated by an electrical engineer, based on a few months of modeling. The finding is remarkable.” I think this illustrates the power of investigating topics through the modeling process.

If they exist, I think I would enjoy reading about similar practices situated around less complex models. That is, each of these examples showed pathways modeled with ODEs, which clearly wouldn’t be used in the K-12 setting. How does the idea of coupling transfer to that level? Can we expect to see the same benefits in cognition with a simpler model?
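For a picture of what “a pathway modeled with ODEs” means at its simplest, here is a sketch of my own, not taken from the articles: a two-step pathway A → B → C with first-order kinetics, made-up rate constants, and a basic Euler integration step.

```python
# Minimal sketch (not from the articles) of a pathway modeled with ODEs:
# a two-step pathway A -> B -> C with first-order kinetics.
k1, k2 = 0.5, 0.3          # hypothetical reaction rates
A, B, C = 1.0, 0.0, 0.0    # initial concentrations
dt = 0.01                  # Euler time step

for _ in range(int(20 / dt)):   # integrate over 20 time units
    dA = -k1 * A
    dB = k1 * A - k2 * B
    dC = k2 * B
    A, B, C = A + dA * dt, B + dB * dt, C + dC * dt

print(f"A={A:.3f}, B={B:.3f}, C={C:.3f}")  # nearly all of the mass ends up in C
```

Even stripped down this far, the model leans on rates of change, which is part of why I doubt this particular formalism would travel into K-12 classrooms unchanged.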

Friday, September 4, 2015

Week 3: Managing Complexity and Distributed Cognition

Nersessian’s articles reiterated the theme of The Mangle of Practice, describing how modelers’ intentions changed as they interacted with non-human agents (models they built and tested). C9 particularly represents the dance of agency, as she made significant contributions to her field that were tangential to her initial research question.

Nersessian’s research illustrates several examples of the co-evolution of human intentions and non-human agents, describing the data, computational, and collaborative constraints that modelers face and how modelers manage these challenges. In “Coupling simulation and experiment,” MacLeod and Nersessian show how one researcher manages these challenges using a bimodal strategy, coupling modeling with experimentation to reduce collaborative and data constraints. In “Building simulations” they follow an alternative strategy: modeling while collaborating with experimentalists. They explain that ISB draws on computation, engineering, and biology, and that different individuals will be successful combining these practices in different ways based on their experiences and ability to learn new practices.

This idea is important to apply to modeling complex systems in K-12 instruction. Based on students’ age, experiences, and the teacher’s background, modeling might be best implemented through mathematical or agent-based models. Students may gain more from using reading, observation, or experimentation to inform their models.

Recognizing that researchers have material constraints makes me consider the constraints of time and materials in a classroom. While it might be ideal to model invasive species from data collected by tracking their growth, students might not have time to monitor these plants, or might not have access to them. Teachers will need to plan deliberately to accommodate such constraints.

Modelers working from mesoscopic views recognize that they do not need to understand every interaction within a system, just the inputs, outputs, and important pathways within it. To implement modeling, teachers and students will also need to be able to recognize which portions of their system should be modeled and which can be simplified.

I think the cost of managing complexity is worthwhile based on the “Building Cognition” article. Many of the theories presented about distributed cognition could be applied to a classroom setting. Chandrasekharan and Nersessian describe how models serve as an “external imagination.” Especially for younger students, having a tool that can help them visualize the interaction of agents in complex systems could help them gain a deeper understanding of these systems, rather than focusing only on agent behaviors or aggregate outcomes. This “external imagination” could help them avoid the “slippage” between levels that Wilensky describes by providing them with space to organize their ideas, and it could help them break down systems in ways that experimentation could not (page 35).
Models allow researchers to collaborate effectively and encourage collaboration, since researchers depend on others’ data. I wonder if models would help support ELL students, struggling readers, or young children who do not have the vocabulary to describe complex systems but do have the sensorimotor skills to manipulate and develop models.

In a classroom, would teachers or students manage complexity? How? Can children benefit from distributed cognition as researchers do?  


Wednesday, September 2, 2015

Week 2: Modeling in Science

Pickering describes modeling as “an open-ended process with no determinate destination” (p. 19), which seems to align with his discussion throughout the chapter of what constitutes a practice. He perceives the modeling process/practice as influenced by culture and therefore intertwined with the world. Modeling in this sense is not copying, nor the generation of a simplistic, static object, machine, procedure, or goal. This conception appears to be at odds with the more commonplace thinking when the term “model” is used. A model is often a copy, a miniaturized or simplified snapshot. The idea of the cookbook lab could be considered a model in the traditional or more commonplace sense. For example, a student could follow a procedure to observe photosynthesis in plants and then answer questions. It is possible to push student thinking about photosynthesis through the use of discussion and critical thinking, but the outcome of a very simple cookbook lab is somewhat predetermined. Pickering, as well as Collins, White, & Fadel, is proposing a different approach to thinking about learning and instruction in the sciences. Pickering’s perspective is through the lens of learning science as an interactive, constantly changing, and messy series of practices instead of facts and predetermined outcomes.

Collins, White, & Fadel reiterate Pickering’s conception of a model and go further by identifying various types of models and characterizing their productivity. They describe scientific inquiry as a continuum, “a process of oscillating between theory and evidence” (p. 2). Modeling is a means of supporting this oscillation. It can touch on the four aspects of scientific inquiry they describe: “theorizing, questioning and hypothesizing, investigating, and analyzing and synthesizing” (p. 3). Their breakdown of the types of models provides examples of this intersection and ways to push scientific thinking through instruction.

After reviewing these readings, what interests me most in thinking about modeling in the sciences is the modeling of invisible processes. I have seen some computer simulations for chemistry involving atomic and molecular collisions, but many biological processes, such as aerobic respiration, happen on a microscopic or “invisible” level. There are visible effects that I think do lend themselves well to modeling, but I am curious about how students could go about investigating a process through the derivation and revision of models when the process may be microscopic and unfamiliar. Which of the types of models that Collins, White, & Fadel describe would be more or less productive in that endeavor? This is something I have been thinking about for some time and will continue considering.


Tuesday, September 1, 2015

Mangling with Practices

Our readings this week focused on both the practice and the practices of science. Unlike the articles read last week, these did not specifically relate to classroom instruction or learning environments, but instead focused on the community of scientists.

Collins detailed four different areas that he considered scientific knowledge: theories and models, forming questions and hypotheses, designing and carrying out investigations, and data analysis, although each of these areas can be considered part of the scientific process. Collins argued that schools too often base science education on memorization of facts and definitions, when science actually “involves inquiry to make sense of phenomena in the world and find solutions to problems we face as a society”. I tend to agree that most of what we consider to be science education falls woefully short of empowering students with the tools to be active participants in the scientific community.

Pickering offers an even bigger-picture view. He likens scientific practice to the culture of science. He describes it not as a static body of information that is to be learned and memorized (as it is traditionally taught), but as a dynamic relationship between human agents (scientists) and nonhuman agents (machines) in which they struggle together to make sense of the natural world around us. He calls this relationship the mangle. Both Pickering and Collins acknowledge that each new discovery leads to more questions. I appreciated the quote from a philosopher on page 6 of Collins, “the main source of problems in the world is solutions”. I think it aptly sums up why we must consider the culture of science a priority in science education.

As I read these articles and tried to situate them in what I understand about K-12 education, I was excited and overwhelmed by the possibilities. Collins details many different forms of models and gives examples to show how different models can be used to highlight different properties of a particular phenomenon. As a teacher, how necessary is it to be explicit about labeling types of modeling? Is it beneficial to discuss the different types of models as part of learning the process of modeling? Or does the idea of labeling them turn this dynamic process into static information? In other words, is it better for students to explore their own ideas before placing labels on them?

Monday, August 31, 2015

Thinking about "Practice" and "practices"

In week 1, we read two accounts of scientific modeling in action, in which secondary school students used computational, agent-based models to investigate a number of different complex systems. We discussed some of the immediate advantages to modeling exercises and pragmatic obstacles to implementing modeling in the K-12 classroom.

Our conversation about the affordances of modeling was directed mainly towards agent-based modeling. But Collins describes many more types of models that are used in science. Agent-based modeling is just one type of behavioral model, while Collins also lists examples of structural and functional/causal models. Not all of these models are conducive to thinking about complex systems, nor are they all equally viable as computational (i.e., programmable) models. But the wide variety of models he names is useful for framing conversations about the range of scientific practices we want students to engage with. His article as a whole, outlining the process of scientific inquiry, complements the proposals of the Next Generation Science Standards for reforming science classrooms. His list of models is therefore a powerful tool for arguing for modeling throughout the science curriculum, and for evaluating potential scaffolds for instruction.

We also discussed obstacles to model implementation, one of which was the limitation of assessments to evaluate students’ thinking in relation to modeling. Standardized assessments are well designed for measuring learning from direct instruction (of bodies of factual knowledge), but they are not as well suited for measuring learning from guided discovery experiences, such as modeling (which ask students to participate through disciplinary practices). These disciplinary practices, at the heart of modeling, are the procedures and processes that Collins describes in his article as the practice of inquiry. Collins does not use “practice” in the same sense as Pickering does when he refers to “scientific practice”; in Pickering’s case, “practice” refers to the overarching scientific culture, although he uses “practices” in the plural to refer to the everyday procedures of scientists in the discipline.

As I reread Pickering, I was able to resolve a confusion from one of my initial read-throughs. Originally, I recognized that Pickering places a heavy emphasis on machines when he talks about “material agency,” and I had trouble applying his theory to agent-based modeling, which typically deals with other living beings or chemical relationships found in nature (which in some sense can be brought under human control). I wonder, when Pickering mentions human vs. non-human agency, what he is lumping into “non-human.”

While I agree with most of Pickering’s assumptions and find them to be useful points of analysis, I don’t think he gives enough attention to the limitations of human cognition or to the individual differences between people. I recognize that this is a little beyond the scope of his argument, but it is necessary to consider for instructional and design purposes. If we are going to make the image of modeling presented in the Wilensky articles a wide-scale reality, we need to discuss how Pickering’s – and Collins’ – ideas can be useful for designing the scaffolds both teachers and students will need to implement modeling successfully.


Week 2: Modeling by scientists and by students

All four of our readings have emphasized emergent levels within scientific knowledge and practice. Last week, in “Thinking in Levels,” Wilensky defined emergent levels as “levels that arise from interactions of objects at lower levels." He explained levels in the context of science content, focusing on examples in biology, chemistry and physics, and ecology.
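To remind myself what that definition looks like in practice, here is a tiny sketch of my own (not from Wilensky, and far simpler than the models we have discussed): every agent follows only a local rule, yet a regular pattern shows up at the population level that no individual rule mentions.

```python
# Toy sketch of an emergent level: each agent only takes a random +1/-1 step,
# yet the population-level spread of positions grows in a predictable way,
# even though no agent's rule says anything about "spread."
import random

agents = [0] * 1000                      # 1000 agents, all starting at position 0
for tick in range(100):                  # 100 time steps
    agents = [x + random.choice((-1, 1)) for x in agents]

mean = sum(agents) / len(agents)
variance = sum((x - mean) ** 2 for x in agents) / len(agents)
print(f"mean position = {mean:.2f}, variance = {variance:.1f} (expected around 100)")
```

The variance of the positions grows with the number of steps even though no agent “knows” anything about variance – the aggregate pattern lives at a level above the rules that produce it.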

This week, in “The Mangle of Practice,” Pickering explained that the practice of science itself also consists of emergent levels. On page 22, Pickering describes the “dance of agency.” This dance consists of human scientist agents and non-human machine agents that interact in an emergent process. Scientists attempt to capture material agency with their machines, and the machines fail and present opportunities for revisions and accommodations to scientists’ theories, models, and technology. The scientists’ actions emerge from their observation of current machines and from their interaction with a higher “level”: the aggregate human society and its goals for the future, modeling new technology in accordance with these goals. The machines’ agency emerges from the practices and intentions of humans. In this dance, individual and aggregate levels contribute to emergent outcomes in the development of culture and technology.

Collins reiterates Pickering’s ideas about the mangle of practice with his description of design science, which “attempts to design systems that have desired properties” (human intentionality), while drawing on present understandings of the natural world (nonhuman agents).

Pickering also connects to Wilensky’s point that randomness on one level could lead to a desired outcome on another level, explaining on page 24 that, “captures and their properties just happen,” and that we have to find out, in practice, through “brute chance,” how scientists and machines will develop.

Like Wilensky, Collins argues that students in K-12 schools do not receive a science education based on complex or emergent systems and argues that people need a better understanding of the way scientists approach models in “an increasingly complex world.”

Though Collins described four types of scientific knowledge (theories and models, forming questions and hypotheses, designing and carrying out investigations, and data analysis), I thought it was interesting that all four types of knowledge were rooted in modeling. Models were used to generate and answer questions, were formed from exploratory studies and evaluated in confirmatory studies, and were generated from data analysis.

The emphasis on modeling in the chapter underscores for me the importance of modeling in K-12 classrooms. I was excited by the sheer variety of models described within the chapter. This made me realize that although I focused on modeling as a teacher, I only exposed my students to a narrow slice of scientific modeling through agent-based modeling. These other types of modeling present many more opportunities for modeling as part of science education. I wonder how best to implement them in a classroom – would it be best to model the same phenomenon in different ways? Match different phenomena to appropriate modeling strategies? Use the same type of model throughout a year to develop students’ skills with that type of model?