Intentional Tech: Principles to Guide the Use of Educational Technology in College Teaching

by Derek Bruff

Hardcover (1st Edition), $99.99

Overview

Chalkboards and projectors are familiar tools for most college faculty, but when new technologies become available, instructors aren’t always sure how to integrate them into their teaching in meaningful ways. For faculty interested in supporting student learning, determining what’s possible and what’s useful can be challenging in the changing landscape of technology.

Arguing that teaching and learning goals should drive instructors’ technology use, not the other way around, Intentional Tech explores seven research-based principles for matching technology to pedagogy. Through stories of instructors who creatively and effectively use educational technology, author Derek Bruff approaches technology not by asking “How to?” but by posing a more fundamental question: “Why?”


Product Details

ISBN-13: 9781949199154
Publisher: West Virginia University Press
Publication date: 11/01/2019
Series: Teaching and Learning in Higher Education
Edition description: 1st Edition
Pages: 240
Product dimensions: 5.00(w) x 8.00(h) x 0.70(d) inches

About the Author

Derek Bruff is the director of the Vanderbilt University Center for Teaching, where he helps faculty and other instructors develop foundational teaching skills and explore new ideas in teaching and learning. He is the author of Teaching with Classroom Response Systems: Creating Active Learning Environments.

Read an Excerpt

CHAPTER 1

Times for Telling

SEVERAL YEARS AGO, my oldest daughter's preschool held a Science Day. Parents were invited to come to school that day and do sciencey things. I volunteered, and I brought along a roll of Mentos breath mints and a two-liter Diet Coke. As the class of five-year-olds stood a safe distance in front of me, I opened the two-liter, poured the sleeve of breath mints in, and jumped back. Half a second later, the soda exploded in an eight-foot-tall geyser. Yes, I was the cool dad who showed the kids what happens when you put Mentos in Diet Coke. As it turned out, the five-year-olds hadn't seen all the viral videos yet.

Then the five-year-olds did what five-year-olds do. They asked, "Why?"

Now, I could have prefaced my demonstration with a ten-minute lecture (with PowerPoint, naturally) on the carbon dioxide dissolved in the soda, the activation energy necessary for that carbon dioxide to transition to gaseous form, the way the breath mint's surface roughness and ready dissolution decrease that activation energy, and the directional foaming of the water caused by all that carbon dioxide gas and the shape of the bottle. Any of the preschoolers still awake at that point would likely have enjoyed the subsequent Diet Coke geyser, but I'm not sure that any of them would have understood or cared about my lengthy explanation. Instead, I led with the demonstration, and when the kids asked "Why?" I offered a simple explanation about gas bubbles in the soda and how the breath mints cause them to come together all at once. That was all the explanation needed that day, and I still got to be the cool dad with the exploding Coke.

But I was struck by how much the order of my actions mattered.

By starting with the demonstration, I had created what Daniel Schwartz and John Bransford call a "time for telling." The preschoolers were ready for an explanation, and they were ready in two ways. Cognitively, they had seen the soda geyser and were thus ready to understand an explanation, at least more ready than they would have been without seeing the geyser. Affectively, they were motivated to hear the explanation. Having seen the geyser, they wanted to know how it worked.

This notion of creating times for telling is one of the most useful teaching principles I share in my consultations with faculty and other instructors. As experts, most of us have an intuition that we should explain first, then have students do something with that explanation. In a literature course, an instructor will lecture on the history and context of a text before having students read that text. In a mathematics course, the professor will present a theorem, then prove the theorem, then show how the theorem applies in a few examples. In an engineering program, students will take a series of courses on engineering principles but wait until senior year to engage in actual design projects. But in many cases, reversing this intuitive order leads to deeper learning.

A few years ago, a group of education researchers at Stanford University led by Bertrand Schneider developed a tabletop simulation of the vision system within the human brain called BrainExplorer. The system featured polymer reproductions of parts of the brain and eyes, along with cameras and infrared lights. This "tangible user interface," as the researchers called it, allowed users to explore the simulated neural network, trying different configurations to discover how light enters the eye and travels to the brain for processing. The researchers took a group of twenty-eight undergraduate and graduate students, none of whom had studied neuroscience, through a sequence of learning activities using BrainExplorer. Half of the students spent time playing with the tabletop simulation, while the other half read textbook-style introductions to the neuroscience of vision. Then the groups switched activities, and were tested on their understanding of the topic. The result? The students who started with BrainExplorer performed 25 percent better than the students who started with text-based explanations. The researchers ran the study again, replacing the text explanations with video explanations, and they got the same results.

Order matters. And, at least in some situations, explanations should follow, not precede, hands-on experience.

Why do students learn better when times for telling are created? For one possible reason, travel with me from Nashville, Tennessee, to the streets of central London. If we stand at a crosswalk on some of those streets and look down at our feet, we will see signs that read "Look Right." These signs likely saved my life more than once during trips to London. As an American, I'm used to road configurations where people drive on the right. That means that when I approach a crosswalk as a pedestrian, my brain expects oncoming traffic to come from the left. The "Look Right" signs in London are helpful reminders that people drive on the other side of the road in London and that I need to look right to check for oncoming cars.

I have in my head, as do you, a mental model of traffic flow. My mental model has been shaped by my experiences living in the United States, where people drive on the right. It's a fine mental model, and it serves me well where I live. But it's the wrong mental model for traffic in London and other places where people drive on the left. When I'm standing at a crosswalk in London that lacks those helpful reminder signs, I can close my eyes, visualize driving on the left, and correct my mental model for the current situation. But changing mental models is hard work. That's why I'm so thankful for those little signs.

In the same way, our students enter our classrooms with all kinds of mental models about how the world works. Some of those mental models are robust and accurate and helpful. Others are incomplete or inaccurate or useful in only certain situations. None of our students enter as blank slates, ready for us to pour information into their heads. Our job as instructors is to help our students develop better mental models, models that are more accurate, more useful, and more flexible to solve the problems our students encounter in the world. But changing mental models is hard work, and we humans aren't inclined to do it. Typically, we don't change our mental models unless directly confronted by some deficiency. When we face a problem our mental models can't help us solve, we're far more open to updating those models and willing to put in the effort to do so.

Lots of people, when confronted by challenges to their mental models, decide not to change. This is why Facebook debates over politics or Twitter arguments over climate change tend to go nowhere. But if we can create for our students experiences where they recognize that their mental models need improving, we can generate times for telling in which students are ready — cognitively and affectively — to change and learn. And when used intentionally, technology can help make that happen.

Bill the Jazz-Playing Accountant

I occasionally teach a statistics course, and in that course I include a unit on probability. Here's a probability question I like to ask my students.

Take a minute and answer the question. List out the statements (A, B, C, and so on) in order, from most likely to be true about Bill to least likely to be true about Bill. I ask my students to do the same, giving them a couple of minutes to work individually on this task.

Then I ask my students to get out their phones. I have a question for them about Bill, and I want them to answer the question using a classroom response system. Classroom response systems are technologies that enable instructors to rapidly collect and analyze student responses to multiple-choice and sometimes free-response questions. In the early years of the twenty-first century, these systems used dedicated handheld devices, often called "clickers," assigned to each student. These days, clicker systems are still available, but many instructors use a bring-your-own-device (BYOD) system that makes use of students' mobile devices — phones and laptops and tablets. Most BYOD systems support text-message responses, so students don't need expensive smart phones, and some systems are free for up to forty students per class, which means such systems are often feasible even in low-resource environments. I use a BYOD system in my stats course, thus my request for students to get out their phones.

Here's the multiple-choice question I give my students, asking them to report out the relationship among three of the statements in their personal ordering:

Which of the following is true for your ranking? (Here, > means "is more likely than.")

1. C > D > G
2. C > G > D
3. D > C > G
4. D > G > C
5. G > C > D
6. G > D > C

Which of these was true for your ranking of the eight statements about Bill? Figure 1 shows a typical distribution of responses from my students.

Recall that C = "Bill is an accountant," D = "Bill plays jazz," and G = "Bill is an accountant who plays jazz." Almost all of my students reported that, among these three statements, it was most likely that Bill is an accountant. Sure, that's a little stereotypical about accountants, but I'll buy that. Here's where things get weird: Half of my students thought it was more likely that Bill is a jazz-playing accountant than that Bill plays jazz. That is, they reported the more specific outcome (jazz + accountant) as more likely than the more general outcome (jazz). That can't be!

Why not? Let's imagine we assembled all the people in the world who fit Bill's description in one room. For easy math, let's assume there are 100 such people. We ask all the Bills who play jazz to raise their hands. Suppose 20 of them do. That means there's a 20 percent chance that the more general outcome (jazz) is true. Next, we ask all the Bills who play jazz and are accountants to raise their hands. None of the other 80 Bills put their hands up, because none of them play jazz. Of the 20 jazz-playing Bills, maybe 17 of them put their hands down, since they aren't accountants. That leaves 3 jazz-playing, number-crunching Bills. That means there's a 3 percent chance that the more specific outcome (jazz + accountant) is true. I've made up numbers here, but regardless of the actual jobs and hobbies of the Bills of this world, that more specific probability (jazz + accountant) has to be less than the more general probability (jazz).
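To make the arithmetic in that explanation explicit, here is the same room-of-Bills reasoning written as a probability statement, using the made-up numbers above (20 of 100 Bills play jazz; 3 of those 20 are also accountants). The general point holds for any events: a conjunction can never be more probable than either of its parts.

```latex
% The room-of-Bills reasoning in probability notation (illustrative numbers only).
P(\text{jazz} \cap \text{accountant})
  = P(\text{jazz}) \cdot P(\text{accountant} \mid \text{jazz})
  = \frac{20}{100} \cdot \frac{3}{20}
  = 0.03
  \;\le\; 0.20 = P(\text{jazz}).
```

Whatever the actual counts, the factor P(accountant | jazz) is at most 1, so the product can never exceed P(jazz).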

Here's another way to think about it (without numbers): If you were to bet twenty dollars on something, would you rather bet that Bill plays jazz, or that Bill plays jazz and is an accountant? If you bet on the latter, and Bill turned out to be a jazz-playing dentist, you would regret your bet. The more general outcome is just more likely than the more specific outcome.

Half of my students made the wrong bet. Why do so many people rank the probability of these statements incorrectly? Likely because we find the more specific outcome (jazz + accountant) more descriptive or useful than the more general outcome (jazz). This is an example of the conjunction fallacy, identified by Nobel Prize–winning psychologist Daniel Kahneman and his collaborator Amos Tversky and described in Kahneman's book Thinking, Fast and Slow. It's really common, and it gets in the way of the kind of probability modeling that my students need to do in my stats course.

During class, after the students have responded to the polling question and we've looked at the bar graph together, I ask a few students to talk through their reasoning. I point out that not all accountants are as dull as Bill seems to be (my dad, for instance, was a lot of fun), and then I explain the conjunction fallacy and how it applies to this problem, usually with a Venn diagram to supplement the room-full-of-Bills explanation I shared above. The students struggle a little with the explanation, but they all want to hear it. Why? Because through this sequence of technology-assisted activities, I have created a time for telling.

The technology is key here. First, the students need a chance to think about and respond to the multiple-choice question about their rankings independently. If I just asked for volunteers to respond, some students would, but other students would likely just wait and see what their more vocal peers said. For this classroom experiment to work, I need all the students to participate and to participate on their own, without being influenced by their peers. Second, the classroom response system collects and displays the aggregated student responses. That's critical because for my students to realize there's something challenging going on, they need to see that the class is split between two alternatives. That bar graph provides the right kind of challenge to their mental models. It effectively says to my students, "Your mental model might not be right." And this prepares them to listen to and make sense of the explanation that follows.

Creating a time for telling isn't the only way to use a classroom response system, but it can be a very effective way. Consider the following example from a different discipline.

Carl and His Rhinoceros

Ed Cheng teaches a course on evidence at the Vanderbilt University law school. He likes to create an active learning environment in his classroom, and he was an early adopter of classroom response systems. Although modern BYOD systems allow for a variety of free-response questions, including free text, numeric response, and clickable image questions, Cheng gets a lot of mileage out of old-fashioned, multiple-choice questions. He finds that the constrained structure of multiple-choice questions can help create times for telling.

Cheng participated in a working group on classroom response systems at the Vanderbilt Center for Teaching in 2016, and he shared a series of questions he used at the beginning of one week of class, to help students review material from the previous week. All of the questions involved a guy named Carl and his rhinoceros. The first one was fairly straightforward, as Cheng told me.

The correct answer is C. I am not a lawyer, but I understand from Cheng that Carl is responsible for predictable damages resulting from keeping a wild animal — like the ramming of a car — even if Carl takes steps to prevent those damages. There's a clear, unambiguous legal framework to apply in this case. Cheng said that, happily, all of his students answered this question correctly. He had a couple of students share their legal reasoning with the class, and, satisfied that they had conveyed the correct reasoning, he moved on.

The second question was more difficult.

The correct answer is C again. In this case, the wild animal rule arguably doesn't apply because the perceived danger of the rhino is from ramming and stampeding, not its weight causing subterranean damage. Carl's behavior is therefore judged under the regular rule, which asks if he took "reasonable care," and Carl did his part by keeping his pet behind a double electrified fence. Cheng's students didn't do well on this question. Only 6 percent selected the right answer, although 18 percent were half-correct in selecting choice A. This was a harder question for the students, but, like the first one, there's a single correct answer. Cheng had some students share their perspectives on the question, but then he told them that the most popular answer, choice D with 76 percent of the vote, was wrong. That was Cheng's first time for telling. When his students realized that most of them had chosen incorrectly, they sat up and listened to his explanation of the hypothetical.

Cheng's third question was very different.

Cheng reported that his students were split on this question: 31 percent for A, 39 percent for B, and 31 percent for C. Again, he called on students to justify their choices, drawing on all the legal reasoning they could muster. There were solid legal arguments for all three choices, and Cheng's students felt stuck. It was a time for telling!

Here's the thing about this third question: It doesn't have a single correct answer. A good lawyer could argue any of the three positions.

Cheng had two main goals in asking his students this series of questions about Carl and his rhinoceros. One was to see whether his students could muster the appropriate legal arguments for the situations in the first two questions, as a matter of review of the previous week's materials. Where students fell short, he worked with them to correct and refine their legal reasoning. The second goal, however, was more subtle and more important. Cheng wanted his students to know that in some situations, there's a clear legal answer to a question, and in other cases, there isn't. A good lawyer knows when there's room for interpretation and can make compelling arguments within that gray area.

This series of clicker questions, with two single-correct-answer questions followed by a no-correct-answer question, created a useful cognitive dissonance for Cheng's students. They were expecting a single correct answer for the third question and, in fact, many of them likely carried mental models that assumed that all legal situations had correct answers. When the bar graph on the classroom projector showed that the students were split on the third question and when Cheng told them that there was no correct answer, they were ready to hear his message about legal gray areas and the kinds of critical thinking that lawyers have to do.

(Continues…)


Excerpted from "Intentional Tech" by Derek Bruff.
Copyright © 2019 West Virginia University Press.
Excerpted by permission of West Virginia University Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Introduction
1. Times for Telling
2. Practice and Feedback
3. Thin Slices of Learning 
4. Knowledge Organizations
5. Multimodal Assignments
6. Learning Communities
7. Authentic Audiences
Conclusion
Notes
Bibliography