Three Acts – Counting with dots and first graders


I had an amazing time this afternoon visiting my wife’s first grade class. I’ve been talking forever about how great it is to take a step out of the usual routines in class and look at a new problem, and my wife invited me in to try it with her students.

Here’s the run-down.

Act 1

Student questions (and the number of students that also found the questions interesting):

  • Why do the dots come together? (8)
  • Why are the dots making pictures and not telling us what they mean? (8)
  • Why are some dots going together into big dots, and others staying small? (13)
  • Why do some of the dots form blue lines before coming together?

My questions (and the number of students that humored me):

  • How many dots are there at the end? (8)
  • What is the final pattern of dots after the video ends? (11)

Guesses for the number of dots ranged from a low of 20 to a high of 90.

Act 2

What information did they want to know?

  • They wanted to see the video again.
  • Seven students asked about the numbers of tens or ones in each group. (I jumped on that vocabulary right away – based on my conversations with them, they seemed comfortable using it.)
  • I showed them the video and gave them this handout since I didn’t have video players for all of the students:
    grouping dots

What happened then was a series of amazing conversations with some really energetic and enthusiastic kids. They got right to work organizing and figuring out the patterns.
Screen Shot 2013-06-11 at 3.29.15 PM

Screen Shot 2013-06-11 at 3.31.36 PM

Act 3

We watched the video and discussed the results and how they got their answers. There were lots of great examples of student-created systems for keeping track of their counting. We then watched the Act 3 video:

While nobody had the total number correct, I was quite impressed with their pride in being close. More interesting was how little they cared that they didn’t get the exact answer. I asked who was between 70 and 80, and a few kids raised their hands, and then did the same with 50 – 70. One student was one off. Most were within ten or so of the correct answer. The relationship between their initial guesses and their answers after analysis was something we touched upon, but didn’t discuss outside of some one-on-one conversations.

The absolute highlight of the lesson was when I asked why they thought nobody had the exact answer. One student walked up to the projector screen without hesitation and pointed here:
Screen Shot 2013-06-13 at 4.43.53 PM

She said “this is what made it tough” and then sat back down.

We had a little more time, so we watched a sequel video:

I asked what they saw that was different aside from the colors. One student – the same one who first shouted ‘tens!’ in Act 1 – said right away that he had figured it out. We lacked the time to work through it, so we left it as a challenge to figure out for the next class.

Footnotes:

  • Any high school or middle school math teacher who wants to see how excited students can be when they are learning math needs to go take a group of elementary students through a Three Act lesson. I wish I had done this during the dark February months when things drag for me. My wife asked me to do this to see how it works, but I think I got a lot more enjoyment out of the whole experience.
  • I made a conscious decision not to include any symbolic numbers in this exercise. They add an extra layer of abstraction that takes away from the students figuring out what is going on. I almost put them back in when I wasn’t sure whether the video was obvious enough. I am really glad I left them out so the students could prove that they didn’t need that crutch.
  • The animation is written in Javascript using Raphael. You can see a fully editable version of the code in this JSFiddle.
  • All files are posted at 101 Questions in case you want to get the whole package.


Filed under reflection, teaching stories

My latest app project: 5K Race Timer


I happened to attend a meeting a little more than a month ago for the committee that organizes the Dragon Run. This is one of the school’s biggest events and requires quite a group of people to make happen. One of the biggest challenges that the group faces is the timing of the race and management of this data for the 120+ runners that participate in the official event.

The scheme used in previous years has been a very well thought out system of spotters with pencil-and-paper lists and a race timer placed at the finish line. When runners register, they give an estimated time for their run, which places them in one of a few speed categories. Each spotter has a list of runner numbers from each category so that they are searching for particular runner numbers throughout the time span of the race. When a runner crosses, the spotter records the time on their sheet. These times are later entered into a spreadsheet that gives everyone’s time, and the results are then collated and printed to give the results list after the race.

I’m trying not to be a hammer looking for a nail here, but this seemed like a perfect opportunity to use the power of the computer to reduce some of the mental and paper load of this task. My recent obsession with learning Python web apps, and an even more recent desire to learn about databases, quickly helped me see some easy ways to do this. These were the main points that I wanted as part of the UI:

  • Upon seeing a runner approach the finish line, the spotter should be able to send a ‘stop’ command at the moment that runner crosses the line. Calculating the finish time relative to the start of the race and recording that information is screaming for a computer solution. This capitalizes on the human spotter’s ability to recognize a runner’s number by sight, leaving the rest of the work to the program (a rough sketch of this idea follows the list).
  • We would need a simple interface for starting the race and stopping individual runners with a button press.
  • A non-trivial number of runners register on the day of the race. There needs to be a way to manually add runners to the database easily.
  • Mistakes also come up in recording times and entering data. Editing a runner’s information, including finish time, is a necessity.
  • Manually entering all the runners into the database before the race? Heck no. The organizers use a spreadsheet to record all of the registration information, which produces a CSV file just asking to be read into the database automatically.
  • Creating a list of runners based on category and ranked according to race finish time is another exhausting task when done purely by spreadsheet. This process in the program should make the most of SQL queries and Python/Bottle template features to generate the race results automatically.
  • To properly see if this system would work, I’d need a way of showing numbers passing by similar to what actually happens during a race. I put together a Javascript simulator to do this using Raphael that can be found here. This was especially important in testing the system with my student volunteers.
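
As a rough sketch of the first two points (not the actual race-day code), here is how a ‘stop’ press could be turned into a finish time relative to the race start and stored, along with the CSV import. The database schema and the CSV column names are invented for the example.

```python
import csv
import sqlite3
import time

# Hypothetical schema: one row per runner, finish_time in seconds after the start.
conn = sqlite3.connect("race.db")
conn.execute("""CREATE TABLE IF NOT EXISTS runners
                (number INTEGER PRIMARY KEY, name TEXT, category TEXT,
                 finish_time REAL)""")

race_start = None

def start_race():
    """Record the race start time once, when the starter says go."""
    global race_start
    race_start = time.time()

def stop_runner(runner_number):
    """Called when a spotter presses 'stop' for an approaching runner.
    The elapsed time relative to the start is all we need to store."""
    elapsed = time.time() - race_start
    conn.execute("UPDATE runners SET finish_time = ? WHERE number = ?",
                 (elapsed, runner_number))
    conn.commit()
    return elapsed

def import_registrations(csv_path):
    """Load the registration spreadsheet (exported as CSV) into the database.
    Assumes header columns named number, name, and category."""
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            conn.execute("INSERT OR IGNORE INTO runners (number, name, category) "
                         "VALUES (?, ?, ?)",
                         (int(row["number"]), row["name"], row["category"]))
    conn.commit()
```

In the actual project, calls like these would sit behind Bottle routes so that the spotters’ buttons and the registration upload all talk to the same database.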

The organizers agreed to let me run my software as a beta test to see if it would work for future years. More insight and conversation led to the idea of a mobile application for entering runner numbers. I agreed that this would be an easier way to locate runners than looking down a list, but had no idea how to do it. I did some research and figured out that jQuery Mobile would be the way to go. This was a difficult learning process, as I had never done this sort of thing before. I battled with the “ghost click” problem for a while until discovering that the ‘touchend’ event was an easy fix.
Screen Shot 2013-05-26 at 5.12.20 PM
Here’s the software as used on race day:

The system worked really well, but ran into some of the same challenges that the pencil-and-paper spotters have been battling since the event’s inception. It’s really hard to simultaneously grab the numbers of a group of 4-5 runners that all come in at once. The system my students devised for deciding who would enter a particular runner approaching the finish line broke down in two specific instances of this, and we missed runners. Luckily, the pencil-and-paper spotters picked up the ones we missed. Definitely still in beta. The process of generating results lists and recording times overall worked quite smoothly, and I’m really happy with how it turned out.

Notes:

  • Bottle, Twitter Bootstrap, jQuery Mobile, and vanilla Javascript were all in play.
  • I learned at the race that there are already software packages for this. Now that I’ve done a quick search, it seems that while there is a lot of race-timing software available, the ease of running it through a web interface (and snagging runners through a mobile interface) is a relatively young feature. This project was about me learning to do some new things, and in the end it cost me (and the school) nothing other than time.
  • I learned a lot about user-centered design through this project. Usability was a necessity, so I had to start from there and work backwards to build the code needed to make it happen. I really like thinking this way.


Filed under computational-thinking, programming

Speed of sound lab, 21st century version


I love the standard lab used to measure the speed of sound using standing waves. I love the fact that it’s possible to measure physical quantities that are too fast to really visualize effectively.

This image from the 1995 Physics B exam describes the basic set-up:
Screen Shot 2013-05-16 at 3.43.30 PM

The general procedure involves holding a tuning fork at the opening at the top of the tube and then raising and lowering the tube in the graduated cylinder of water until the tube ‘sings’ at the frequency of the tuning fork. The shortest tube length at which this occurs corresponds to the fundamental mode of vibration of the air in the tube, and this length can be used to find the speed of sound waves in the air.

The problem is in the execution. A quick Google search shows that speed of sound labs in both high school and university settings all use tuning forks as the frequency source. I have always found the same problems come up every time I have tried to do this experiment with tuning forks:

  • Not having enough tuning forks for the whole group. Sharing tuning forks is fine, but it raises the minimum time needed for the whole group to complete the experiment.
  • Not having enough tuning forks at different frequencies for each group to measure. At one of my schools, we had tuning forks of four different frequencies available. My current school has five. Five data points is not ideal for making a measurement, particularly for showing a linear (or other functional) relationship.
  • The challenge of simultaneously keeping the tuning fork vibrating, raising and lowering the tube, and making height measurements is frustrating. This (together with sharing tuning forks) is why this lab can take so long just to get five data points. I’m all for giving students the realistic experience of the frustration of real-world data collection, but this is made arbitrarily difficult by the equipment.

So what’s the solution? Obviously we don’t all have access to a lab quality function generator, let alone one for every group in the classroom. I have noticed an abundance of earphones in the pockets of students during the day. Earphones that can easily play a whole bunch of frequencies through them, if only a 3.5 millimeter jack could somehow be configured to play a specific frequency waveform. Where might we get a device that has the capacity to play specific (and known) frequencies of sound?

I visited this website and generated a bunch of WAV files, which I then converted into MP3s. Here is the bundle of sound files we used:
SpeedOfSoundFrequencies
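
If you would rather generate the tones yourself than download them, here is a minimal sketch using only the Python standard library. The specific frequencies, duration, and file names are example values, not necessarily the ones in the bundle above.

```python
import math
import struct
import wave

def write_tone(filename, frequency, duration=10.0, rate=44100, amplitude=0.8):
    """Write a pure sine tone of the given frequency (Hz) to a 16-bit mono WAV file."""
    n_samples = int(rate * duration)
    with wave.open(filename, "w") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(amplitude * 32767 *
                                  math.sin(2 * math.pi * frequency * i / rate)))
            for i in range(n_samples))
        w.writeframes(frames)

# Example frequencies chosen to keep the resonant tube lengths ruler-friendly
for f in (440, 512, 640, 800, 1024):
    write_tone("tone_%dHz.wav" % f, f)
```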

I showed the students the basics of the lab, holding the earphone close to the top of the tube with one hand while raising the tube with the other. After getting started on their own, the students quickly found an additional improvement to the technique by using the hook shape of their earphones:
Screen Shot 2013-05-16 at 4.03.13 PM

Data collection took around 20 minutes for all students, not counting students retaking data for some of the cases at the extremes. The frequencies I used kept the heights of the tubes measurable with the rulers we had around. This is the plot of our data, linearized as frequency vs. 1/4L, with a length correction factor of 0.4*diameter added on to the student data:
Screen Shot 2013-05-16 at 4.14.22 PM

The slope of this line is approximately 300 m/s when the best-fit line is allowed to have any intercept it wants, and it would be slightly higher if the regression were constrained to pass through the origin. I’m less concerned with that, and more excited about how smooth data collection was; it made this lab much less of a headache than it has been in the past.
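
For reference, the linearization comes from v = fλ with λ = 4(L + 0.4d) for a tube closed at one end, so plotting f against 1/(4(L + 0.4d)) gives a line whose slope is the speed of sound. Here is a quick sketch of that fit with numpy, using made-up sample measurements rather than the class data:

```python
import numpy as np

# Hypothetical measurements: driving frequency (Hz) and tube length above water (m)
f = np.array([440.0, 512.0, 640.0, 800.0, 1024.0])
L = np.array([0.179, 0.152, 0.119, 0.092, 0.069])
d = 0.035  # tube diameter (m), for the 0.4*d end correction

x = 1.0 / (4 * (L + 0.4 * d))        # linearized variable
slope, intercept = np.polyfit(x, f, 1)
print("speed of sound ~ %.0f m/s" % slope)
```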


Filed under physics, teaching stories

Visualizing the invisible – standing waves


I wrote a post more than a year ago on a standing waves lesson I did. Today I repeated that lesson with a few tweaks to maximize time spent looking at the frequency space of different sounds. The Tuvan throat singers, a function generator, and a software frequency analyzer (linked here) again all made an appearance.

We focused on the visceral experience of listening to pure, single frequency sound and what it meant. We listened for the resonant frequencies of the classroom while doing a sweep of the audible spectrum. We looked at the frequency spectrum of noises that sounded smooth (sine wave) compared to grating (sawtooth). We looked at frequencies of tuning forks that all made the same note, but at different octaves, and a student had the idea of looking at ratios. That was the golden idea that led to interesting conclusions while staring at the frequency spectrum.

Here is a whistle:
Screen Shot 2013-05-13 at 3.10.40 PM
…a triangle wave (horizontal axis measured in Hz):

Screen Shot 2013-05-13 at 3.09.45 PM

…a guitar string (bonus points if you identify which string it was):
Screen Shot 2013-05-13 at 3.12.14 PM

…and blowing across the rim of a water bottle:
Screen Shot 2013-05-13 at 3.14.04 PM

The frequencies for the guitar string are integer multiples of the fundamental – this is easily derived using a diagram and an equation relating a wave’s speed, frequency, and wavelength. It’s also easily seen in the spectrum image – all of the harmonics are equally spaced from each other and from the origin. The bottle, closely modeled by a tube closed at one end, shows odd multiples of the fundamental. Again, this is totally visible in the image of its spectrum above.
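
For anyone who wants to produce spectra like these without dedicated software, here is a rough sketch of the idea with numpy. It assumes a mono 16-bit WAV recording, and the file name is made up.

```python
import wave
import numpy as np

def spectrum(filename):
    """Return (frequencies, magnitudes) for a mono 16-bit WAV recording."""
    with wave.open(filename) as w:
        rate = w.getframerate()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    mags = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs, mags

# Peaks in the magnitude array sit at the harmonics: for a plucked string the peak
# frequencies should be integer multiples of the lowest one, while a bottle
# (a closed tube) should show only the odd multiples.
freqs, mags = spectrum("guitar_string.wav")   # hypothetical recording
mags[0] = 0                                   # ignore the DC component
print(freqs[np.argmax(mags)])                 # frequency of the strongest component
```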

I’m just going to say it here: if you are teaching standing waves and are NOT using any kind of frequency analyzer of some sort to show your students what it means to vibrate at multiple frequencies at once, you are at best missing out, and at worst, doing it plain wrong.


Filed under physics, teaching philosophy

Rethinking the headache of reassessments with Python


One of the challenges I’ve faced in doing reassessments since starting Standards Based Grading (SBG) is dealing with the mechanics of delivering those reassessments. Though others have come up with brilliant ways of making these happen, the design problem I see is this:

  • The printer is a walk down the hall from my classroom, requires an ID swipe, and often requires a paper cutter as well (in the case of multiple students being assessed).
  • We are a 1:1 laptop school. Students also tend to have mobile devices on them most of the time.
  • I want to deliver reassessments quickly so I can grade them and get them back to students immediately. Minutes later is good, same day is not great, and next day is pointless.
  • The time required to generate a reassessment is non-zero, so there needs to be a way to scale for times when many students want to reassess at the same time. The end of the semester is quickly approaching, and I want things to run much more smoothly this semester in comparison to last.

I experimented last fall with having students run problem generators on their computers for this purpose, but there was still too much friction in the system. Students forgot how to run a Python script, got errors when they entered their answers incorrectly, and had scripts with varying levels of errors in them (and their problems) depending on when they downloaded their file. I’ve moved to a web form (thanks Kelly!) for requesting reassessments the day before, which helps me plan ahead a bit, but I still find it takes more time than I think it should to put these together.

With my recent foray into web applications through the Bottle Python framework, I’ve finally been able to piece together a way to make this happen. Here’s the basic outline for how I think I see this coming together – I’m putting it in writing to help make it happen.

  • Phase 1 – Looking Good: Generate cleanly formatted web pages using a single page template for each quiz. Each page should be printable (if needed) and should allow for questions that either have images or are pure text. A function should connect a list of questions, standards, and answers to a dynamic URL. To ease grading, there should be a teacher mode that prints the answers on the page.
  • Phase 2 – Database-Mania: Creation of multiple databases for both users and questions. This will enable each course to have its own database of questions, sorted by standard or tag. A user can log in and the quiz page for a particular day will automatically appear; no emailing links or PDFs, and no picking up prints from the copier. Instead of connecting to a list of questions (as in Phase 1), the program will request that list of question numbers from a database, and then generate the pages for students to use.
  • Phase 3 – Randomization: This is the piece I figured out last fall, and it has a couple of components. The first is that I want to pick the standard a student will be quizzed on, and then have the program choose a question (or questions) from a pool related to that particular standard. This makes reassessments look different for different students. On top of this, I want some questions themselves to have randomized values so students can’t say ‘Oh, I know this one – the answer’s 3/5’. They won’t all be this way, and my experience doing this last fall helped me figure out which problems work best for this. With this, I would also have instant access to the answers with my special teacher mode. (A rough sketch of this phase follows the list.)
  • Phase 4 – Sharing: Not sure when/if this will happen, but I want a student to be able to take a screenshot of their work for a particular problem, upload it, and start a conversation about it with me or other students through a URL. This will also require a new database that links users, questions, and their work to each other. Capturing the conversation around the content is the key here – not a computerized checker that assigns a numerical score to the student by measuring % wrong, numbers of standards completed, etc.
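
To make Phase 3 concrete, here is a minimal sketch of the kind of route I have in mind, using Bottle and sqlite3. The table layout, the {a}/{b} placeholders, the template name, and the ?teacher=1 flag are all invented for the example rather than taken from a finished app.

```python
import random
import sqlite3
from bottle import Bottle, request, template

app = Bottle()
db = sqlite3.connect("questions.db")

@app.route("/quiz/<standard>")
def quiz(standard):
    """Serve a reassessment for one standard: pick a random question from that
    standard's pool, randomize its numbers, and only show answers in teacher mode."""
    rows = db.execute(
        "SELECT prompt_template, answer_template FROM questions WHERE standard = ?",
        (standard,)).fetchall()
    prompt_tpl, answer_tpl = random.choice(rows)

    # Randomize the values so 'the answer is 3/5' can't make the rounds.
    a, b = random.randint(2, 9), random.randint(2, 9)
    prompt = prompt_tpl.format(a=a, b=b)
    answer = answer_tpl.format(a=a, b=b)

    teacher_mode = request.query.get("teacher") == "1"
    return template("quiz_page", prompt=prompt,
                    answer=answer if teacher_mode else None)

# app.run(host="localhost", port=8080)
```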

The bottom line is that I want to get to the conversation part of reassessment more quickly. I preach to my students time and time again that making mistakes and getting effective feedback is how you learn almost anything most efficiently. I can have a computer grade student work, but as others have repeatedly pointed out, work that can be graded by a computer is at the lower level of the continuum of understanding. I want to get past the right/wrong response (which is often all students care about) and get to the conversation that can happen along the way toward learning something new.

Today I tried my prototype of Phase 1 with students in my Geometry class. The pages all looked like this:

Image

I had a number of students out for the AP Mandarin exam, so I had plenty of time to talk with the students who were there about their answers. It wasn’t the standard process of taking quiz papers from students, grading them on the spot, and then scrambling to get around to everyone to talk about the paper they had just written on. Instead I sat with each student and had them show me what they did to get their answers. If they were correct, I sometimes chose to talk to them about it anyway, because I wanted to see how they did it. If they had a question wrong, it was easy to immediately talk to them about what they didn’t understand.

Though this wasn’t my goal at the beginning of the year, I’ve found that my technological and programming obsessions this year have focused on minimizing the paperwork side of this job and maximizing opportunities for students to get feedback on their work. I used to have students go up to the board and write out their work. Now I snap pictures on my phone and beam them to the projector through an Apple TV. I used to ask questions of the entire class on paper as an exit ticket, collect them, grade them, and give them back the next class. I’m now finding ways to do all of this electronically, almost instantly, and without requiring students to log in to a third party website or use an arbitrary piece of hardware.

The central philosophy of computational thinking is the effort to utilize the strengths of computers to organize, iterate, and use patterns to solve problems. The more I push myself to identify my own weaknesses and inefficiencies, the more I am seeing how technology can make up for those negatives and help me focus on what I do best.


Filed under computational-thinking, programming, teaching philosophy

Assessing assessment over time – similar triangles & modeling


I’ve kept a question on my similar triangles unit exam over the past three years. While the spirit has generally been the same, I’ve tweaked it to address what seems most important about this kind of task:
Screen Shot 2013-04-30 at 3.27.28 PM

My students are generally pretty solid when it comes to seeing a proportion in a triangle and solving for an unknown side. A picture of a tree with a shadow and a triangle already drawn on it is not a modeling task – it is a similar triangles task. The following two elements of the similar triangles modeling concept seem most important to me in the long run:

  • Certain conditions make it possible to use similar triangles to make measurements. These conditions are the same conditions that make two triangles similar. I want my students to be able to use their knowledge of similarity theorems and postulates to complete the statement: “These triangles in the diagram I drew are similar because…”
  • Seeing similar triangles in a situation is a learned skill. Dan Meyer presented on this a year ago, and emphasized that a traditional approach rushes the abstraction of this concept without building a need for it. The heavy lifting for students is seeing the triangles, not solving the proportions.

If I can train students to see triangles around them (difficult), wonder if they are similar (more difficult), and then have confidence in knowing whether they can or can’t use them to find unknown measurements, I’ve done what I set out to do here. What still seems to be missing in this year’s version is the question of whether the triangles actually are similar, or under what conditions they are similar. I assessed this elsewhere on the test, but it is so important to the concept of mathematical modeling as a lifestyle that I wish I had included it here.


Filed under reflection, teaching philosophy

(Students) thinking like computer scientists


It generally isn’t too difficult to program a computer to do exactly what you want it to do. This requires, however, that you know exactly what you want it to do. In the course of doing this, you make certain assumptions because you think you know beforehand what you want.

You set the thermostat to be 68º because you think that will be warm enough. Then when you realize that it isn’t, you continue to turn it up, then down, and eventually settle on a temperature. This process requires you as a human to constantly sense your environment, evaluate the conditions, and change an input such as the heat turning on or off to improve them. This is a continuous process that requires constant input. While the computer can maintain room temperature pretty effectively, deciding whether the temperature is a good one or not is something that cannot be done without human input.

The difficulty is figuring out exactly what you want. I can’t necessarily say what temperature I want the house to be. I can easily say ‘I’m too warm’ or ‘I’m too cold’ at any given time. A really smart house would be able to take those simple inputs and figure out what temperature I want.

I had an idea for a project for exploring this a couple of years ago. I could try to tell the computer using levels of red, green, and blue exactly what I thought would define something that looks ‘green’ to me. In reality, that’s completely backwards. The way I recognize something as being green never has anything to do with RGB, or hue or saturation – I look at it and say ‘yes’ or ‘no’. Given enough data points of what is and is not green, the computer should be able to find the pattern itself.

With the things I’ve learned recently programming in Python, I was finally able to make this happen last night: a page with a randomly selected color presented on each load:
Screen Shot 2013-04-18 at 9.51.51 PM

Sharing the website on Twitter, Facebook, and email last night, I was able to get friends, family, and students hammering the website with their own perceptions of what green does and does not look like. When I woke up this morning, there were 1,500 responses. By the time I left for school, there were more than 3,000, and tonight when my home router finally went offline (as it tends to do frequently here) there were more than 5,000. That’s plenty of data points to use.

I decided this was a perfect opportunity to get students finding their own patterns and rules for a classification problem like this. There was a clearly defined problem that was easy to communicate, and I had lots of real data to check a theoretical rule against. I wrote a Python program that would take an arbitrary rule, apply it to the entire set of 3,000+ responses from the website, and compare its classifications of green/not green to those of the actual data set. A perfect rule for the data set would correctly predict the human responses 100% of the time.

I was really impressed with how quickly the students got into it. I first had them go to the website and classify a string of colors as green or not green – some of them were instantly entranced by the unexpected therapeutic effect of clicking the buttons in response to the colors. I soon convinced them to move on to the more active role of trying to figure out their own patterns. I pushed them to the http://www.colorpicker.com website to choose several colors that clearly were green, and others that were not, and to try to identify a rule that described the RGB values of the green ones.

When they were ready, they started categorizing their examples and being explicit in the patterns they wanted to try. As they came up with their rules (e.g. green has the greatest level) we talked about writing that mathematically and symbolically – suddenly the students were quite naturally thinking about inequalities and how to write them correctly. (How often does that happen?) I showed them where I typed it into my Python script, and soon they were telling me what to type.

rgbwork

In the end, they figured out that the difference between the green level and each of the other two levels was the important element, something that I hadn’t tried when I was playing with it on my own earlier in the day. They really got into it. We had a spirited discussion about whether G+40>B or G>B+40 is correct for comparing the levels of green and blue.
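
The checking script is simple at its core. Here is a hedged sketch of the idea; the file name, column layout, and the exact threshold are made up for illustration, but the rule has the same shape as the one the class settled on.

```python
import csv

def is_green(r, g, b):
    """A candidate rule: green wins if it beats both other channels by enough.
    The threshold of 40 is just an example value to play with."""
    return g > r + 40 and g > b + 40

# responses.csv is assumed to hold one website response per line: r, g, b, answer
with open("responses.csv") as f:
    rows = [(int(r), int(g), int(b), ans == "green")
            for r, g, b, ans in csv.reader(f)]

matches = sum(1 for r, g, b, human in rows if is_green(r, g, b) == human)
print("rule agrees with %.1f%% of human responses" % (100.0 * matches / len(rows)))
```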

In the end, their rule agreed with 93.1% of the human responses from the website, which beat my personal best of 92.66%. They clearly got a kick out of knowing that they had not only improved upon my answer, but that their logical thinking and mathematically defined rules did a good job of describing the thinking of thousands of people’s responses on this question. This was an abstract task, but they handled it beautifully, both a tribute to the simplicity of the task and to their own willingness to persist and figure it out. That’s perplexity as it is supposed to be.

Other notes:

  • One of the most powerful applications of computers in the classroom is getting students’ hands on real data – gobs of it. There is a visible level of satisfaction when students can talk about what they have done with thousands of data points that have meaning they understand.
  • I happened upon the perceptron learning algorithm on Wikipedia and was even more excited to find that the article included Python code for the algorithm. I tweaked it to work with my data and had it train using just the first 20 responses to the website. Applying this rule to the checking script I used with the students, it correctly predicted 88% of the human responses. That impresses me to no end. (A generic sketch of the algorithm appears after these notes.)
  • A relative suggested that I should have included a field on the front page for gender. While it might have cut down on the volume of responses, I am kicking myself for not thinking to collect that sort of thing, just for analysis.
  • A student also pointed out other data that could be collected this way that interested her. First on the list was color-blindness: what does someone who is color blind actually see? Is it possible to use this concept to collect data that might help answer that question? This was something genuinely interesting to her, and I’m intrigued and excited by the level of interest she expressed.
  • I plan to take a deeper look at this data soon enough – there are a lot of different aspects of it that interest me. Any suggestions?
  • Anyone that can help me apply other learning algorithms to this data gets a beer on me when we can meet in person.
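
For the curious, the perceptron idea mentioned in the second note fits in a few lines. This is a generic sketch rather than the Wikipedia code I actually used, and it assumes the training data comes as ((r, g, b), is_green) pairs with channel values scaled to the range 0 to 1.

```python
def train_perceptron(data, epochs=20, rate=0.1):
    """data: list of ((r, g, b), is_green) pairs with channel values in [0, 1].
    Returns weights (bias, w_r, w_g, w_b) for a linear green/not-green classifier."""
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (r, g, b), label in data:
            x = (1.0, r, g, b)   # 1.0 is the bias input
            predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            error = (1 if label else 0) - predicted
            w = [wi + rate * error * xi for wi, xi in zip(w, x)]
    return w

def classify(w, r, g, b):
    """Apply the learned weights to a new color."""
    return sum(wi * xi for wi, xi in zip(w, (1.0, r, g, b))) > 0
```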


Filed under computational-thinking, reflection, teaching stories

Building a need for math – similar polygons & mobile devices


The focus of some of my out-of-classroom obsessions right now is on building the need for mathematical tools. I’m digging into the fact that many people do well on a daily basis without doing what they think of as mathematical thinking. That’s not even my claim – it’s a fact. It’s also why people claim math is irrelevant: what they see as math (school math) almost never enters the scene in their day-to-day interactions with the world.

The human brain is pretty darn good at estimating size or shape or eyeballing when it is safe to cross the street – there’s no arithmetic computation there, so one could argue that there’s no math either. The group of people feeling this way includes many adults, and a good number of my own students.

What interests me these days is spending time with them hovering around the boundary of the capabilities of the brain to do this sort of reasoning. What if the gut can’t do a good enough job of answering a question? This is when measurement, arithmetic, and other skills usually deemed mathematical come into play.

We spend a lot of time looking at our electronic devices. I posed this question to my Geometry and Algebra 2 classes on Monday:
Screen Shot 2013-04-10 at 2.45.41 PM

The votes were 5 for A, 5 for B, and 14 for C. There was some pretty solid debate about why they felt one way or another. They made sure to note that the corners of the phone were not portrayed accurately, but aside from that, they immediately saw that additional information was needed.

Some students took the image and made measurements in Geogebra. Some measured an actual 4S. Others used the engineering drawing I posted on the class blog. I had them post a quick explanation of their answers on their personal math blogs as part of the homework. The results revealed their reasoning, which was often right on. They also showed some examples of flawed reasoning that I didn’t expect, something I now know I need to address in a future class.
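
The calculation that the correct explanations boiled down to is a short one: two rectangles are similar exactly when their width-to-height ratios match. A sketch, with placeholder measurements rather than the actual numbers from the image or the engineering drawing:

```python
def similar(w1, h1, w2, h2, tolerance=0.02):
    """Two rectangles are (approximately) similar if their width:height ratios match."""
    return abs(w1 / h1 - w2 / h2) <= tolerance

# Placeholder numbers: measurements of the projected image vs. the real device
print(similar(4.9, 9.6, 5.9, 11.5))
```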

At the end of class today when I had the Geometry class vote again, the results were a bit more consistent:
Screen Shot 2013-04-10 at 3.56.40 PM

The students know these devices. Even those that don’t have them know what they look like. It required them to make measurements and some calculations to know which image was correct. The need for the mathematics was built into the activity. It was so simple to get them to make a guess in the beginning based on their intuition, and then figure out what they needed to do, measure, or calculate to confirm that intuition through the idea of similarity. As another chance at understanding this sort of task, I ended today’s class with a similar challenge:

Screen Shot 2013-04-10 at 4.04.31 PM

My students spend much of their time staring at a Macbook screen whose dimensions are slightly off from those of a standard television screen (8:5 vs. 4:3). They do see the Smartboard in the classroom, which has this shape, and I know they have seen it before. I am curious to see what happens.


Filed under geometry, reflection, Uncategorized

Volumes of Revolution – Using This Stuff.


As an activity before our spring break, the Calculus class put its knowledge of finding volumes of revolution to use by, well, finding the volumes of things. It was easy to find different containers to use for this – a sample:
DSC_0164

IMG_0573

We used Geogebra to place points and model the profile of the containers using polynomials. There were many rich discussions about wise placement of points and which polynomials make more sense to use. One involved the subtle differences between these two profiles and what they meant for the resulting volume through calculus methods:

Screen Shot 2013-04-08 at 4.19.33 PM

The task was to predict the volume and then use flasks and graduated cylinders to accurately measure it. Lowest error wins. I was happy, though, that by the end nobody really cared about ‘winning’. They were motivated on their own to theorize why their calculated answer was high or low, and then adjust their model to test their theories and see how their answer changed.
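
The underlying computation, for anyone who wants it outside of Geogebra, is the disk method V = π∫f(x)² dx applied to the fitted profile. Here is a sketch with numpy, using an invented polynomial profile rather than one of the actual containers:

```python
import numpy as np

# Hypothetical profile: radius of the container (cm) as a function of height x (cm),
# from a polynomial fitted to points placed on a photo of the container.
profile = np.poly1d([0.002, -0.05, 0.6, 2.0])   # example coefficients only

def volume_of_revolution(f, a, b, n=10000):
    """Disk method: V = pi * integral of f(x)^2 dx from a to b, via the trapezoid rule."""
    x = np.linspace(a, b, n)
    y = f(x) ** 2
    dx = (b - a) / (n - 1)
    return np.pi * np.sum((y[:-1] + y[1:]) / 2) * dx

v = volume_of_revolution(profile, 0, 12)
print("predicted volume: %.0f cm^3 (= %.0f mL)" % (v, v))
```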

As usual, I have editorial reflections:

  • If I had students calculate the volume by hand with integration every time, they would have been much more reluctant to adjust their answers and figure out why the discrepancies existed. Integration within Geogebra was key to this being successful. Technology greases the rails of mathematical experimentation in a way that nothing else does.
  • There were a few mini-lessons that needed to happen along the way as the students worked. They figured out that the images had to be scaled so the dimensions in Geogebra matched the actual dimensions of the object. They figured out that measurements were necessary to make this work. The task demanded that the mathematical tools be developed, so I showed them what they needed as the need arose. It would have been a lot more boring and algorithmic if I had done all of the presentation work up front and they had just followed steps.
  • There were many opportunities for reinforcing the fundamentals of the Calculus concepts through the activity. This is a tangible example of application – the actual volume is either close to the calculated volume or not – there’s a great deal more meaning built up here that solidifies the abstraction of volume of revolution. There were several ‘aha’ moments and I saw them happen. That felt great.


Filed under calculus, geogebra, teaching stories

Coding IS a super(edu)power


I’ve been really impressed by the Dan Meyer/Dave Major collaboration. If you don’t know what I’m talking about, you need to click on that link immediately. Seeing both Dan and Dave post on their respective blogs about the thought and rationale that goes into these activities is like a master class in pedagogy, digital media, and user design.

The common thread that I really like about these tools is the clean and minimalist way they pose an idea, encourage a bit of play and intuition, and then get out of the way. Dan has talked about these ideas philosophically for a while, and seeing Dave make them happen is really exciting. They talk about this being the future of textbooks, but I am willing to wager that textbook publishers will get fidgety about displaying a task to a user atop a blank white screen. The trend has so far been in the other direction, so I am skeptical, but I am hopeful that they will start to listen. These exercises are like a visit to the Museum of Modern Art. Textbooks and online learning otherwise tend to look like either a visit to Chuck-E-Cheese or the town library, over-thinking or under-thinking the power of aesthetics to create a learning environment that is stimulating enough, but not distracting.

Being a committed Twitter follower, I of course interrupted their workflow with suggestions. I was looking for an easy way to collect student responses to a question along the lines of Activeprompt, but for tasks that are not about finding a location. I had posed a question to my Geometry class and was really excited about greasing the rails for gathering student responses and putting them in one place. This is the same idea as what Dan/Dave had done, but with a bit less of a framework pushing it in a direction.

Dave’s suggestion was, well, intimidating:

Screen Shot 2013-03-21 at 4.48.19 PM

I had been playing around with web2py, Django, Laravel, and other template frameworks that said they would make things easy for me, but it just didn’t click how they would do this. I have done lots of small Python projects, but the prospect of making a website seemed downright unlikely. I spent three hours putting together this gem using the CSS I had learned from CodeAcademy:
Screen Shot 2013-03-21 at 4.56.54 PM

I was not proud of this, but it was the best I thought I could do.

Through the power of Twitter, I was able to have an actual conversation with Dave and learn how he put his own work together. He uses frameworks such as Raphael.js and Sinatra in a way that does just enough to achieve the design goal. I learned that he wasn’t doing everything from scratch. He took what he needed from what he knew about these different tools and constructed precisely what he envisioned for his application. I prefer Python to Ruby because, well, I don’t know Ruby. I found Bottle, which works beautifully as a small and simple set of tools for building a web application in Python, just as Dave had done with his tools.

Using Bottle and continuing to learn how it works, I made this yesterday.
BFwz3k5CQAEE2ts.png-large

I shared it with Dave, and he revealed another of his design secrets: Bootstrap. Again, I was dumbstruck by the fact that this sort of tool exists, and that I hadn’t considered that it might. This led me to clean up my previous submission and reconsider what might be possible. With a bit more tinkering, I turned this into what I had envisioned: a flexible tool for collecting and sharing student responses to a question.
Screen Shot 2013-03-21 at 5.12.14 PM

I was just tickled pink. Dave had shown me his prototype for what he made in response to my prompt – I was blown away by it, as with the rest of his work. Today, however, I proudly used my web app with two of my classes and was happy to see that it worked as designed.
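
For a sense of scale, the core of an app like this fits in a handful of lines of Bottle. This is a stripped-down sketch with invented route names, an inline template, and an in-memory list standing in for whatever storage the real version uses:

```python
from bottle import Bottle, request, template, redirect

app = Bottle()
responses = []   # the real app would persist these somewhere

PAGE = """
<h2>{{question}}</h2>
<form method="POST" action="/respond">
  <input name="answer"><button>Submit</button>
</form>
<ul>
% for r in responses:
  <li>{{r}}</li>
% end
</ul>
"""

@app.route("/prompt")
def show_prompt():
    """Show the question and every response collected so far, all in one place."""
    return template(PAGE, question="What do you notice?", responses=responses)

@app.post("/respond")
def respond():
    """Record a student's response, then send them back to the shared page."""
    responses.append(request.forms.get("answer"))
    redirect("/prompt")

app.run(host="localhost", port=8080)
```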

The point behind writing about this is not to brag about my abilities – I don’t believe there is anything to brag about here. Learning to code has gotten a good mix of press lately on both the positive and negative side. It is not necessarily something to be learned on its own, for its own sake.

I do want to emphasize the following:

  • My comfort with coding is developed enough at this point that I could take my idea for how to do something in the classroom using programming and piece it together so that it could work. I got to this point by messing around and leaving failed projects and broken code behind. This is how I learn, and it has not been a straight line journey.
  • If I were not in the classroom on a regular basis, I doubt I would have these ideas for what I could do with coding, even if I had the time to focus on it completely. In other words, if I ditched the classroom to code full time (which I am not planning to do), I would run out of things to code.
  • Twitter and the internet have been essential to my figuring out how to do this. Chatting virtually with Dave, as an example, was how I learned there was a better way than the approach I was taking. There are no other people in my real world circles who would have introduced me to the tools that I’ve learned about from Dave and other people in the twitterverse. Face to face contact is important, but it’s even better getting virtual face time with people that have the expertise and experience to do the things I want to learn to do.
  • I have been writing code and learning to code from the perspective of trying to do a specific and well defined task. This is probably the most effective and authentic learning situation around. We should be looking for ways to get students to experience this same process, but not by pushing coding for its own sake. As with any technology, the use needs to be defined and demanded by the task.
  • The really big innovations in ed-tech will come from within because that’s where the problems are experienced by real people every day. Outsiders might visit and see a way to help based on a quick scan of what they perceive as a need. I’m not saying outsiders won’t or can’t generate good ideas or resources. I just think that tools need to be designed with the users in mind. The best way to do this is to give teachers the time, resources, and the support to build those tools themselves if they want to learn how.

You can check out my code at Github here. Let me know if you want to give it a shot or if you have suggestions. This experiment is far from over.


Filed under computational-thinking, programming, teaching philosophy