Every time you fiddle around with anything, the world changes.
My US2 class is finally up and running, though it still needs some filling in. At least the ground work is done and the structure is set up. I’m one of those people who works from the top down. Once I’ve got an outline, once I’ve thought through all the general twists and turns of where I’m headed, the rest is just grunt work. Thinking is hard; working is easy.
I’ve taught this class before, but never in this format. I’m teaching it from Home Campus, where I am the only dancing bear in the circus – there is just me, alone in a room, staring at a video camera and a couple of screens. On one of the screens I can see myself. The other is divided into three parts, and on each part is a group of high school students almost 300 miles away. In between us is another campus – they take the feed in from the three high schools and send it down to me, and vice versa. It’s an interesting set-up.
One of the things I learned from having US1 in this set-up last semester is that having discussions in this format does not work. For one thing I can’t really see individual students – they’re just moving blobs at the distance I have to stand from the screen. And for another, they can’t really interact with each other very well because of the mechanics of actually speaking in class, combined with the small lag times that you get with this sort of arrangement. So discussions always ended up as me asking each student individually what they thought of the readings. That's not a discussion. That's an oral quiz.
This semester I decided I would move the discussions online. The university has a system set up expressly for that sort of thing, and I have made use of it before for other classes. It works pretty well, once you get the students started on it. You have to make them post things across multiple days, because otherwise people just swoop in, post stuff, and never return – but that’s not that hard to arrange.
The problem is that once you decide to organize things this way you have to have some formal rules governing the process. Students have to make X number of posts and each post has to have A, B, and C in it, and so on. This is a lot more work than the previous discussions, which were (or should have been) – wait for it – discussions. People talking.
And if you’re going to make them work harder, you have to give them appropriate credit for doing so. This means raising the share of the total grade that discussions are worth, devising grading rubrics so you can evaluate the students' effort and grade them appropriately, and so on.
And if you are giving more credit for that, then you have to give less credit for something else. You can only have 100% of a grade. If discussions suddenly go from 10% of the grade to 40% of the grade, something else has to be worth less. In that class, there are only exams besides that. There used to be three of them, and they were collectively worth 90% of the grade – but now they’re only worth 60%.
And if you are going to make them worth less, then you have to restructure them to reflect that. This means making them a bit smaller – though not too much smaller, because you want them to keep working and learning, and if the exams get too small then they become quizzes which are a whole different animal when it comes to pedagogy and assessment.
And if you make them smaller, then you can have more of them to balance that out. So now I have four exams, each worth 15% of the grade – half of what each exam was worth before.
And if you have more exams, then you have to rejigger your break points. Each exam covers one unit, and each unit is a coherent story. Where do you draw the lines? The new line was fairly easy to figure out, actually. I’ve long been unhappy with the post-WWII unit, since it seems to me that there ought to be a break point around 1970 – the quarter-century after WWII is a very different place from the half-century since. But that means taking the 50s out of the middle unit and that squashes that unit of the class, so I have to move that break point back from 1917 to 1900. It makes that unit a different story, having it run from 1900 to 1945 instead of 1917 to 1960, but no less a coherent one. It also makes the first unit (1865 to 1900) a different story as well. So everything has to be refocused as well as rejiggered.
So I start with one decision (“You know, the discussions would work a whole lot better online – I should try that!”) and everything else just cascades down from there.
All of life is pulling at loose threads and then wondering what happened to your sweater and why there is a pile of yarn on the floor and where did that breeze suddenly come from, anyway.
I have always felt that WWI and WWII should be taught as a single war - hence a single unit in a history class, so I like your new break point. I mean, I see the point of breaking at 1917 - much of an old order passed away in the Great War, but it was breaking anyway, and if the Great War hadn't done it, something like a repeat of 1848 would have.
But too much emphasis on WWII in the US is not accurate history. The French and the Germans see things differently - they have always seen it as one conflict. But I saw a quote from American GIs after the war, who said that they came back and suddenly their problem was that they could not get a drink, and then suddenly they had no money. The 20s erased the Great War from our memory, to our detriment when the Great War Part II started.
I'm reading The Myth of the Great War right now, which is a very well-researched piece of English-language revisionism on the direction of the War in 1914 and 1915. I think that it, and Flying in Flanders (how can you not love the memoirs of someone who once landed on top of an observation balloon when his engine quit?)
Weird, Blogger ate part of my comment. This part:
I think that it, and Flying in Flanders
OK, it really doesn't like that code.
I think that it and "A Peace to End All Peace" are my favorite books about the war, plus the Willy Coppens memoir.
I actually have The Myth of the Great War but haven't gotten around to reading it yet. I'll have to move it up the list. I went on a WWI reading jag about fourteen months ago, and so far my favorite WWI book remains Paul Fussell's The Great War and Modern Memory, mostly, it must be admitted, because of his writing style.
I think you can profitably look at WWI and WWII as The Second Thirty Years War, and I have always kept them as part of the same unit in my Western Civ classes. But that is a very European perspective.
From the American perspective, they were two very different wars.
Their root causes were different to Americans. Their conduct was different. Their effects were different. And most importantly, their domestic contexts - before, during and after - were different in a way that was much more meaningful to Americans than similar differences were to Europeans. The link between WWI and WWII is just a lot sturdier on the eastern side of the Atlantic than it is on the western side.
WWI never really gets erased from American memory (and I would argue that to the extent that it does, the Depression - which hit the US harder than anywhere else - is more effective at that than the 20s). But when WWII rolled around Americans thought "again!" while Europeans thought "still!" And those are not the same thoughts.
Apparently I cannot spell this morning. Oh well.
The "not being able to get a drink" quote is from The Myth of the Great War. And by 20s, I include and especially mean 1929. :p
I agree about the American / European split in worldview, but I would submit that the European view is the correct one, and the American psychological orientation will be a lot less important to historians 200 years from now, as the orientation from WWI was simply a failure to learn from or grasp the situation, a state of mind that was rectified by WWII. We were mice, the Europeans were monkeys. *
*This is an inside joke to my industry. Animal techs bitch and moan when monkeys are specified as a test species. The first time you draw blood from a mouse, it looks surprised. The second time you draw blood, it looks surprised. The first time you draw from a monkey, it looks surprised. The second time you draw from a monkey, it bites you, or more accurately, it bites you before you even get the chance to draw, after having flung shit at you from its cage the moment you walk into the room.
One also can't forget that the American experience of the Great War was much shorter than Europe's, and the Spanish Flu epidemic immediately after had almost as great a demographic impact on the US as did the War, which also helped obscure the War in the public's memory.
Correct one for whom? I would argue that your assertion that one or the other view of the wars is “the correct one” is meaningless.
For Europeans, who lived amid the direct consequences of the wars, the correct view is that it was one long slog. For Americans, for whom the wars were something outside that imposed itself and then went away, the correct view is that the two conflicts were separate. That these views conflict is not relevant.
Since we agree on the European perspective, I will not go into that further. But consider this from the American perspective. How is it wrong to say that, in regards to the US, these were two separate conflicts? We approached them as such, their causes (for the US) were such, their conduct was such, their contexts were such, and their consequences were such. Europeans had always been at war as far as the US was concerned (remember, WWI was originally known as “The Third Balkan War”) – why should the US connect these two wars in particular?
Yes, a European-centered argument could answer that, but such an argument would hold no water here. Thus for a US history class, it is legitimate to divide them into separate units if one chooses to do so. Making that division in a European history class would be far more problematic.
WWI meant very different things to the US and Europe. For the US it was a coming out party onto the world stage. For Europe it was the opening act of a decades-long suicide. WWII also meant very different things to both parties. To say that one perspective is “correct” and – by inference – the other is “incorrect” strikes me as an unproductive way to examine the issue.
I think of history in terms of modeling. "Correct" means the correct set of inputs, assumptions, and weights that will get you to correctly predict the future in a given model. Something along the lines of Asimov's psychohistory, but a bit more nuanced. I believe that some time in the future (perhaps 200 years is a bit optimistic) we will be able to more accurately predict human behavior on a large scale, and "correct" will have more meaning than it does now.
But even here and now, I mean by "correct" a worldview that allows one to more accurately anticipate likely future events. Even with our current very poor predictive ability, "correct" has meaning in terms of policy analytics. We do this in business all the time - part of what I used to do was attempt to predict aggregate consumer behavior, and different people, different firms, would come away from economic events with different views of what went on. Sometimes, it was a religious discussion, but often later events proved that there was a correct, or at least a "more correct" view of the first event.
The American predictive assumptions after WWI were clearly wrong, and they led us to poor policy decisions at Versailles (though Wilson's illness and vanity also played a part) and after, which I think Keynes rather presciently laid out in "The Economic Consequences of the Peace". Seeing the conflict as bound to continue would have led to better policy on our part. Though the Depression would have thrown a wrench in whatever we came up with even if we'd been clearer thinkers. However, if we hadn't had to engage in the revolving-door loan system to Germany in the first place, the Depression would not have wrecked that, and post-Weimar Germany would have looked quite a bit different.
Ah. So your answer to my question is "Correct for 21st (or 23rd) century policy makers." Fair enough. But that is a profoundly ahistorical perspective.
History is not a predictive science and never will be, Isaac Asimov and/or behavioral psychology notwithstanding. More importantly, it is not designed to be. Historians do not seek to predict the future. Historians seek to understand the past.
What I want to know about those wars is not how they can help me predict events in the future, but why they happened the way they did at the time and going forward, and how that affected other things at the time and going forward. We are time-bound creatures, historians, and our goal is to understand the past in a way that makes sense to us and would have also made sense to those who lived through it.
Also, since most of what happened at Versailles can be attributed to the British and French (Wilson gave away most of his 14 Points on the theory that he could use the League of Nations to win them back later, and then the US never ratified Versailles or joined the League, plus the whole reparations issue was a European initiative over the strenuous objections of the US) I'm not sure how that can be laid at the feet of the US.
The bottom line is that you intend to use history for purposes other than history. There is nothing wrong with that, but it isn't history and that needs to be recognized.
History is not a predictive science and never will be, Isaac Asimov and/or behavioral psychology notwithstanding.
There I disagree with you, though as you define the field of History, I would have to say you are correct. Is that the prevailing professional view?
Maybe it's because we spend so damn much money and feel the need to justify that, but science and engineering types are always talking about the practical implications of our work, even when to claim there are such implications is to stretch the truth into a monomolecular string.
And perhaps because they were teaching at an Engineering school, where "what good is this shit?" is bluntly asked of every humanities prof whose class is taking up time that could be devoted to a *really* fun elective (i.e. one that involves math and/or explosions :D), my undergrad History profs were also all about the policy applications of History. As were the Founding Fathers, but then, none of them considered themselves Historians as we understand the term, I guess.
But speaking as a former applied psychologist (which is what Marketers are), I believe that human behavior is predictable in the aggregate if you can describe enough variables and put the proper weights on them. I believe that History will become a semi-quantitative science similar to evolutionary biology at some point in the future. Maybe that part will split off from traditional History as you define it into another discipline.
Is that the prevailing professional view?
Yes. There are innumerable people on this planet who claim to predict the future. Everywhere you turn someone has yet another vision of what is to come. Find any three who agree and you win a prize. But to my knowledge only historians seek to come to a rigorous understanding of the past - one based on the reality of the past rather than ideological wish fulfillment or shifting memory, one with a recognized methodology and a standard of proof (and, thus, disproof). It's our niche.
Thus I summarize an entire semester of graduate study into one paragraph. And he does it without a net! ;)
science and engineering types are always talking about the practical implications of our work
I know - I married into your tribe. Two things about that, though:
1. Historians often seek practical implications. The problem is that with all of recorded history to work from, the analogies come fast, conflicting, and confusing. I remember the First Gulf War, for example - the buildup before Kuwait. Was it a rerun of WWII, wherein we should learn not to be Neville Chamberlain again and stand up against aggression? Or was it August 1914 again, as we rushed pell-mell into war for no good reason? Or was it something else? Or was it unique?
That's the problem with history as a predictive science - all you get is unique case studies and analogies. You can't run repeat experiments. All you can do is argue your case.
The Founding Fathers definitely did. They looked at Rome, at Athens, at the English Commonwealth, at the Glorious Revolution, at any number of things, and they tried to draw lessons from them. They used history to make policy, which is what you urge. And again, this is perfectly reasonable, even laudable. But using history is not the same as doing history. I am your raw material.
2. From personal experience, having both married a scientist and taught classes with a different scientist (nice guy, not really my gender of interest), I can say with confidence that the mentality of the scientist and the historian do not overlap much.
speaking as a former applied psychologist (which is what Marketers are)
I have a BA in psychology, with an emphasis on social psych - the behavior of individuals within groups - so I believe I can meet you as an equal here.
The question is can you describe enough variables? How many is enough? You have to account for culture. You have to account for individual psychology (something even psychologists disagree upon), group psychology (likewise), and ideology. You have to account for weather, resources, accidents, economics, and any number of outside forces as well. And you have to do it for each individual, each group, each culture that meets on the playing field of each event. And you have to do it for each unique circumstance, with no repeat trials.
At some point, this begins to strike me as unlikely.
If marketing were such a science, there wouldn't be remainder shelves. It's an art, informed by psychology, history, and economics, but not a science.
There are historians who approach things quantitatively - I was trained to do that at the Master's level. It's a useful methodology, especially when examining groups that were not very articulate (the working poor of 18th century England, for example, who were mentioned in few contemporary documents and wrote even fewer themselves). You can understand the past that way. There is a divide between quantitative historians (often called "social historians") and more traditional historians - a divide that, having been on both sides, strikes me as counterproductive. But such is the discipline.
My guess is we will have to agree to disagree on this one.