OpenEdTools Symposium: Translating msgs for normals

At the end of last week, I had the pleasure of spending two days in meetings with directors, consultants, and designers at an open education tools symposium. A more detailed transcript can be found by searching Twitter for the hashtag #OpenEdTools, or by reviewing the documentation authored by staff at Hypothes.is and other attendees. A big thanks to The Hewlett Foundation, the Moore Foundation, and Hypothes.is for organizing the event, whose primary goal was to sync up the larger edtech-supported OER efforts.

Context

Before jumping into the meat of this write-up, it’s important to consider some context around my own involvement in the event.

1) I am not the producer of any OER tools. Yes, I’ve authored a fair bit of OER content, and have strung together multiple OER tools to serve my own purposes, but I had a different stake in this than many of the other attendees. I was there as a “power user” of sorts, and fully intended to exploit what I learned at the meeting to serve my newly-focused audience: teachers and learners in higher education.

And 2) I’m very fortunate to have this narrowed audience to serve, as opposed to the OER providers and organizations at the meetings. When I was with Creative Commons (CC), strategies often became blurry and it was easy to default to a position of “let’s serve everyone.” It would not be a stretch to think most of the other attendees were dealing with mission statements and strategies that are becoming blurred as well, and so I considered myself fortunate to have clear goals and a more well-defined audience to serve.

Also worth mentioning is that there was representation from several “big” OER providers/initiatives at the meetings, including Lumen Learning, OER Commons, OpenStax, NROC, and the K12 OER Collaborative. A full list of attendees can be found here.

You also may notice the title of this post is a little odd. What are “normals,” you ask? In short, “normals” was a term used throughout the meetings to refer to the many folks that open education tools are meant to serve. “Normals” are less often makers of tools themselves, more often the creators of OER content, and thus rely on existing technology to work with OER. On some level, I actually believe we are all “normals” and that distilling the conversations from the meetings will serve us all. Without “normals” there would be no reason for us to be creating tools for open education, anyway.

And now the fun stuff: Takeaways.

Field Notebook and a pencil

Image by Helloquence on unsplash.com / CC0

1) Accessibility and Inclusive Design remain an afterthought

Jutta Treviranus and Jess Mitchell from the Inclusive Design Research Centre (IDRC) in Toronto were on hand, ensuring that the conversations considered the needs of all learners. IDRC has created many resources over the years, such as FLOE, a tool that makes it simple to enlarge, highlight, and otherwise manipulate Web-based content so it is easier for all individuals to consume. Still, I found it hard to look past the fact that tools for inclusivity like those put out by IDRC have not been implemented across the board. As I mentioned above, many of the tool and platform providers at the meetings have broad, expansive audiences that may or may not have inclusive design atop their own list of needs. But still, I was surprised to learn that accessibility features of open education tools are not consistently implemented. Folks at the meeting were sympathetic to the messages brought forth by Jutta and Jess, but there (unfortunately) were more fundamental infrastructure and interoperability issues that took precedence during these two days.

2) Underneath it all, HTML still rules

A topic that emerged at the meeting, and one I appreciated more than most, was interoperability between platforms. How can the OER providers align their underlying technology to make passing content between them easier? What can be done for content re-users who prefer to move OER out of a platform and into their own learning management system? How can the ingestion process for getting content into platforms be made easier? What’s causing the friction?

At the core, the big OER projects all use content schemas that are similar, yet different enough to prevent easy migration of content. As a case in point, OER content released by OpenStax is being actively migrated into Pressbooks format, where it can be more easily adapted and localized. From my understanding, the migration of this content was done with the permissions granted by the open license, but it involves extensive hands-on checks that cannot be automated. Would a common schema between the major providers help ease the pain of migration? In a breakout session devoted to this topic, it eventually surfaced that content in the major platforms is primarily held in HTML, with an XML-based version (or wrapper) that is particular to the individual platform.

Without getting too lost in the technical jargon, let’s imagine for a minute that you find a wonderful piece of OER at OER Commons. But your institution provides professional development and support for use of Lumen Learning’s Pressbooks variant called Candela. You take a copy of the content exported from OER Commons and attempt to load it into Candela. Does it know what it’s looking at, or does it need lots of help and massaging to simply look the way it did in OER Commons? Because there is no common schema for describing OER content that may pass between platforms, these types of migrations always require intervention, and this intervention can be beyond the technical knowledge of “normals”. I’ll admit that there was no simple solution found for this issue, but Kathi Fletcher from OpenStax expressed an interest in convening the major OER content providers to scope and prototype what such a schema could look like. It was highly encouraging to see this interest in making OER content interoperable across systems, because right now it is not.
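To make that concrete, here is a rough sketch (my own, in Python, not anything drafted at the meeting) of the kind of platform-neutral record a common schema might boil down to: the HTML payload plus the handful of metadata fields that would need to travel with it. Every field name here is invented for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class OERItem:
    """Hypothetical platform-neutral wrapper around an HTML payload.

    None of these fields come from an existing spec; they simply illustrate
    the metadata that would need to travel with the HTML for an
    OER Commons -> Candela (or OpenStax -> Pressbooks) move to be automatic.
    """
    title: str
    license: str                      # e.g. "CC BY 4.0"
    source_url: str                   # canonical home of this copy
    authors: list[str] = field(default_factory=list)
    parent_url: str | None = None     # the OER this was derived from, if any
    html_body: str = ""               # the actual content, as plain HTML

# A re-user's import script could then target one shape instead of
# one export format per platform.
item = OERItem(
    title="Waves and Tides",
    license="CC BY 4.0",
    source_url="https://www.oercommons.org/example-item",  # placeholder URL
    authors=["Jane Instructor"],
    html_body="<section><h1>Waves and Tides</h1>...</section>",
)
print(item.title, item.license)
```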

3) Version control ain’t much of a thing, yet

Of all the topics discussed at the meetings, version control was at the top of my own list. One of the incentives I can offer faculty who are considering adopting or creating OER is that they will become part of the OER ecosystem, benefiting from the collective adaptations, updates, and potential improvements of content they share into the system. For example, I’ve considered the idea of the University of Hawaii system having a core set of OER textbooks that are course-specific. At the beginning of the semester, faculty can take a fresh copy of the textbook and over the course of the semester make tweaks that improve the applicability, accuracy, and overall fit of the content for their teaching style and their learners. At the end of the term, there might be several copies of the textbook, each of which has its own unique set of changes that might be worthy of being rolled into the “master” copy of the textbook, providing an improved starting point for all teachers going forward.

But there’s no toolchain or mechanism in place to allow this.
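If that workflow sounds familiar, it should: it is essentially the fork-and-merge model software developers already use. The toy sketch below (purely illustrative, not any OER platform’s API) shows the handful of operations such a toolchain would need to support.

```python
import copy
import datetime

class Textbook:
    """Toy model of the fork-and-merge workflow described above.

    This is not an existing OER platform API; it only illustrates the
    operations a real toolchain would need to support.
    """
    def __init__(self, title, html):
        self.title = title
        self.html = html
        self.changelog = []   # (date, author, note) tuples

    def fork(self, instructor):
        """Each instructor starts the term with a fresh copy of the master."""
        child = copy.deepcopy(self)
        child.changelog.append((datetime.date.today(), instructor, "forked from master"))
        return child

    def edit(self, instructor, new_html, note):
        """Tweaks made during the semester, recorded so they can be reviewed later."""
        self.html = new_html
        self.changelog.append((datetime.date.today(), instructor, note))

master = Textbook("Intro Statistics (OER)", "<h1>Chapter 1</h1>...")
section_copy = master.fork("Instructor A")
section_copy.edit("Instructor A", "<h1>Chapter 1, local examples</h1>...",
                  "swapped in Hawaii-specific examples")

# At the end of the term, a maintainer reviews each copy's changelog and
# decides which edits get rolled back into the master copy.
for when, who, note in section_copy.changelog:
    print(when, who, note)
```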

The closest thing I found at the meetings was a parent-child marking system in OER Commons, basically allowing users to see if content they are viewing is a “child” (derivative) of another piece of OER. In a healthy OER platform, you’ll see lots of copying and forking of content, each copy a little (or a lot) different from its parent. After speaking with Lisa Petrides during a breakout session focused on version control, it seems that OER Commons can probably provide some of the version control functionality described earlier. But what about content that passes between systems, or has been exported and has lived outside any OER platform for some time? How do we easily signal that our OER has changes that can benefit others who are using similar content?

OER Commons version history

During the discussion, Mike Caulfield did mention work done with the Federated Wiki project several years ago, where metadata on a piece of OER content describes changes made since the copy was made, and allows users to “roll back” changes to earlier versions. But this metadata isn’t listened for or understood by any other OER platform, and so this functionality is lost the moment the content escapes into the ether. An undesirable yet functional solution was brought up: leave a code snippet hidden in OER content that can track the content wherever it goes. But since everyone is already being tracked more than they’d prefer, this idea sank quickly. Another idea that surfaced was to indicate the parent OER content using HTML “rel” tags, but this is a hack and the “rel” tag was never intended to support the kind of functionality discussed at the meetings. I do think it’s worth paying attention to this topic, as the OER schema conversation ended up blending with the version control topic. I hope a working group is formed to carry this interoperability work forward.
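For the curious, the “rel” idea amounts to something like the sketch below: a link tucked into the exported HTML that points back at the parent OER, which an importing platform could look for. The rel value here is made up (no such value is registered anywhere), which is exactly why this is a hack rather than a real interoperability answer; I’m using Python’s standard-library HTML parser just to show the round trip.

```python
from html.parser import HTMLParser

# Hypothetical exported OER page with a provenance hint tucked into <head>.
# "derivedfrom" is not a registered rel value -- part of why this approach
# is a hack rather than a real solution.
exported_html = """
<html><head>
  <title>Waves and Tides (adapted)</title>
  <link rel="derivedfrom" href="https://www.oercommons.org/example-parent">
</head><body><h1>Waves and Tides</h1></body></html>
"""

class ProvenanceFinder(HTMLParser):
    """Collects the href of any <link> whose rel hints at a parent OER."""
    def __init__(self):
        super().__init__()
        self.parents = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "derivedfrom":
            self.parents.append(attrs.get("href"))

finder = ProvenanceFinder()
finder.feed(exported_html)
print(finder.parents)  # ['https://www.oercommons.org/example-parent']
```

An importing platform that didn’t know to look for that link would simply ignore it, which is the same fate the Federated Wiki metadata met once content left its home system.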

4) OER assessments and outcomes alignment is not easy

Though there were many specific issues and topics about OER brought up at the meeting, none surprised me as much as the discussion around assessment banks. Assessment banks are essentially repositories where formative and summative assessment items (multiple choice questions, prompts, etc.) are stored. During the breakout session specific to the topic, it became clear that there is no useful specification governing how OER providers store and share assessment items. Sure, you can use specifications like QTI and LTI to format assessment items for delivery in a content platform, but each provider uses its own methods for storing and managing them.

A few themes emerged in this specific breakout session:

  1. Assessment banks are essential to the adoption of OER, since many faculty will be resistant to adopting OER if it’s not paired with an assessment system that can automate grading. Proprietary content producers have stepped up to offer a homework and testing solution that works in tandem with their content, and faculty who have grown to appreciate these systems will be less likely to drop the proprietary content or textbook if an OER tool to replace it is not available.
  2. Alignment with standards, competencies, and outcomes is tricky. In many cases, embedded assessments (often called formative assessments) rely on the context created by the content they live inside. When an OER is revised, it’s difficult to maintain the relevance and applicability of assessments when the content itself has changed. OER providers like Lumen Learning and NROC have aligned OER content to outcomes, but this has been done in a way that can (too) easily break when the content is removed from their system or revised without also revising the outcomes and assessment items. If there were a centralized clearing house for open assessments, the assessment banks of individual OER providers could be merged and shared between systems (a rough sketch of what a shared item might look like follows this list). There are many details that would need to be hashed out, but there was support for the idea. We’ll see.
  3. The question of how learners are tested also bubbled up. In an age where long-used assessment banks from faculty and proprietary publishers can be found with a Google search, should we be using these types of assessment anyway? Shouldn’t learners be offered varied opportunities to demonstrate their newly-acquired skills, knowledge, and attitudes? Yes, but the kind of assessments we’d prefer to give learners do not scale well, and cannot be as easily automated as multiple choice tests. This is an example of how concessions have been made to allow technology to serve more learners, even at the expense of authenticity in assessment.
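Since there is no shared specification yet, here is a rough sketch (again mine, in Python, with invented field names) of the kind of platform-neutral record a shared assessment bank would need: the item itself, its license, and the outcome it is aligned to, so the alignment travels with the item instead of living only inside whichever platform happens to host the surrounding content.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class OpenAssessmentItem:
    """Hypothetical record for a shared, platform-neutral assessment bank.

    Nothing here comes from QTI or any provider's internal format; the point
    is that the outcome alignment is carried by the item itself rather than
    by the platform hosting the surrounding content.
    """
    prompt: str
    choices: list[str]
    correct_index: int
    outcome_id: str                      # e.g. a course or competency identifier
    license: str = "CC BY 4.0"
    source_oer_url: str | None = None    # the content this item was written against

item = OpenAssessmentItem(
    prompt="Which force is primarily responsible for ocean tides?",
    choices=["Wind", "Gravity of the moon and sun", "Plate tectonics", "Salinity"],
    correct_index=1,
    outcome_id="OCN-101-LO3",            # made-up outcome identifier
    source_oer_url="https://www.oercommons.org/example-item",
)

def grade(item: OpenAssessmentItem, response_index: int) -> bool:
    """Automated grading is the piece faculty are reluctant to give up."""
    return response_index == item.correct_index

print(grade(item, 1))  # True
```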
Hand holding a compass

Image by Heidi Sandstrom on unsplash.com / CC0

There surely were other important points raised and ideas hashed out at the #OpenEdTools meetings, but the above are the items that stuck with me. As we head deeper into the Spring semester, I will continue to work with faculty who want to adopt or create OER, and will attempt to share our successes and struggles with the folks who dictate the direction OER platforms move in. It goes without saying that at the end of the day, I care most about the end-users (or “normals”) of open education tools. But providing important feedback to OER platforms, and being involved as they recalibrate their compasses and rewrite their roadmaps, is extremely important if we want OER to strengthen its hold. The University of Hawaii system has a commitment to provide students with the best education possible, and OER needs to be central to this mission.

If you have thoughts about any of the above, feel free to leave a comment on this blog post or tweet using the #OpenEdTools hashtag to become part of the conversation.

Posted by Billy Meinke in Conference, OER