Does Investment in Online Correlate to Enrollment Growth in US Higher Ed?

TL;DR: Yes, as the proportion of HE students taking online courses has grown, so has overall enrollment.

In theory, marketing online courses and programs offers enrollment growth opportunities to higher education institutions (HEIs). It's no coincidence that the largest US HEIs are also those with massive online programs, and enrollment growth is the draw for many academic programs to develop and launch new online courses (especially through OPMs).

We also know that, for many institutions, students enrolling in online courses are often on-campus students who are leveraging online offerings to avoid course bottlenecks, accelerate time to completion, or just provide flexibility in their schedules.

NCES data doesn’t provide the exact details we’d want to investigate whether investing in online (aka distance) education results in consistent enrollment growth across HEI sectors. However, we have enough data to do some basic probing:

For all degree-granting HEIs with 1,000 or more students, I used the last five years of each institution’s total student enrollments to derive its compounded annual growth rate (CAGR). After tossing outliers, I correlated this with several NCES indicators that suggest an institution’s investment in online education:

  • Students enrolled in one or more online courses (latest year)
  • Students enrolled in all online courses (latest year)
  • CAGR of students enrolled in one or more online courses (last five years)

Across all sectors, the strongest correlation was between 5-year enrollment growth and the 5-year growth of online as a proportion of enrollments: 0.61 for four-year (and above) institutions and 0.70 for two-year institutions.

In other words: as the proportion of students taking online courses has grown across HEIs, so has overall enrollment.

There are also some notable correlations for some sectors between…

  • Enrollment growth and total enrollment (large institutions tend to have grown faster)
  • Enrollment growth and fully online enrollments, both as a count and a % (HEIs with large fully online populations tend to have grown faster).

Correlations suggest relationships between variables for a population, but they don’t tell us much about individual institutions. So here are a couple of visualizations of US HEIs that do a better job of illustrating how different institutions map to % of online enrollments and enrollment growth:

And finally, here’s a table of correlations for each sector:

New DOJ accessibility rule: Ed techs must conform to WCAG 2.1 AA

It’s been a busy summer, but way back in June the Civil Rights Division of the US DOJ released a final rule revision to the Americans with Disabilities Act. This new rule establishes WCAG 2.1 AA as the new minimum web content accessibility standard for all public entities.

TL;DR: Ed techs who serve US “public entities” must conform to WCAG 2.1 AA by April 24, 2026*, according to a recent DOJ rule revision of ADA law.

FAQ

Disclaimer: I’m not a lawyer, so please consult with your own legal counsel before making any business decisions based on the following information and opinions.

What changed and why?

Public entities have been required to ensure the accessibility of their web content and apps for a long time. The DOJ now says that in order to comply with ADA law public entities must ensure their web content and apps conform to a specific, well-defined accessibility standard: WCAG 2.1 AA.

A little background: In the US, three interrelated federal laws protect and support the rights of individuals with disabilities: the ADA, the Rehabilitation Act (Sections 504/508), and IDEA. Roughly speaking, the Rehabilitation Act requires that any federally funded organization provide “reasonable accommodations” for individuals with disabilities, while the ADA requires that state and local governments and organizations that serve the public provide equal access to individuals with disabilities.

In the internet era, these have been broadly interpreted as requiring that any digital content or applications are equally accessible to individuals with disabilities.

And while the WCAG standard has long been the de facto accessibility standard for all web content and apps, it has never been the official standard — until now.

This shouldn’t be a big change for ed techs. Educational institutions have long relied on WCAG as the accessibility standard in evaluating ed tech purchases, so this mostly formalizes what was already a well-established practice.

When did this happen?

The final rule is in effect as of June 2024, following a Notice of Proposed Rulemaking that was published back in August 2023.

Who does this ruling apply to?

Any “public entity” (e.g. public schools and districts, Title IV colleges and universities), as well as the vendors that provide content or applications to those public entities.

Are you sure this applies to ed tech vendors, too?

Yes. Subpart H clarifies that “a public entity’s ‘website’ is intended to include not only the websites hosted by the public entity, but also websites operated on behalf of a public entity by a third party”, and that “the general requirements for web content and mobile app accessibility apply when the public entity provides or makes available web content or mobile apps directly or through contractual, licensing, or other arrangements.” See The Department of Justice 28 CFR Part 35, p 86.

This includes (very specifically) digital content, digital textbooks, etc. (see p 214)

When do we have to be WCAG 2.1 AA conformant?

Deadlines are in place for public entities, so it depends on who your customers are:

  • April 24, 2026 for public entities with a population of 50,000 or more
  • April 24, 2027 for public entities with a population under 50,000

What is WCAG 2.1?

The W3C’s Web Content Accessibility Guidelines (WCAG) is the internationally recognized technical standard to ensure that digital content and applications are consistently accessible to people with disabilities.

WCAG is used to benchmark accessibility according to specific “Success Criteria” meant to guide developers and evaluators toward ensuring web content is Perceivable, Operable, Understandable, and Robust for all users.

2.1 simply refers to a version of WCAG, which the W3C recommended in June 2018.

What’s this AA stuff?

A web site or app can meet WCAG conformance at three different levels: A (the lowest), AA, or AAA (the highest). Anything below A is simply non-conformant.

The DOJ is saying that Level AA conformance is the minimum requirement. This corresponds with the common practice amongst ed tech buyers to require AA conformance of their vendors at a minimum.

In order to be Level AA conformant, an ed tech solution must meet every Level A and Level AA success criterion.

For example, when providing pre-recorded video content, Level A requires that accessible alternatives be provided (e.g. a transcript or captions), but Level AA also specifically requires that an audio description of the video be made available (Success Criterion 1.2.5).

How do we become WCAG 2.1 AA conformant?

The short answer for ed techs is…

  1. Evaluate your product against the WCAG 2.1 guidelines (internally or externally).
  2. Update your product as needed to achieve AA conformance.
  3. Publish the WCAG conformance results via a VPAT on your web site.

This all should probably start with a conversation between an ed tech’s CEO, CPO, CTO, and legal counsel.
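Parts of step 1 can be automated. As a minimal illustration (not a substitute for dedicated audit tooling or manual testing), here’s a stdlib-only sketch that flags images missing text alternatives, one of the simplest WCAG checks (Success Criterion 1.1.1):

```python
# A minimal sketch of one automatable WCAG check (not a full audit):
# flag <img> tags with no alt attribute (Success Criterion 1.1.1),
# using only the Python standard library.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []  # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "?"))

# Hypothetical page fragment: one conformant image, one failing
page = '<p><img src="logo.png" alt="Company logo"><img src="chart.png"></p>'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # images that fail the check
```

Real WCAG 2.1 AA evaluation covers dozens of success criteria, many of which (e.g. keyboard operability, meaningful focus order) can’t be fully verified without a human in the loop.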

Isn’t WCAG a moving target? What happens when WCAG 2.2 comes out?

WCAG will continue to evolve. In fact, right now 2.2 is the latest standard, and WCAG 3 is a working draft. Thankfully, the differences between versions tend to be pretty incremental.

Regardless, as long as WCAG remains the popular standard, ed techs should expect to do what’s necessary to conform with newer versions of WCAG over time — whether the DOJ explicitly updates its rules to include newer versions or not.

Could this new rule be made irrelevant without Chevron?

Maybe, but don’t count on it.

It does seem like the US Supreme Court’s June 2024 Loper Bright Enterprises decision to overturn the 1984 “Chevron deference” ruling could potentially impact this revision of ADA rules, especially if a public educational institution or ed tech provider were to successfully sue. Chevron deference essentially said that when Congress passes a law that lacks specificity (in this case, the ADA), courts should defer to the responsible federal agency’s interpretation (in this case, the DOJ’s) when enforcing the law. Under the new ruling that deference is gone, and laws must be followed as written and/or as interpreted by the courts.

That said, I think it’s unlikely that this latest ADA rule will be challenged. If it were to be obviated, we might expect Congress to simply pass new legislation formalizing a WCAG requirement as part of the ADA law.

And even if that doesn’t happen, ed techs should understand that supporting web accessibility is the right thing to do, and ed tech customers will continue to expect or require conformance with WCAG.

Want to Learn More?

First, check out Accessible.org’s fact sheet. It’s more in-depth than this FAQ.

Second, Rarebird is working on a new introductory course on ed tech accessibility for product and engineering leaders. Let me know if you’re interested in the upcoming course — or in getting advice on achieving product accessibility.

Ed Techs @ the 2024 AIR Show Skew Young

Surprise! Ed techs sponsoring the 2024 AIR Show: AI Revolution tend to be very young companies.

Of the 80+ ed techs that sponsored ASU+GSV’s inaugural AI event…

  • Average year founded = 2017
  • Median year founded = 2019

Here’s a histogram:

Histogram showing the count of ed techs sponsoring the AIR Show by age

In my last post, I noted that ed tech sponsors at the AIR Show tilted heavily toward Teaching & Learning Tools. Here’s a chart showing the average age of ed techs by product category:

Bar chart showing all ed tech product categories sponsoring at the 2024 AIR Show, sorted by average age and indicating the total number of companies.

This isn’t a large enough sample of ed techs to make any definitive statements about the state of AI in ed tech. However, this chart does reiterate that not only are a few product categories over-represented, they skew young as well:

  • AI Student Assistance: 10 ed techs; avg. age = 2.4 years
  • Learning Design & Dev: 9 ed techs; avg. age = 3.6 years
  • AI Teacher Assistance: 8 ed techs; avg. age = 2.6 years

Foretelling Some Hot Competition

Most of these ed techs are focused on K-12, though some address HE needs as well. And while there is surely some differentiation amongst these ed tech products and companies, I think this view is sufficient to suggest that competition in the more obvious AI-based ed tech product categories is really going to heat up in the next couple of years.

Those AI-based ed tech startups who survive will likely be those who prove product-market fit the fastest. They need to solve real, painful problems for students, teachers, or administrators in a way that proves their ROI, and at a price that fits the realities of education budgets.

Easier said than done.

Looking at these pretty noisy, emergent categories, we can expect that the AI ed techs that survive will be those that differentiate their product and message, and optimize their GTM to get wins quickly and snowball those wins into ARR. Succeeding here won’t immediately knock competitors out of the running, however. Only when AI ed techs can prove a positive impact on the problem they’re helping customers solve, and thereby renew contracts, should they start to feel confident in the long-term success of their business.

Ed Techs @ the 2024 AIR Show Tilt Toward Teaching & Learning

Earlier this week, ASU+GSV hosted the inaugural AIR Show: AI Revolution just ahead of its own annual summit. Over 80 ed tech companies sponsored the AIR Show, and most showcased their new AI-based products or features during the event.

Touring the exhibit hall, I couldn’t help but see some patterns, so when I got back from ASU+GSV I set to categorizing all of the ed tech sponsors by product type and primary market served:

A few notes about this analysis:

  • Box size represents number of ed techs
  • Color represents target education market
  • 1 category per ed tech (though some span multiple categories)
  • Relative usage of AI is not represented; some ed techs are totally reliant on AI while others use AI to complement pre-existing products
  • Excludes non-product companies as well as horizontal (not primarily edu) companies

You can download / share the original here: https://rarebird.tech/wp-content/uploads/2024/04/air-show-2024_rarebird.png

Over- and Under-Represented Ed Tech Categories

The vast majority (81.4%) of ed tech sponsors at the 2024 AIR Show are in the Teaching and Learning cluster — tools used primarily by teachers or students to facilitate learning.

Within that cluster, 25% are what I call AI Assistance tools for teachers or students. These are primarily based on generative AI and aim to help teachers and students in a variety of ways. For example…

  • Providing information
  • Synthesizing content
  • Assisting in planning / scheduling
  • Generating assessment prompts, items, rubrics, etc
  • Assessing student work
  • AI “Tutoring” or “coaching”

And so on. Magic School AI, for example, appears to be a toolbox of AI assistance prompts and interfaces for K-12 teachers.

Note that the AI Teaching Assistance category does not include AI tools specifically for learning content design or development. Tools that were directly focused on helping teachers plan lessons, design modules, or generate content felt quite distinct and appear as the Learning Design / Dev category.

Speaking of which, Learning Design / Dev was one of the largest categories, with nine ed techs represented at the AIR Show (>10% of the overall exhibitors). On the one hand, this should be no surprise, as generating content and organizing or synthesizing information is what the current wave of AI seems best at.

On the other hand, this is just one example of how noisy ed tech AI is right now, and how competitive it will be for ed techs — at least in the near-term.

Because the AIR Show’s sponsors leaned heavily toward K-12 markets, it’s no surprise that there were a lot of domain-specific tools for K-12, like literacy, STEM, and languages. Ed techs in these categories are primarily content-based, but because they often appear as specialized products or companies, I break them out from the more general Digital Content & Curriculum category.

I was surprised to see only 2 ed techs in the Originality Checking (aka Academic Integrity) category: Turnitin and GPTZero. I get that policing students sucks, but figuring out academic integrity in the age of AI is critical for education (and society writ large). No matter what your personal opinions of these products are, their proposed solutions need to be part of the conversation.

Who Else Wasn’t There?

If you look at the list of AIR Show’s sponsors, there are many other notable absences from the roster. I know some of those ed techs did participate at the larger ASU+GSV Summit later in the week. But why weren’t they also at AIR Show?

Perhaps it was the added cost. Perhaps they had nothing (yet) to contribute to the AI conversation. Or perhaps they concluded that the business ROI for them would be low (read: lead gen).

Expanded / Alternate Product Categories

Most of the product categories here are ones that you may already be familiar with, and many have proven their value in education even before the rise of generative AI. However, coming out of ASU+GSV I have altered some of the product categories that I use in Rarebird’s consulting services and market maps:

  • AI Teaching Assistance (expanded from “AI Grading”)
  • AI Student Assistance (expanded from “AI Tutor”)
  • Answers / Solutions (broken out from “Study Support”)

This last category includes products that are primarily focused on providing learners with immediate answers or solutions to questions. It is actually not entirely new. For example, it could include long-running online platforms like Chegg or CourseHero. Prior to the AIR Show I would have classified those as simply “Study Support”.

But as I looked at their products and their online marketing, I think it’s time to call a spade a spade.

Both of the products in this category aren’t shy about proclaiming their purpose: They derive the answers so you don’t have to.

And, as you can see in the screenshot above, this is clearly meant to be answers to assigned class work, including live answers during an online quiz or test.

Both products pay some lip service to helping students “learn” or working with institutions, but I don’t take that seriously, since neither withholds answers until students work through a learning interaction.

Wrap-Up

I’ll admit it: The inaugural ASU+GSV AIR Show was fun.

I remain cautiously optimistic about the impact generative AI can have on education, though I do have concerns about how AI seems to be sucking the air out of the room in terms of both educators’ attention and investors’ interest. But AIR proved to be a good way to do a pulse check on what’s happening amongst AI-driven startups in ed tech.

Assuming AI doesn’t implode this year, I expect ASU+GSV to do this again in 2025, and I expect to attend.

Rarebird Learning Sessions at Utah Tech Week

Utah Tech Week happened this past week, and it was great to participate this year through Rarebird in support of local ed techs. We hosted the only two ed tech-specific sessions (afaik), and the audience turnout suggested a real demand for ed tech knowledge-sharing and community within the state.

Here’s a recap of those sessions in case you missed them:

The 4 Product Non-Negotiables in Ed Tech

Our first session was The 4 Product Non-Negotiables in Ed Tech, presented by industry veterans Karl Lloyd (product management, tech standards), Q Wade Billings (dev ops, security), and me.

We shared info, stories, and guidance so that early-stage ed tech founders and product leaders won’t ever be caught off guard by the 4 main “deal-killers” that districts and colleges will commonly require of vendors.

Karl Lloyd sharing advice on how to leverage partnerships with larger ed techs.

We covered a lot of ground rapidly during this session, and certainly only skimmed the surface on some important aspects, so it was good to see most of the audience stick around well after the allotted time to dig in deeper. There definitely seemed to be enough interest (and lingering questions) to warrant a full 60-90 minute session on each topic.

Thus, we’re now developing in-depth workshops for 3 of the 4 topics, aiming for launch this spring.

Karl and I digging into the non-negotiables with attendees after the session.

Evolving Beyond Founder-Led Sales (Panel)

The second session was a panel, Evolving Beyond Founder-Led Sales, featuring 4 local ed tech founders and CEOs:

The panel tackled the question, How can an ed tech startup excel at – and eventually evolve beyond – founder-led sales?

Sean Traigle (SVP of Sales at Logos) and I co-hosted this session, and I think we had as much fun as the audience did with this thoughtful and lively group.

Panelist Neylan McBaine (Duet) responding to an audience question.

The conversation was far-reaching and represented the unique experiences of each panelist, but here are some takeaways worth pondering:

  • Founder-led sales is crucial for startups, as it not only tests the validity of the product, it develops the skills of the founder and builds relationships that prove invaluable in the long-run.
  • At the same time, because many founders are product or technical people who don’t fancy sales, having a sales pro as a co-founder can be a great advantage 😉
  • Like product validation, founder-led sales must be iterative. The developing methods and lessons learned may become the DNA of the company’s future GTM.
  • Once the startup has proven product-market fit, developed a playbook for founder-led sales, and grown at a rate that outpaces what the founder can respond to, it’s probably time to plan a sales hire.
  • The profile of a successful first sales hire will be heavily dependent on the needs of the business – e.g. it may be a market dev person or a field rep or even a savvy educator who can learn sales.
  • Founders should take their time with their early hire(s) to ensure they have what it takes. Better to continue to do double-duty in sales yourself for a while than wrestle with the costs and repercussions of a bad early hire.
  • A new hire must not only be competent with sales and capable of navigating customers’ organizations, the new hire must carry on the founder’s proven methods. This includes emulating the founder’s evangelism and reflecting their driving passion.
  • Document your process as it develops, and build a bare-bones system. Doesn’t have to be Salesforce, but it should at least be spreadsheets. You’re not building a mature rev ops team at this stage, but you can’t just leave it all in your head, either. A transition to your first sales hire(s) will be so much easier when you have systems and docs in place.
  • It’s easy to sell a product that you believe in. Both product leaders and sales leaders should take note.
  • Founders should never fully step away from sales. Just as product leaders should always be talking to customers, sales is something everyone in the company should care about and practice, even if they’re not building purchase orders.
  • Authenticity reigns. It’s easy to sell a product that you’re passionate about, and customers can smell BS a mile away. This is especially true in education, where ed tech startups will live or die by relationships, trust, and (eventually) referrals.

Those are some of the things we learned. Let’s hear from you:

  1. If you attended either of our UTW sessions, what did you learn?
  2. If you had participated in the founder-led sales session, what would you have asked the panelists?
  3. If you’re in ed tech, what are the top challenges your startup faces in 2024?

Share your thoughts on any of these questions in the comments below.

And if you want to formalize your thoughts on #3, please complete The 2024 Ed Tech Priorities Survey: bit.ly/etsurvey24 – it only takes 7 min!

Huge thanks to everyone who attended, and especially the panelists and presenters who so generously shared their time and expertise!

Visualization of Ed Tech Categories

“Ed Tech” is a loose term. From a school’s perspective, it’s often used to refer to software specific to teaching and learning. From an investor’s perspective, it may refer to any company whose business is focused on serving education.

Rarebird’s definition of “Ed Tech” includes four major categories of offerings:

  • Software — specifically software that enables teachers, students, or staff to accomplish part of their educational role.
  • Content — specifically digital content.
  • Hardware — when that hardware is offered by a company that has specifically targeted education.
  • Services — but only those services that also involve hardware, software, or digital content.

Each of these categories can be sub-divided, e.g. software for teaching and learning, content for assessment, hardware for classrooms, services for admissions and enrollments, etc.

There’s an important trend toward delivering ed tech via platforms and marketplaces. A platform is technology from which new products or innovations can be built. A marketplace is a network for sharing and finding (usually community-created) products or services.

The following diagrams were created to define these categories, using a sample of contemporary ed tech companies to illustrate the potential for overlaps:

A venn-like diagram of 4 overlapping categories of ed techs: hardware, software, content, services, with platforms and marketplaces as sub-categories touching each. This diagram shows examples of contemporary companies for each category and sub-category.
Ed Tech Categories – US K-12 Version


A venn-like diagram of 4 overlapping categories of ed techs: hardware, software, content, services, with platforms and marketplaces as sub-categories touching each. This diagram shows examples of contemporary companies for each category and sub-category.
Ed Tech Categories – US Higher Ed Version

A few notes on these diagrams:

  1. These are not market maps. They each use just a small sample of ed tech companies to illustrate the categories and their overlap.
  2. These do not represent market size. The shapes simply define category boundaries.
  3. These are imperfect for the sake of simplicity. Any 2-dimensional representation struggles to represent all possible overlaps. For example, there are companies at the edges, e.g. Software + Services, that can’t be represented neatly.

You’ll also note that these diagrams maintain a strict definition of “platform”. A platform is not simply…

  • …a broad range of integrated tools and capabilities — that’s a system.
  • …a collection of interrelated products — that’s a suite.

A platform must enable users to create their own products or innovations and share those with others.

One of the purest examples of an ed tech platform today is Learnosity, an assessment platform used by other ed tech companies and even institutions to create quizzing and testing products that are used or sold by others.

There are fuzzier examples, too: Instructure offers a suite of products, but its flagship Canvas LMS is also a platform, as Canvas has enabled both customers and other companies (e.g. Aspiredu, Cidi Labs, Atomic Jolt, eLumen) to build totally new products based on its open APIs.

That said, it’s quite normal to market ed tech products as platforms these days, as “platform” sounds more capable, more mission-critical, or just less stodgy than “system”. (Ironically, Instructure markets itself as “a learning platform” in this more popular, generic sense of the term, too.) But over-using the term waters down and confuses what a technology platform actually is and why it’s important.


Evaluating Ed Tech Efficacy Through a Wider Lens

A young man wearing a virtual reality headset, holding controllers.

Image: “‘STEM Boys Night In’ at NASA Goddard”, CC BY 2.0

I’m an avid reader of Ryan Craig’s Gap Letter, a bi-weekly newsletter that analyzes key challenges in education seasoned with poignant cultural references. His latest newsletter, Research Universities Love Research… Except When It Involves Student Learning, addresses one of the thornier issues in ed tech: efficacy research.

While I broadly agree with Ryan’s observations and conclusions, it’s worth pointing out that efficacy research on ed tech is less black and white than we may want to believe.

By ed tech, I mean any technology tool or digital content that is used to support educational processes — usually teaching and learning specifically. 

By efficacy, I mean how well the ed tech tool does what it promises to do, either explicitly or implicitly. 

Here are some promises that an ed tech may make:

  • Improved student learning outcomes
  • Higher retention and/or graduation rates
  • Better student preparedness for college or career
  • Higher levels of student engagement
  • Lower levels of student personal problems (e.g. financial, behavioral, mental, etc)
  • Broader or deeper learning experiences
  • Increased teacher or learner autonomy or agency

The first 3-4 promises in the above list are probably top of mind when one thinks about researching the efficacy of ed tech. But there are other valuable promises that ed techs might make, for instance:

  • Time-savings, for students, teachers, or administrators
  • Lower costs, for students or institutions
  • Greater flexibility in instructional practices or content
  • Increased access to education
  • Reduced pain, stress, or extraneous cognitive load, for students, teachers, or administrators

Though these latter benefits relate more to the “business” of education, they are legitimate benefits nonetheless. A technology that addresses the process of education but not the outcomes of education is still educational technology, though its “efficacy” may need to be measured differently. 

The idea that some ed tech is designed primarily to facilitate educational processes takes us toward an important realization: 

The impact of any educational technology is co-determined by how it is used.

One of the lessons of educational technology that I brought to Instructure’s first Research and Education team was that technology does not improve anything by itself; people do. People can leverage technology to improve access to education, or student engagement, or learning outcomes, and some technology is better suited for these purposes than others, but any kind of change begins and ends with people.

In other words, you can lead a horse to water but you can’t make it drink.

This is especially true in higher education, where the proverbial horses (no offense!) may be instructors (who claim academic freedom to drink or not drink) or students (who are adults under no compulsion to succeed) or administrators (who have plenty of other responsibilities to keep them busy).

And while every ed tech product reflects some opinion about how it should be used, some products are purposely open to a wide variety of uses, while others exert their opinion more forcefully.

A continuum between “open” and “opinionated” may be helpful here: an ed tech product can be placed according to how open it is to various uses versus how much it limits variation in use based on the “opinion” of the product designer.

I should probably note that “opinionated” is not meant to be a pejorative here, but I didn’t want to use “bias” either as that tends to steer product designers in a different direction than what I intend. If you have a better term, let me know in the comments.

For example, let’s say we are interested specifically in ed tech products that may influence student outcomes such as learning or engagement – i.e. teaching and learning tools or content.

On the one end of this continuum we’d find products that are wide open to a variety of instructional uses based on the preferences and opinions of the user. Microsoft Office 365 seems like an obvious example, but so would Blackboard Ally, a product created to ensure files are available to learners in a variety of formats to support accessibility.

On the other end of the continuum we’d find products that are created for a single, specific use and reflect strong pedagogical opinions. 7 Mindsets seems like a clear example of a more “opinionated” product, as it provides a complete, out-of-the-box digital curriculum, designed to reflect the company’s beliefs about mindset research and delivered according to their pedagogical position.

Learning Management Systems would fall more toward the “open” side than the “opinionated” side, and teaching tools designed for specific types of learning activities would fall more toward the “opinionated” side.

Here’s a quick, hypothetical example of how a sample of ed tech products might land on such a continuum:

Example continuum that asks how opinionated different teaching and learning ed techs are, with open on the left and opinionated on the right and many ed tech company logos in between.

Someone’s going to get upset with how I placed a particular product on this spectrum, which is fine. This is not meant to be precise, it’s merely meant to illustrate a point:

The efficacy of any ed tech product will be heavily influenced by how it is used. And some ed tech products allow a greater variety of uses than others.

This naturally makes those ed tech products harder to evaluate from an efficacy perspective – independent of a specific implementation within a specific context by specific users with specific instructional theories and practices. 

That doesn’t mean we can’t evaluate the efficacy of these products; it’s just much harder to do, and much more dependent on a variety of other factors – many of which are difficult to control (especially in higher ed).

But compare that to, for example, out-of-the-box digital curriculum aimed at improving reading comprehension amongst K-6 ELL students. Evaluations of this kind of ed tech are going to be much easier to control. Certainly there will be some of the same confounding variables as one might see in more “open” ed tech (e.g. student demographics, ability levels, and teacher support), but researchers wouldn’t have to spend as much time controlling the instructional design or methodology as those will be baked into the product.

This is all to say that we should be careful when we think about ed tech products from the perspective of efficacy; not all ed tech products can be easily evaluated, and many ed tech products are actually designed to allow for a broad range of uses, in this case instructional practices and philosophies.  

When we do judge the efficacy of ed tech products, we must do so in context of the problem they are trying to solve, and not conflate their purpose with other ed tech products with different problems to solve. 

And, frankly, sometimes the efficacy of an ed tech product is obvious without research, even though it could be measured. This is more often true for products that make “business” promises, such as time-savings or access.

Let’s go back to Blackboard Ally as an example: It’s obvious that, in order to learn/achieve, students need to be able to access the learning resources provided for them. But some students (e.g. students who are deaf or hard of hearing) cannot learn from those resources due to the form they were created in (e.g. a video). Therefore, providing those resources in an alternative form that is accessible to all students is beneficial. I don’t need efficacy research to prove the value.

But I will need to know that the product is easy to use and integrates well with other systems, like the LMS.

I think this is partly why some higher ed decision-makers put so much weight on ed tech characteristics that aren’t directly related to educational efficacy, as Ryan Craig points out in Gap Letter:

When a recent survey asked over 300 universities about the most important attributes for making edtech purchasing decisions … , here’s what mattered, in order of priority: 1. Ease of use; 2. Price; 3. Features; 4. Services; 5. Sophistication of AI; 6. Evidence of outcomes

Another factor is surely that those higher ed decision-makers tend to favor more “open” ed tech products, knowing that each instructor or faculty member has academic freedom.

Which leads us to an elephantine question: Should higher ed be more doctrinaire in terms of faculty teaching practices and curriculum? 

Perhaps.

Regardless, the reality is that if we look at adopting ed tech as a means of changing educational outcomes, we must first expect to change educational practices.

Some ed tech companies try to bake that change into their product, essentially requiring any users who adopt their product to change their practices. Change is hard, of course, and no one wants to be forced to do something in a different way than they’ve always done it unless they absolutely have to. I expect that this may be a factor in the startlingly low rates of adoption amongst some ed tech tools. But encouraging users to adapt their practices by providing “golden paths” or other means of encouragement and reward is something I think all ed tech product managers must consider, especially if their product is more opinionated.

Some ed tech companies integrate some kind of change management services or support along with their products throughout the product’s full life-cycle, not just through initial implementation (Ryan mentioned Mainstay; I’ll add Feedback Fruits as another ed tech that seems to do a good job of this). A change management partnership approach such as this certainly requires greater commitment and more resources on both sides. But it also holds the greatest promise for ed tech companies to be able to effect positive change within educational institutions, and thereby secure their value with decision-makers.

This kind of approach should also create natural opportunities for efficacy research. If practiced broadly, it should also result in a wider variety of organizational contexts from which to draw on as examples or models for similar institutions. 

No matter the approach, I believe progress in efficacy research depends on ed techs and educators working together to define a better way forward:

  1. Every ed tech must get real about the actual outcomes that their product is designed to deliver (and it’s not always “improve learning”). They must then decide how to measure that from customer to customer in a way that legitimizes their effort, and isn’t simply an empty gesture for marketing points.
  2. Every ed tech customer should recognize that this effort is impossible without them, and thus part of the “cost” of adopting an ed tech should be to have staff time, data, and perhaps even researchers available to contribute.

4 Mind-Shifts for Startup Founders Who Fear Being Salesy

“Business people shaking hands in agreement”​ – RawPixel on Flickr – CC BY

Sales are, of course, critical for any startup. Especially for early-stage startups, it’s up to the founder(s) to determine product-market fit and map a sales strategy through, what else? Selling.

But founders aren’t always comfortable selling assertively. They may have their own negative perception of sales people, and thus are reluctant to come off as “sales-y”. Founders may also be so personally invested in their product/solution that they may have developed a fixed mindset as a defense mechanism against criticism or flaws, which disinclines them to be in a position where they might face rejection.

Founders of ed tech startups may have additional reasons to have concerns about selling assertively. Oftentimes they are pitching to highly educated individuals who are trained to think critically and analytically. These are people who purposefully embraced the mission of a non-profit organization, and may approach any corporate offering with significant skepticism.

But it is possible to sell assertively and authentically, even for founders who aren’t naturally comfortable doing so. Here are 4 suggestions for startup founders who struggle with the discomfort that selling can stir up:

1. Don’t focus on the customer’s perception of you (i.e. as sales-y), focus on your passion for helping the customer solve a problem. Authenticity owns perception.

2. Don’t focus on your product or company at all (at first), focus on the customer. A legitimate understanding of the customer’s problems in their context should naturally lead to your solution. If it doesn’t, take note* and move on.

3. Don’t be self-conscious about imperfections in your solution: Many weaknesses in your product/solution can only be addressed through the learnings you will glean from customer adoption. (It’s OK for customers to understand this, too.)

4. Don’t be shy about driving a deal to close: The only way you can help your customer is if they choose to formally adopt your solution. The sooner they adopt, the sooner you can make a positive impact on (at least one corner of) the world.

As someone who is a chronic self-doubter and has always been hesitant about coming off as sales-y, I’ve applied these 4 ideas until they have become habits. Over time, habits trump inclinations, and you’ll find yourself increasingly comfortable selling assertively.

Those are just my ideas. What about you?

Are you self-conscious about sales? If so, will these suggestions help?

Or perhaps you were once self-conscious but aren’t any more. If so, what caused you to change?

Maybe you’ve never been self-conscious about sales. What mindset do you hold that makes selling straightforward?

*Repeated rejection should be examined, as it may mean you’re doing a poor job engaging with customers and representing the value, or your product/solution is a poor fit for the target customers’ needs. In either case, a pivot may be in order — in your methodology, your segment strategy, or your product strategy.

Why the Drop in VC Funding Doesn’t Signal a Lack of Faith in Ed Tech

“Online teacher professional learning 2″​ Photo by Allison Shelley for EDUimages – CC BY-NC 4.0

HolonIQ just reported that ed tech VC funding is down dramatically, from $20.8B globally in 2021 to $10.6B in 2022. This doesn’t necessarily mean investors have lost interest in ed tech, however. It’s more likely a natural (and healthy) correction from the unprecedented levels of investment that were spurred by school closures and remote instruction during the pandemic.

In fact, if you look just at the US, ed tech VC funding in 2022 was still up considerably over what a pre-pandemic forecast would have predicted:

A forecast of ed tech VC funding suggests that 2023 investments may be lower than 2022, but not anomalous from the pre-pandemic growth trend.

As HolonIQ writes, “…alongside the biggest boom in venture funding’s history (2020-2021), the party’s over and it’s back to fundamentals and outcomes”.

This sounds right to me.

As I’ve talked with VCs who have ed tech as part of their investment strategy, there is more curiosity than trepidation about the future of ed tech.

For example, they want to know what technology trends will carry forward in education. Are virtual or hybrid classrooms still important? How are institutions adapting to the online trend? And, of course, is ChatGPT/AI mostly hype or will it have long-lasting impact?

They also want to know what declines in enrollments will mean for the overall health of the industry. Where did all those K12 students go? What are higher ed institutions doing to address both the pandemic-related enrollment declines and the anticipated demographic cliff? How might ed tech help districts or institutions expand or grow enrollments, or economize/optimize their operations?

They also want to see which ed techs have come out of the pandemic era stronger and more resilient. Who was able to scale during the pandemic boom? And I don’t just mean grow revenue. What companies were able to deliver on their promises to customers as new implementations and user adoption boomed? And what lessons have ed techs learned about running their business? Do post-pandemic projections overestimate new customer growth? Are ed techs seeing significant customer attrition as institutions have re-opened campuses?

These are all important questions for ed tech companies to answer for themselves and incorporate into their near- and long-term strategy. And though I’ve heard anecdotally that ed techs may be having a harder time getting VC attention, my impression is that there is still money to be invested, and still strong interest in this space. Conversations, therefore, may be less exuberant and more focused on the fundamentals that have (or should have) always mattered: e.g. realistic assessments of the market opportunity, proven product-market fit, strategic competitive differentiation, repeatable GTM plans, ability to scale, plans for profitability, etc. As Crunchbase wrote, “Lower investment [in ed tech] looks more like prudence than pessimism”.

Indeed, now may be the beginning of a golden era for ed tech, where advances in technology are meeting a sea-change in attitudes toward online and blended learning. Where the demands and expectations of new audiences of learners in the modern world are creating constructive pressure on traditional educational models. I can’t think of a better time in my 20+ years in education for bright ed tech founders to create products and services that deliver significant human and social impact while building a sustainable — and profitable — business.

These are just my impressions from informal conversations with VCs and ed tech founders. You can add to the story:

If you’re a VC invested in ed tech, how does this match what you’re thinking? What should ed techs be aware of as they seek funding in 2023?

And if you’re an ed tech founder, how are you thinking about the next couple years of your business and your reliance on VC funding?

Planning to Prepare is a Key to Personal Growth

Inc. included a small piece of advice that I penned in their article, How to Push Yourself to the Next Level:

7. Plan to prepare.

Too often we miss opportunities to be our best and make the most of our time with others because we didn’t find time to prepare. Granted, most leaders are pretty good at winging it–directing meetings on the fly or extemporaneously presenting ideas. But we can be much more effective–and use everyone’s time more efficiently–when we deliberately set aside time to prepare our next meeting, presentation, or project. It’s not easy to find extra time in the day, so rather than hoping you’ll find the time you need when you need it, block the time you want for prep on your calendar. Schedule 15 minutes before meetings to review objectives. Block an hour immediately before a presentation to clear your mind and rehearse. Carve out as many hours as you need to create and practice the pitch.

Read the entire Inc. article here.

Of course, being an expert means that you have practiced and prepared, formally or informally, over the years, and in a variety of situations. That’s why you’re an expert. But if we’re looking to go beyond our current level of performance, if we’re looking to level-up our expertise, we need both to fairly self-evaluate our performances and to plan to improve the next time.