Tag: AI

  • FAQ: 2025 FIPSE Grants for AI in Higher Ed

    The US Department of Education has just published a call for proposals for FIPSE (Fund for the Improvement of Postsecondary Education) grants that includes one area of national need centered on the advancement, understanding, and implementation of AI in postsecondary education.

    Within this area of national need, $25m in funding (~$1-4m per grant) is allocated to each of these Absolute Priorities (APs):

    • Absolute Priority 1 – Advancing the Understanding of Artificial Intelligence in Postsecondary Education. This AP will fund projects that help colleges and universities better understand and use AI to improve teaching, learning, and student success.
    • Absolute Priority 2 – Ensuring Future Educators and Students Have Foundational Exposure to AI and Computer Science. This AP will fund projects that train future and current teachers—especially those in teacher-prep programs—to effectively teach about and with AI.

    Given the extremely short window for proposals (Dec 3, 2025 deadline!), there has been a lot of buzz, frustration, and questions about this funding opportunity.

    The most common question I’ve heard from institutions is, “Can funds be used to license AI software?”

    The short answer is yes — if AI software is a critical part of a project (see project award criteria below).

    The most common question I’ve heard from ed techs is, “Can I help institutions put together a proposal?”

    I’m not an attorney, but assuming the institution and the proposal adhere to strict federal regulations (especially around competition), the answer is perhaps. At the very least, ed techs should be able to advise customers based on their domain expertise.

    There are many more questions addressed in the FAQ below.

    NOTE: This FAQ represents my interpretation of the FIPSE grant. Always rely on the actual language and regulations put out by the federal government:

    FAQ

    The 2025 FIPSE competition identifies four areas of national need:

    1) advancing the understanding of and use of Artificial Intelligence (AI) technology in postsecondary education;

    2) promoting civil discourse on college and university campuses;

    3) promoting accreditation reform; and

    4) supporting capacity-building for high-quality short-term programs.

    $50 million split evenly across the 2 Absolute Priorities (full text of these below).

    “[I]nstitutions of higher education, or consortia thereof [including SEAs & LEAs], and such other public agencies and nonprofit organizations”.

    Sort of: an HEI can be the lead on only one grant per area of national need (i.e., either AP 1 or AP 2), but it can be a partner on other applications.

    Up to 48 months.

    The estimated range of awards for AP 1 & 2 = $1m – $4m total. $4m is the max.

    No.

    Dec 3, 2025, at 11:59:59 p.m. Eastern Time.

    35 pages or fewer.

    Yes, as part of a project that meets award criteria.

    Adoption of AI technology is implicit in the grant, and nothing in the grant says software cannot be included in budgets.

    Further, this July 22, 2025 Dear Colleague letter, while preceding this grant announcement, clearly states that federal discretionary grant funds can be used for AI technology.

    That said, HEIs must evaluate and select vendors fairly, as described in federal regulations at 2 CFR § 200.319(c)(6) (Competition). This suggests that a grant proposal should not name a single application by brand, but can either (1) budget for AI software generally during the proposal stage and then fairly evaluate and select a vendor, or (2) fairly evaluate and select a vendor ahead of the proposal stage.

    Yes. “The use of AI in the development of grant application materials is allowable. Applicants submitting a grant application must certify on the standard application form to the ‘true, complete, and accurate’ nature of all the contents of their grant application, regardless of whether it is generated by AI.”

    Applications must be submitted online using a Grants.gov shared Workspace.

    See the Common Instructions & Info for formatting instructions and necessary institutional information.

    Any narrative sections and all other attachments should be in PDF format (no password protection).

    While not full proposals, recent postsecondary abstracts for winning projects on other topics can also serve as examples:

    The following (summarized) criteria are worth a total of 100 points: 

    1. Significance (30 pts)
      1. Innovativeness (15 pts)
      2. Outcomes (15 pts)
    2. Project design (45 pts)
      1. Integration (15 pts)
      2. Replicability (15 pts)
      3. Iteration / Continuous improvement (15 pts)
    3. Management plan (10 pts)
    4. Evaluation / evidence (15 pts)
      1. Evaluation methods (5 pts)
      2. Feedback (5 pts)
      3. Replication guidance (5 pts)
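    As a quick arithmetic check, the top-level weights above can be tallied. This is a minimal sketch of the rubric as summarized in this post; the labels are my shorthand, not necessarily the notice's exact headings.

```python
# Reviewer criteria and point values, as summarized above
criteria = {
    "Significance": 30,           # Innovativeness (15) + Outcomes (15)
    "Project design": 45,         # Integration (15) + Replicability (15) + Iteration (15)
    "Management plan": 10,
    "Evaluation / evidence": 15,  # Methods (5) + Feedback (5) + Replication guidance (5)
}

total = sum(criteria.values())
print(total)  # 100
```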

    See pp. 30–32 for details.

    Additional evaluation considerations include…

    • Past performance on previous awards;
    • Risk assessment (especially for grants over $250k);
    • Willingness to open license & disseminate any newly developed work (not pre-existing works).

    Projects must “propose project-specific performance measures and performance targets” versus a baseline measurement.

    Reports are required annually and at the end of the project, including performance and financial data.

    For specific requirements, see www.ed.gov/fund/grant/apply/appforms/appforms.html

    Absolute Priority 1 – Advancing the Understanding of Artificial Intelligence in Postsecondary Education.

    Priority: Projects or proposals to improve academic instruction and student learning, including efforts designed to assess the learning gains made by postsecondary students (section 744(c)(2) of the HEA), through one or more of the following:

    (a) Supporting the integration of AI literacy skills and concepts into teaching and learning practices to improve educational outcomes for students, including instruction about how to use AI responsibly, and how to detect AI-generated disinformation or misinformation online; and

    (b) Partnering with State Educational Agencies (SEAs) or Local Educational Agencies (LEAs) to do one or more of the following:

    (i) use AI technology to provide high-quality instructional resources, high-impact tutoring, and college and career pathway exploration, advising, and navigation to improve educational outcomes.

    (ii) integrate AI-driven tools into classrooms to personalize learning, improve student outcomes, and support differentiated instruction. This integration may include, but is not limited to, adaptive learning technologies, virtual teaching assistants, tutoring, and data analytics tools to support student progress.

    (iii) utilize AI in the classroom and/or for school operation efficiency, including but not limited to: improving teacher training and evaluation, reducing time-intensive administrative tasks, or improving instruction or services for students with disabilities.

    Absolute Priority 2: Ensuring Future Educators and Students Have Foundational Exposure to AI and Computer Science.

    Priority: Projects or proposals to leverage AI to improve teacher preparation by doing one or more of the following:

    (a) Deliver AI and computer science credentials in rural communities;

    (b) Embed AI and computer science into an institution of higher education’s general preservice or in-service teacher professional development or teacher preparation programs;

    (c) Provide additional support for teacher preparation programs that are preparing future computer science educators in K-12 education; 

    (d) Expand offerings of AI and computer science courses as part of an institution of higher education’s general education and/or core curriculum;

    (e) Provide resources and support for the use of AI in teacher preparation programs;

    (f) Partner with SEAs and/or LEAs to provide resources to K-12 students in foundational computer science and AI literacy, including through professional development for educators; and

    (g) Partner with SEAs and/or LEAs to encourage the provision of dual-enrollment course opportunities so that students can earn postsecondary credentials and industry-recognized credentials in AI coursework concurrent with their high school education.

    It is implied that ED may award money later: “For FY 2025 and any subsequent year in which we make awards from the list of unfunded applications from this competition…”

    What’s Your Take?

    If you’re a professional grant reader with experience in FIPSE grants, or an institution or AI company working on a proposal, we’d love to hear your interpretation or further questions in the comments below.

  • Ed Techs @ the 2024 AIR Show Skew Young

    Surprise! Ed techs sponsoring the 2024 AIR Show: AI Revolution tend to be very young companies.

    Of the 80+ ed techs that sponsored ASU+GSV’s inaugural AI event…

    • Average year founded = 2017
    • Median year founded = 2019

    Here’s a histogram:

    [Histogram showing the count of ed techs sponsoring the AIR Show by age]

    In my last post, I noted that ed tech sponsors at the AIR Show tilted heavily toward Teaching & Learning Tools. Here’s a chart showing the average age of ed techs by product category:

    [Bar chart showing all ed tech product categories sponsoring at the 2024 AIR Show, sorted by average age and indicating the total number of companies]

    This isn’t a large enough sample of ed techs to make any definitive statements about the state of AI in ed tech. However, this chart does reiterate that not only are a few product categories over-represented, they skew young as well:

    • AI Student Assistance: 10 ed techs; avg. age = 2.4 years
    • Learning Design & Dev: 9 ed techs; avg. age = 3.6 years
    • AI Teacher Assistance: 8 ed techs; avg. age = 2.6 years
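    For the curious, the founding-year stats above are straightforward to compute. Here is a minimal sketch with made-up sponsor rows, since the actual sponsor list isn't reproduced in this post:

```python
from statistics import mean, median
from collections import defaultdict

# Hypothetical (company category, founding year) rows -- NOT the actual AIR Show sponsor list
sponsors = [
    ("AI Student Assistance", 2021),
    ("AI Student Assistance", 2022),
    ("Learning Design & Dev", 2019),
    ("Learning Design & Dev", 2021),
    ("AI Teacher Assistance", 2022),
]

SHOW_YEAR = 2024  # event year, used to convert founding year into company age

years = [year for _, year in sponsors]
print("average year founded:", mean(years))
print("median year founded:", median(years))

# Average company age per category, as in the bar chart above
ages_by_category = defaultdict(list)
for category, year in sponsors:
    ages_by_category[category].append(SHOW_YEAR - year)
avg_age = {category: mean(ages) for category, ages in ages_by_category.items()}
print(avg_age)
```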

    Foretelling Some Hot Competition

    Most of these ed techs are focused on K-12, though some address HE needs as well. And while there is surely some differentiation amongst these ed tech products and companies, I think this view is sufficient to suggest that competition in the more obvious AI-based ed tech product categories is really going to heat up in the next couple of years.

    Those AI-based ed tech startups who survive will likely be those who prove product market fit the fastest. They need to solve real, painful problems for students, teachers, or administrators in a way that proves their ROI, and at a price that fits with the realities of of education budgets.

    Easier said than done.

    Looking at these pretty noisy, emergent categories, we can expect that the AI ed techs who survive will succeed in differentiating their product and message, and in optimizing their GTM to get wins quickly — and quickly snowball into ARR. Succeeding here won’t immediately knock competitors out of the running, however. Only when AI ed techs are able to prove a positive impact on the problems they help customers solve, and thereby renew contracts, should they actually start to feel confident in the long-term success of their business.

  • Ed Techs @ the 2024 AIR Show Tilt Toward Teaching & Learning

    Earlier this week, ASU+GSV hosted the inaugural AIR Show: AI Revolution just ahead of its own annual summit. Over 80 ed tech companies sponsored the AIR Show, and most showcased their new AI-based products or features during the event.

    Touring the exhibit hall, I couldn’t help but see some patterns, so when I got back from ASU+GSV I set about categorizing all of the ed tech sponsors by product type and primary market served:

    A few notes about this analysis:

    • Box size represents number of ed techs
    • Color represents target education market
    • 1 category per ed tech (though some do span multiple)
    • Relative usage of AI is not represented; some ed techs are totally reliant on AI while others use AI to complement pre-existing products
    • Excludes non-product companies as well as horizontal (not primarily edu) companies

    You can download / share the original here: https://rarebird.tech/wp-content/uploads/2024/04/air-show-2024_rarebird.png
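    The tallies that drive the chart can be sketched with a simple count. The rows below are invented stand-ins, since the full sponsor list isn't reproduced here; in the actual chart, box size is the count per category and color is the target market.

```python
from collections import Counter

# Hypothetical (company, product category, primary market) rows -- NOT the real sponsor list
sponsors = [
    ("TutorCo", "AI Student Assistance", "K-12"),
    ("LessonForge", "Learning Design / Dev", "K-12"),
    ("GradeBot", "AI Teacher Assistance", "Higher Ed"),
    ("StudyPal", "AI Student Assistance", "K-12"),
]

# One category per ed tech, so a straight count gives the box sizes
box_sizes = Counter(category for _, category, _ in sponsors)
market_mix = Counter(market for _, _, market in sponsors)

print(box_sizes.most_common())
print(market_mix.most_common())
```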

    Over- and Under-Represented Ed Tech Categories

    The vast majority (81.4%) of ed tech sponsors at the 2024 AIR Show are in the Teaching and Learning cluster — tools used primarily by teachers or students to facilitate learning.

    Within that cluster, 25% are what I call AI Assistance tools for teachers or students. These are primarily based on generative AI and aim to help teachers and students in a variety of ways. For example…

    • Providing information
    • Synthesizing content
    • Assisting in planning / scheduling
    • Generating assessment prompts, items, rubrics, etc
    • Assessing student work
    • AI “Tutoring” or “coaching”

    …and so on. One example is Magic School AI, which appears to be a toolbox of AI assistance prompts and interfaces for K-12 teachers.

    Note that the AI Teaching Assistance category does not include AI tools specifically for learning content design or development. Tools that were directly focused on helping teachers plan lessons, design modules, or generate content felt quite distinct and appear as the Learning Design / Dev category.

    Speaking of which, Learning Design / Dev was one of the largest categories, with nine ed techs represented at the AIR Show (>10% of the overall exhibitors). On the one hand, this should be no surprise, as generating content and organizing or synthesizing information is what the current wave of AI seems best at.

    On the other hand, this is just one example of how noisy ed tech AI is right now, and how competitive it will be for ed techs — at least in the near-term.

    Because the AIR Show’s sponsors leaned heavily toward K-12 markets, it’s no surprise that there were a lot of domain-specific tools for K-12, like literacy, STEM, and languages. Ed techs in these categories are primarily content-based, but because they often feature specialized products or companies, I break them out from the more general Digital Content & Curriculum category.

    I was surprised that I saw only 2 ed techs in the Originality Checking (aka Academic Integrity) category: Turnitin and GPTZero. I get that policing students sucks, but figuring out academic integrity in the age of AI is critical for education (and society writ large). No matter what your personal opinions of these products are, their proposed solutions need to be part of the conversation.

    Who Else Wasn’t There?

    If you look at the list of AIR Show’s sponsors, there are many other notable absences from the roster. I know some of those ed techs did participate at the larger ASU+GSV Summit later in the week. But why weren’t they also at AIR Show?

    Perhaps it was the added cost. Perhaps they had nothing (yet) to contribute to the AI conversation. Or perhaps they concluded that the business ROI for them would be low (read: lead gen).

    Expanded / Alternate Product Categories

    Most of the product categories here are ones that you may already be familiar with, and many have proven their value in education even before the rise of generative AI. However, coming out of ASU+GSV I have altered some of the product categories that I use in Rarebird’s consulting services and market maps:

    • AI Teaching Assistance (expanded from “AI Grading”)
    • AI Student Assistance (expanded from “AI Tutor”)
    • Answers / Solutions (broken out from “Study Support”)

    This last category includes products that are primarily focused on providing learners with immediate answers or solutions to questions. It is actually not entirely new. For example, it could include long-running online platforms like Chegg or CourseHero. Prior to the AIR Show I would have classified those as simply “Study Support”.

    But as I looked at their products and their online marketing, I think it’s time to call a spade a spade.

    Both of the products in this category aren’t shy about proclaiming their purpose: They derive the answers so you don’t have to.

    And, as you can see in the screenshot above, this is clearly meant to be answers to assigned class work, including live answers during an online quiz or test.

    Both products pay some lip service to helping students “learn” or working with institutions, but I don’t take that seriously, since neither withholds answers until students work through a learning interaction.

    Wrap-Up

    I’ll admit it: The inaugural ASU+GSV AIR Show was fun.

    I remain cautiously optimistic about the impact generative AI can have on education, though I do have concerns about how AI seems to be sucking the air out of the room in terms of both educators’ attention and investors’ interest. But AIR proved to be a good way to do a pulse check on what’s happening amongst AI-driven startups in ed tech.

    Assuming AI doesn’t implode this year, I expect ASU+GSV to do this again in 2025, and I expect to attend.