Appendix A: Research Methodology

Tech Impact has developed a rigorous methodology governing the research for the reports in our Consumers Guide series. The following is an overview of the process.

Eligibility Criteria 

In collaboration with a working group of experts drawn from this report's funding partners, we began by defining a set of gating criteria to determine which software packages were eligible for inclusion. As with previous editions, we decided to focus on grants management software for private foundations. We also included Foundation Cloud, which works primarily with community foundations but has a grants management product that met our gating criteria.

Our definition of a grants management system (GMS) includes only systems that can manage the full grant cycle and that are offered as free-standing products rather than as part of a foundation “back office” management service. To be included in the report, a system must be able to do the following without customization:

  • Track associations between grants and particular programs     
  • Track organizations separately from individual grants     
  • Track which requirements grantees have and have not met     
  • Define a payment schedule for each grant and see upcoming scheduled payments  
  • Run reports on a variety of system fields

 

In addition, systems that qualify for the report must include the following features:   

  • Ability to track entire grant lifecycle, from application through review to award and outcomes reporting    
  • Customizable online applications and forms    
  • Online application review/scoring    
  • Grantee relationship tracking/management (built-in or via integration)    
  • Document/email generation (built-in or via integration)    
  • Payment tracking    
  • Flexible and customizable reporting    
  • Web-/Cloud-based (or easy remote access)    
  • Responsive design (accessible across a variety of devices)    
  • Multi-factor authentication 

To be included, vendors must have a minimum of 30 clients, at least 10 of which must be foundations.

 

Product Selection 

We started with the list of systems evaluated in our 2020 edition. The market has experienced several changes since that edition's publication, with some vendors leaving the grants management space and others affected by mergers and acquisitions. We asked our working group of partners to help us identify additional systems for inclusion, and several vendors also asked to be included. We emailed an eligibility questionnaire to all vendors on this expanded list to determine whether their software fit our GMS definition.

Six systems included in the 2020 edition do not appear in this edition:   

  • CC Grant Tracker rebranded as Symplectic Grant Tracker and now focuses primarily on research grantmakers.  
  • Bonterra acquired CyberGrants; we made multiple attempts to contact the vendor about inclusion but did not receive a response.  
  • Salesforce will sunset its Foundation Connect product in 2025/2026; we included Salesforce’s new grants management product in its place.  
  • SurveyMonkey did not respond to multiple attempts to include the SurveyMonkey Apply product.  
  • WebGrants did not meet our gating criteria related to the number of foundation clients.  
  • WizeHive released a new product that will eventually replace Zengine, so we included the new product in its place.

 

Four new systems met the gating criteria and were added to the report:   

  • Impactfully by Foundation Source  
  • NPact Foundation Cloud (successor to the Granted GE Spectrum platform included in the 2016 guide)  
  • Submit.com  
  • Temelio  

The final result was a list of 14 systems, four of which make their debut in this year’s Consumers Guide.

 

Evaluation Criteria 

Based on feedback from vendors and readers and on discussions with subject matter experts, we continued to refine how we evaluate software for this report. This year’s evaluation criteria format is similar to the well-received approach we used in our 2022 Landscape of Integrated Software for Community Foundations report. Rather than rating each platform’s functionality on a scale of “basic” to “advanced,” we now report on how each system delivers the functionality in our rubric.

We also removed basic functionality that all systems handle similarly—for example, creating custom fields and assigning tasks to users—in favor of focusing on areas where the systems in the report are likely to differ.

We divided the rubric into eight sections, with specific functionality listed in each section (see Appendix B for the full rubric). Functionality in seven of the sections is mapped to one of the following statuses:   

  • This is core functionality available to all clients  
  • This is core functionality available for premium/enterprise subscribers  
  • This can be done using functionality designed for another purpose  
  • This can be done with an add-on or module at an extra cost  
  • This can be done via a pre-existing integration with a third-party solution [Please identify the 3rd party system(s)]  
  • This requires custom development  
  • This is not a feature we provide  

 

In the eighth section, Training and Support, functionality is mapped to the following three statuses:   

  • This is included at no extra cost  
  • This is available at an extra cost  
  • This is not available  

 

In February 2024, we sent vendors a questionnaire asking them to map their systems’ functionality to the identified statuses. In March and April 2024, we conducted a series of 90-minute product demos with the 14 vendors selected for this report, during which we validated the vendors’ questionnaire responses and updated the statuses as necessary.

Finally, we used the revised questionnaires and demos to write up the system profiles, which the vendors were able to review for accuracy.  

 

Customer Experience Survey 

Tech Impact and our partner working group developed an updated Customer Experience Survey based on the one used in previous reports. Along with our partners Grantbook, PEAK Grantmaking, and TAG, we distributed the survey to our email lists in January and February 2024. After we received our initial results, we asked the vendors to distribute the survey to their clients as well. In total, we received 268 responses.

For each system review, we report the number of survey respondents who said they use that grants management system. We also include a rating on a scale of one to four based on those users’ reported experiences with the training, support, and implementation offered by the system’s vendor, as well as the percentage of respondents who would recommend the system to others.

Note: The sample size for many of these systems was very small, and for some newer systems all responses came from vendor outreach rather than organic submissions. As a result, this survey should not be taken as a rigorously scientific research method. We still believe the results are useful for foundations weighing all factors in a software selection decision. We’ve reprinted the content of the survey in Appendix C.