Staying ahead of the Inspector General – How your Institution can perform its own analytics-driven review

In two articles published in the National Council of University Research Administrators’ (NCURA) magazine (August 2015 and October/November 2015), we described the new audit technique being employed by the National Science Foundation’s (NSF) Office of Inspector General (OIG).  The OIG is utilizing data analytics to review large quantities of information provided by grantees and other public sources to select institutions for further review.  Once selected, an institution is asked to submit detailed transaction records for all NSF-sponsored awards for the past three fiscal years, which are then run through a series of analytical tests.  These tests are designed to identify “red flags” that could represent inappropriate costs charged to an award.  Transactions identified as “red flags” are then selected for further, detailed testing as part of the full audit procedures.
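To make the concept concrete, the sketch below shows what one such analytical test might look like in Python using the pandas library.  The OIG has not published its actual tests, so the file name, column names, and 90-day threshold here are purely illustrative assumptions.

```python
import pandas as pd

# Hypothetical transaction extract; the file and column names are assumptions.
txns = pd.read_csv(
    "nsf_award_transactions.csv",
    parse_dates=["post_date", "award_end_date"],
)

# Illustrative red-flag test: charges posted within 90 days of (or after) the
# award end date, a pattern often associated with spending down remaining funds.
window = pd.Timedelta(days=90)
txns["flag_end_of_award"] = (txns["award_end_date"] - txns["post_date"]) <= window

red_flags = txns[txns["flag_end_of_award"]]
print(f"{len(red_flags)} of {len(txns)} transactions flagged for detailed testing")
```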

Our original articles detailed many of the challenges institutions face when selected for such an audit, including the time and human resources required to manage and respond to OIG requests.  Since these data analytics are here to stay, we wanted to provide further detail on what an institution can do to proactively evaluate its exposure and get ahead of such a review.  Fortunately, internal audit (IA) can often play a key role, working with research leaders at your institution to implement analytical procedures and to assess both the risk of potentially inappropriate costs charged to a portfolio of research awards and the effectiveness of the processes in place to respond to this type of OIG audit.  If your institution’s IA function does not have the time or resources available for this type of work, consider whether to partner with an outside service provider to perform a preemptive analytics review.

To begin, IA must establish the scope of its review.  We suggest performing high-level analytics on your organization’s funding portfolio to inform the selection.  This testing can be performed on all awards from a certain sponsor (e.g., one or two top funding agencies), awards within a certain department or school, or even certain types of costs (such as travel expenditures).  Each option carries pros and cons, so IA should discuss the possibilities with research leadership or other stakeholders to evaluate the risk-benefit tradeoff.  Once the audit focus area is established, IA can then define the parameters of its auditable universe.  Because the OIG’s audits cover three years of expenditures, we recommend selecting a multi-year universe.  IA should also engage counsel in the planning conversations to ensure that appropriate legal consideration is given to the potential risks associated with such a review and to prepare the institution in the event a voluntary disclosure becomes necessary (for these reasons, we tend to focus on awards that are still active, allowing the institution to correct inappropriate charges more readily).
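As a minimal sketch of what defining the auditable universe might look like in practice, the snippet below filters a hypothetical award listing to active NSF awards within a three-year window.  The file name, column names, and filter values are illustrative assumptions, not a prescribed data layout.

```python
import pandas as pd

# Hypothetical award listing; the file and column names are assumptions.
awards = pd.read_csv("sponsored_awards.csv", parse_dates=["start_date", "end_date"])

# Mirror the OIG's three-year look-back with a multi-year window.
cutoff = pd.Timestamp.today() - pd.DateOffset(years=3)

universe = awards[
    (awards["sponsor"] == "NSF")        # scope to one top funding agency
    & (awards["end_date"] >= cutoff)    # awards active at some point in the window
    & (awards["status"] == "active")    # active awards allow charges to be corrected
]
print(f"Auditable universe: {len(universe)} awards")
```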

While the OIG has not shared the exact tests used for its red-flag analysis, many of the items reviewed can be discerned from published audit reports or from materials created and presented by the OIG (many of which can be found online or through resources such as NCURA or the Association of College and University Auditors (ACUA)).  However, IA may have more limited access to information than the OIG does, particularly with regard to project budgets.  A successful project therefore requires more up-front conversation and planning to understand what data is available, in what form, and from whom.  Developing a detailed understanding of the nature and type of information IA can use will help set the appropriate audit steps and save time in the end.
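The sketch below illustrates a few red-flag tests of the kind that can be discerned from published audit reports, such as large round-dollar charges, weekend postings, and descriptions containing keywords associated with unallowable costs.  It extends the hypothetical transaction extract above (and further assumes “amount” and “description” columns); the specific tests, thresholds, and keywords are our assumptions, not the OIG’s actual methodology.

```python
# Additional illustrative red-flag tests on the transaction extract above;
# thresholds and keywords are assumptions, not the OIG's published criteria.
txns["flag_round_dollar"] = (txns["amount"] >= 1000) & (txns["amount"] % 1000 == 0)
txns["flag_weekend"] = txns["post_date"].dt.dayofweek >= 5  # Saturday or Sunday
txns["flag_keyword"] = txns["description"].str.contains(
    r"alcohol|gift|entertainment", case=False, na=False
)

# Prioritize transactions that trip multiple tests for detailed review.
flag_cols = [c for c in txns.columns if c.startswith("flag_")]
txns["flag_count"] = txns[flag_cols].sum(axis=1)
```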

Additionally, though the purpose of the OIG’s audit is to identify cost recovery opportunities for the NSF (in furtherance of its mission of promoting appropriate use of federal dollars), IA can use this process to evaluate the institution’s preparedness to respond to such an audit and to educate key leaders and research stakeholders on what to expect if faced with a government audit.  This can lead to valuable dialogue, help identify potential weaknesses or gaps in the organization’s processes, and better position your institution to mitigate some of the challenges that come with a government audit.

Lastly, once a successful program is developed, your institution can continue to use data analytics as a way to monitor research expenditures and (hopefully) position itself to stay ahead of questions that might be raised by a government auditor.  IA will be able to expand or rotate the universe of expenses because the approach, analytical tests, and data feeds will already exist, building on the learning curve.  Ongoing tests can provide metrics or other information for monitoring purposes to research or university leaders, or can be used to enhance the control and oversight environment on campus (consider pairing them with “spot audits” to verify the appropriateness of charges on a routine basis).  With the increasing access to and availability of information, and the expectations that come with it, data analytics reviews and “audits” of large data selections will likely become more commonplace.  IA can seize this opportunity to be a thought leader across campus and help the institution mitigate some of its risk related to sponsored research compliance.
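For ongoing monitoring, the same tests can be rolled up into simple metrics for research leadership and used to draw spot-audit samples.  The sketch below, again relying on the hypothetical columns assumed earlier, computes a monthly flag rate and selects a random sample of flagged transactions for verification.

```python
# Roll the red-flag results up into a monthly metric for leadership reporting.
monthly = (
    txns.assign(month=txns["post_date"].dt.to_period("M"))
        .groupby("month")["flag_count"]
        .agg(flagged=lambda s: (s > 0).sum(), total="size")
)
monthly["flag_rate"] = monthly["flagged"] / monthly["total"]

# Draw a random spot-audit sample of flagged transactions (up to 25 items).
flagged = txns[txns["flag_count"] > 0]
spot_sample = flagged.sample(n=min(25, len(flagged)), random_state=42)
```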
