Data use is a double-edged sword, hopefully doing much more good than harm, but capable of doing quite a bit of both. For example, the U.S. government's Beacon Community program may want to use data to assess and improve the healthcare of underserved populations. This is an admirable goal. However, what is to stop insurance companies from using this information to deny coverage to residents in those areas, in much the same way that coverage is now denied to many hurricane-prone communities? Measuring and improving educational outcomes was the impetus for school rankings and standardized tests. However, it also led people to game the system to avoid penalties.
In this case, it might similarly lead a community to overstate poor health outcomes in order to receive more funding. There will always be unintended consequences, and they are difficult to predict. But the larger the data sets, and the more sophisticated our ability to analyze them, the greater the chances for abuse and unforeseen negative consequences.
Other problems are raised by the imbalance of power between individuals and organizations. The Fair Credit Reporting Act (FCRA) covers compilations of health history data, but it is up to the patient to request the record and dispute its contents, after the aggregated data have already been used. Technically, an individual can refuse to sign the waiver that grants a company access to this data, but the company can then refuse to do business with them altogether. And this assumes a level of corporate transparency and honesty that cannot be taken for granted, given the amount of corruption brought to light by litigation.