Sponsors float open-source software ratings system

A university, a start-up and chip giant Intel are pushing a proposal for a standard model for rating open-source software, aiming to give customers a better sense of the maturity of the more than 100,000 projects available today.

The Business Readiness Ratings (BRR) model is the brainchild of Carnegie Mellon University West's Center for Open Source Investigation (COSI) and is being co-sponsored by open-source testing and certification start-up SpikeSource and by Intel.

"The model allows users and developers to get a feeling for the appropriateness of open-source software for their environment," vice-president of product marketing at SpikeSource, Joaquin Ruiz, said.

One way of thinking about the BRR model was as a kind of tailored Netflix service, he said. Like the online video ordering service, users and developers would rate the different open-source projects.

The model should save organisations a good deal of the time they would otherwise spend on in-house assessments of the wealth of open-source projects around, Ruiz said. For instance, if a company was looking for an open-source wiki-style application, there were currently seven available, while he estimated there were 135 open-source general content management tools on the market.

For the next three months, COSI, SpikeSource and Intel were inviting comment on the BRR model from users and developers, Ruiz said.

Using those comments, the model would be refined, and the organisations hope to have it in production by the end of the year, he said. The model would need to be adaptable to reflect different usage assessments, with the requirements of a university, say, being quite distinct from those of a large corporation, Ruiz said.

COSI, SpikeSource and Intel have defined 12 categories for assessing open-source projects, including how well the software meets user needs, its usability, scalability, performance and support. Each category in turn consists of a group of related metrics. For instance, under the 'quality' category, metrics will include user estimations of the quality of the software's design, code and testing, and of how complete and error-free each of these is.
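
As a rough sketch of how per-metric user ratings might roll up into a single category score, consider the Python fragment below; the simple averaging and the function name are illustrative assumptions, since the exact formulas are defined in the sponsors' white paper.

    # Hypothetical sketch: roll a category's metric ratings (on the
    # article's 1-5 scale) up into one category score. The plain
    # average is an assumption; the BRR white paper defines the
    # actual formulas.
    def category_score(metric_ratings):
        if not metric_ratings:
            raise ValueError("a category needs at least one metric rating")
        return sum(metric_ratings) / len(metric_ratings)

    # Example: the 'quality' category, rating design, code and testing.
    print(category_score([4, 3, 5]))  # 4.0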

Users will rate each category for a project on a scale of one (unacceptable) to five (excellent), and the 12 categories will then be weighted by importance. The top seven or fewer categories then form the basis for calculating a project's overall BRR score.
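
In code, that weighting scheme might look something like the sketch below; the one-to-five scale and the cap of seven categories come from the description above, while the weight normalisation and the function names are assumptions for illustration.

    # Hypothetical sketch of the overall BRR calculation: keep the
    # top seven (or fewer) categories by weight, then combine their
    # 1-5 scores into a weighted average. Normalising the weights to
    # sum to one is an assumption, not a documented detail.
    def brr_score(category_scores, weights, max_categories=7):
        top = sorted(weights, key=weights.get, reverse=True)[:max_categories]
        total_weight = sum(weights[c] for c in top)
        return sum(category_scores[c] * weights[c] for c in top) / total_weight

    scores = {"usability": 4, "quality": 4.0, "scalability": 3,
              "performance": 5, "support": 2}
    weights = {"usability": 0.3, "quality": 0.25, "scalability": 0.2,
               "performance": 0.15, "support": 0.1}
    print(brr_score(scores, weights))  # 3.75 on the 1-5 scale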

On the BRR website, the model's sponsors are providing a white paper and discussion forums together with samples, standard templates and worksheets of the model. In the white paper, the sponsors state the aim of the model is to offer "a vendor-neutral federated clearinghouse of quantifiable data on open-source software packages to help drive their adoption and development".
