2024 Global RIghts Project FAQs
What are some limits of the grades and CIRIGHTS data?
Grades are useful for identifying areas where states excel and areas where they should put in more effort (or where the international community should pressure them) to improve human rights within their country. Grades are imperfect, however, and anyone looking at them should understand some of the limitations of our grading system. We are examining how states treat people within their country rather than how they behave outside of it. For most of the rights we measure, we evaluate how states treat their citizens; for a few rights (such as child labor and forced labor), we consider how they treat both citizens and non-citizens in their territory. We are also evaluating practices rather than laws. In other words, we are interested in whether individuals actually enjoy the rights they are entitled to under international law rather than in the strength of domestic laws that claim to protect those rights. We are not simply looking at whether violations occur within a country but also at how governments respond to them. If there is state violence against citizens, a country’s human rights grade decreases. However, if the state investigates and holds perpetrators accountable, the score does not decrease, because the state has redressed the violations.
Our scores also represent the floor in a country rather than average respect. If most of the country enjoys high respect for rights but violations are concentrated in a small area, then our scores represent what rights look like in that area. Finally, these grades are best interpreted as how the international community evaluates human rights respect rather than as actual human rights respect. Many violations are missed by international human rights reports, and in-depth country studies often uncover a host of violations the international community is not paying attention to.
This grade generally represents what human rights look like in a country for its most vulnerable citizens. It is the floor rather than the average. That said, it is important to note that it is not a perfect measure of overall human rights. Although CIRIGHTS is the most comprehensive human rights data set available, we are still missing some rights (such as health, education, water, housing, and food). We also hold all governments to the same standards despite some countries facing significant barriers to human rights.
In many ways, our grading system suffers from the same limitations as traditional grades. Our measure of human rights is still useful for investigating human rights around the world. However, trusting a single source of information is likely to lead to mistakes, and we encourage readers to explore other measurement projects and case studies of the countries they are interested in.
How do you handle missing scores?
In the previous year’s report, we had to impute entire years for a few rights. However, thanks to the generous support of the university, several donors, and highly motivated student researchers, we have been able to catch up. In this report, we are not missing an entire year for any of the variables covered here. For the 21st century (the scope of this report), we have 4,548 country-years for which we have scores for every right. Each of these is a combination of a country (e.g., Finland) and a year (e.g., 2001), so our scores for Finland in 2001 represent a single country-year. The dataset used to produce this report has a total of 109,152 scores, of which we are missing 1,704, or roughly 1.5% of all scores. This is a very small number and one we work every year to reduce. For those cases, we assume that a country’s score for that right is the same as in the previous year.
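For readers who work with the data directly, the carry-forward rule described above can be reproduced in a few lines. The sketch below is a minimal illustration, assuming a long-format table with hypothetical column names (country, year, right, score); it is not the project’s own processing code.

```python
import pandas as pd

# Hypothetical long-format data: one row per country-year-right.
# Column names (country, year, right, score) are illustrative, not
# the official CIRIGHTS variable names.
df = pd.DataFrame({
    "country": ["Finland", "Finland", "Finland"],
    "year":    [2000, 2001, 2002],
    "right":   ["torture", "torture", "torture"],
    "score":   [2.0, None, 1.0],   # 2001 is missing in this toy example
})

# Carry-forward imputation: within each country-right series, a missing
# year takes the most recent prior year's score.
df = df.sort_values(["country", "right", "year"])
df["score_imputed"] = df.groupby(["country", "right"])["score"].ffill()

# Share of scores that are missing before imputation
# (roughly 1.5% in the full dataset, per the report).
missing_share = df["score"].isna().mean()
print(df)
print(f"Missing before imputation: {missing_share:.1%}")
```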
Is the data in this report the same as the publicly available data?
The data in this report are largely the same as the publicly available data. However, the data we release have some missing scores for some of the rights. When calculating an overall score for a country, any missing score would lead to that country not receiving a grade. As a result, we chose to impute missing scores. We assume that if we are missing a score, the most likely value is the same as the previous year’s. This is based on a strong finding in the human rights literature that rights are “sticky” and tend to change little from year to year. This assumption has the advantage of allowing us to talk about every country and every right without introducing bias or dropping cases. The downside to this approach is that some of the scores we impute are likely incorrect. This affects a small portion of cases but might make the difference between a country moving up or down a few points.
We caution against making a similar assumption when using the data for a purpose other than the one it serves here: informing the public about human rights around the world. If we were conducting research or making policy recommendations, we would likely need to be more careful and attach some uncertainty to our statements. However, we think it is better to begin with a simple discussion of what the data tell us, and we expect the findings would be quite similar to those of more advanced analyses.
How is the data created?
We train undergraduate and graduate students in content analysis, a social science methodology for converting text into numerical data. They use a scoring guideline (available online) that has rules for what counts as a violation and where to find the texts used to score a country. At least two students score each country separately, taking notes that can be reviewed later, and then compare scores. If the scores match, that score is added to the dataset alongside a set of notes explaining the decision. Where the scores differ, the students discuss the case and try to settle on a single score. Usually, disagreements occur because one person missed a sentence or interpreted a word differently from the other. If they cannot reconcile their scores, one of the principal investigators steps in to decide on a final score by reviewing the case, the notes taken by each scorer, and the source material. Having multiple people score each country helps reduce errors. All of the rights in the dataset have a high degree of inter-coder reliability, meaning there are few disagreements that need to be resolved. We also check for odd patterns in the data and spot-check scores to identify as many errors as possible.
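As a rough illustration of the dual-coding check described above, the sketch below computes simple percent agreement between two coders and flags the cases that would go to discussion. The country names and scores are hypothetical, and the project’s actual reliability checks may use additional statistics beyond raw agreement.

```python
# Illustrative dual-coding comparison for one right; the countries and
# scores below are made up, not real CIRIGHTS data.
coder_a = {"Finland": 2, "Ghana": 1, "Brazil": 0, "Japan": 2}
coder_b = {"Finland": 2, "Ghana": 2, "Brazil": 0, "Japan": 2}

# Matched scores enter the dataset; mismatches are discussed by the
# coders and, if needed, resolved by a principal investigator.
agreements = [c for c in coder_a if coder_a[c] == coder_b[c]]
disagreements = [c for c in coder_a if coder_a[c] != coder_b[c]]

agreement_rate = len(agreements) / len(coder_a)
print(f"Percent agreement: {agreement_rate:.0%}")
print("Cases to discuss:", disagreements)
```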
The CIRIGHTS project is committed to human rights education. Our methodology is aimed at producing easy-to-understand scores that are transparent, replicable, and reliable. This means anyone should be able to download our scoring guide and the human rights reports we use and replicate our findings.
What sources are used to create the data?
One of the reports the CIRIGHTS project uses to quantify nearly all rights currently in the dataset is the U.S. Department of State’s annual Country Reports on Human Rights Practices. Depending on the right, researchers may also use the Amnesty International Annual Report, the Human Rights Watch Annual Report, the USSD International Religious Freedom Report, or the U.S. Department of State’s Trafficking in Persons Report. We opt to limit the sources of textual information for each variable rather than adding additional sources with different country coverage and additional biases we may not be able to account for. By doing this, we have a good sense of where our scores may be more biased. With multiple sources of information, this becomes much harder, as very few sources cover every country in the world.
How do I read the maps?
The maps show the level of respect for human rights in the 21st century and in 2022. For all rights, higher values indicate greater respect. The legend shows which colors correspond to “No respect” and “Full respect.” The histogram below the legend shows how the values for this right are distributed around the globe; without it, the distribution can be hard to discern from the map alone, since some countries are much smaller than others.
How is the GRIP grade created?
The Global RIghts Project (GRIP) scores government respect for 24 human rights for all countries of the world. GRIP assigns each country a grade from 0 to 100 that evaluates government respect for human rights. A score of 0 indicates widespread violations of all rights, and a score of 100 indicates no evidence of human rights violations in the country.
We are evaluating how well states protect internationally recognized human rights in practice. Like any grade, it is an imperfect measure. A score of 100 does not mean that no human rights were violated; it means we have no evidence of violations given the sources used, the rights we measure, and the scoring rules. For any country on our list, we will miss some rights violations, either because of a lack of international attention or because leaders successfully “cheat” on their human rights obligations in ways that evade our ability to detect violations.
To create the index, we add together the scores for all 24 rights. Then we rescale the sum so that the index ranges from 0 to 100. A score of 100 indicates full respect for all human rights, while a score of 0 indicates that all rights in a country are violated. Since human rights are interdependent, interrelated, and indivisible, we treat all rights equally (in other words, no right is worth more than any other for the purpose of our analysis). While different scholars, policymakers, and citizens may believe some rights to be more important than others, we agree with the United Nations framework that treats all rights as equal.
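To make the arithmetic concrete, the sketch below shows one way such an additive index can be rescaled to a 0–100 grade. The function name and the 0–2 per-right scale are assumptions for illustration, not the project’s official implementation; the key point is that every right is weighted equally and the unweighted sum is rescaled.

```python
def grip_grade(scores, max_per_right=2):
    """Rescale one country-year's summed rights scores to a 0-100 grade.

    `scores` is a list of per-right scores; the 0-2 per-right maximum
    used here is an illustrative assumption.
    """
    max_total = len(scores) * max_per_right
    return 100 * sum(scores) / max_total

# Example with 24 equally weighted rights:
print(grip_grade([2] * 24))              # full respect  -> 100.0
print(grip_grade([0] * 24))              # no respect    -> 0.0
print(grip_grade([2] * 12 + [0] * 12))   # mixed record  -> 50.0
```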
We created a simple additive index because it is easy to understand and generates rankings that correspond to journalistic and scholarly reports of human rights practices. We hope this index will contribute to a discussion among journalists, policymakers, citizens, and scholars of human rights that extends what we consider human rights beyond state violence alone. All of the data we use to create this index are publicly available, so anyone can create their own index (e.g., add and subtract rights, add additional rights from other measurement projects, weight some rights as more important based on their own judgment, etc.).