effective altruism: Effective altruism (EA) is the name of a growing social movement and an idea - that of using evidence and reason to find the most effective possible ways of doing good in the world. An effective altruist is someone who identifies with and acts according to the concept of effective altruism.

cost-effectiveness: The cost-effectiveness of a charitable intervention refers to its marginal impact per dollar. For example, each marginal dollar donated to SCI pays for about 1.8 deworming treatments.

impartiality: Impartiality is the valuing of all human lives equally, independent of location, age, gender, etc.

cause-indifference (or cause neutrality): One is cause-indifferent if one compares charities based only on how much good they do in the world, rather than on attachment to any particular cause. That is to say, one does not have a "pet cause."

prioritization: Prioritization is the comparison of causes to decide where effort and money will do the most good. Causes can be compared according to their scope (how much good or bad is at stake) and their tractability (how easy they are to improve).

counterfactual reasoning: Counterfactual reasoning is a method of deciding between actions by comparing each action's expected outcome with what would have happened otherwise. For instance, one might judge an intervention by how it performs relative to a control group that receives no intervention.

leveraging donations: Sometimes, charitable donations can be leveraged to increase their effect. For example, instead of donating $1000 to charity, one might use the $1000 to hold a fundraiser event which results in the donation of more than $1000.
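The arithmetic behind leveraging can be sketched with made-up numbers (the event cost, turnout, and average gift below are hypothetical, chosen only to illustrate the comparison):

```python
# Illustrative (made-up) numbers: a donor has $1000 to deploy.
direct_donation = 1000.00          # option A: give the $1000 straight to charity

event_cost = 1000.00               # option B: spend the $1000 hosting a fundraiser
attendees = 40                     # assumed turnout
avg_gift = 60.00                   # assumed average donation per attendee

raised = attendees * avg_gift      # money the event brings in for the charity
leverage = raised / direct_donation

print(raised, leverage)            # 2400.0 2.4
```

Under these assumptions the fundraiser moves 2.4 times as much money to the charity as the direct gift would have, though in practice one must also account for the counterfactual donations the attendees might have made anyway.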

Philosophy

consequentialism: Consequentialism is the view that moral claims only depend on consequences or states of the world. That is, a consequentialist believes that the extent to which an act is good or bad depends solely on the extent to which the states of the world it causes are good or bad. Most effective altruists are consequentialists. Moral philosopher and effective altruist Thomas Pogge is one notable exception; he subscribes to a deontological system of ethics (one in which people have duties to do or not do certain actions).

utilitarianism: Utilitarianism is a particular consequentialist moral theory, which states that an act is good or bad according to the extent to which it increases happiness and decreases suffering. Different variations of utilitarianism define happiness and suffering in different ways; for instance, preference utilitarianism defines happiness (resp. suffering) as the fulfilment (resp. denial) of one's desires or preferences, whether or not this leads to pleasure. Many EAs subscribe to some form of utilitarianism.

population ethics: Population ethics asks questions about the relative importance of different sentient beings or groups of sentient beings. Its important questions include: What is the moral status of non-human animals? What is the moral status of not-yet-born humans? Is the total number of humans with good experiences morally relevant, or does only their average happiness matter? Population ethics is a source of significant disagreement among effective altruists.

rationalism: Rationalism is the view that reason and empirical evidence, rather than religious belief and emotional responses, should be the basis of one's actions and opinions.

moral realism: Moral realism is the claim that morality exists as more than a human construct, in the same way that most people think of the external world as existing independently of anyone perceiving it. By contrast, moral non-realism is the claim that morality is only an idea that humans construct and talk about.

Actions and causes

earning to give: Earning to give refers to the practice of choosing a career not for its direct impact but for its salary, and then donating a significant portion of this salary to effective charities. Earning to give can be more effective than direct work because money is flexible, because direct work is often replaceable (someone else would likely have done the direct-impact job if you hadn't), and because it allows individuals to specialize in what they are best at. Many effective altruists earn to give.

pledge (GWWC and TLYCS): Many effective altruists sign pledges to donate a significant portion of their incomes to charity. Members of Giving What We Can pledge to give at least 10% of their income to effective charities to relieve the suffering caused by extreme poverty. TLYCS has a similar pledge. A more general pledge is available at http://effectivealtruismhub.com/donations.

x-risk: An existential risk is a danger that is global in scope and terminal in intensity. That is, it threatens to "either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." Examples include severe climate change, nuclear warfare, and unfriendly artificial intelligence.

meta-EA: A meta-EA charity is an organization which contributes indirectly by seeking to build the effective altruism movement or increase its efficiency. Examples include GiveWell, CEA, TLYCS, and MIRI.

Organizations

CEA: The Centre for Effective Altruism (CEA) is a coalition of projects related to EA. Giving What We Can and 80,000 Hours are both part of CEA. CEA's other projects include setting priorities between different global challenges and raising public awareness of EA.

80K: 80,000 Hours (80K) is an organization that offers free, one-on-one career advice to individuals seeking to use their careers to do the most good in the world. 80K also publishes general research on the social impact of careers on its website.

GWWC: Giving What We Can (GWWC) is an international society dedicated to eradicating extreme poverty. GWWC recommends cost-effective charities and encourages individuals to sign its pledge, which represents a commitment to donate a fraction of one's income to effective anti-poverty charities. GWWC also has local chapters that meet in cities in the UK, the USA, and elsewhere.

GiveWell: GiveWell is a non-profit that evaluates charities in order to find outstanding giving opportunities. In particular, GiveWell seeks out charities that provide strong evidence of impact per dollar and room for more funding, and that can demonstrate trustworthiness and transparency. GiveWell recommends just a few charities at a time, and its recommendations inform the donations of many effective altruists.

TLYCS: The Life You Can Save (TLYCS) is a non-profit founded by philosopher Peter Singer. It promotes effective altruism - with a focus on reducing poverty and economic inequality - through public outreach. TLYCS seeks to create local groups of informed givers and a global online community, and encourages individuals to sign its charitable-donation pledge.

AMF: The Against Malaria Foundation (AMF) is a non-profit that funds the distribution of long-lasting insecticidal nets (LLINs) to areas with a high incidence of malaria, mostly in Africa. GiveWell has recommended AMF as a top charity several times. In 2013, GiveWell estimated that it costs AMF about $6.13 to distribute one LLIN and $3,400 to save the life of one child.

SCI: The Schistosomiasis Control Initiative (SCI) is a non-profit that works with local Ministries of Health across sub-Saharan Africa to treat children and at-risk adults for schistosomiasis and other parasitic worms. GiveWell has recommended SCI as a top charity, and in 2013, estimated that it costs $0.80 to deworm one child, with SCI paying about 70% of these costs (see "leveraging donations").

GiveDirectly: GiveDirectly is a non-profit that makes direct cash transfers to poor households in Kenya and Uganda. These cash transfers are unconditional - recipients may spend them as they see fit. GiveWell has recommended GiveDirectly multiple times.

FHI: The Future of Humanity Institute (FHI) is a research center at the University of Oxford that is a leading producer of primary research on existential risk. FHI's main areas of research are global catastrophic risk, applied epistemology, human enhancement, and future technologies.

MIRI: The Machine Intelligence Research Institute (MIRI) is a non-profit whose mission is to "ensure that the creation of smarter-than-human intelligence has a positive impact." MIRI's main activity is to conduct research on a few topics: How can a machine reason coherently about its own behavior? What is a better formalization for decision-making under uncertainty? How can we specify an AI's goals to ensure that it matches our intentions, even as the AI modifies itself? What AI-related interventions are the most beneficial?