There are many questions about the extent of people’s duty to care for each other, but the existence of the duty is a foregone conclusion. Almost nobody, now or in the past, abandons the ill or disabled members of their group. Those who do are viewed as despicable subhumans.
Scientists, with their intense belief in the absence of a free lunch, explain that behavior by noting that it must improve overall group survival. That’s a bit like saying water is wet. If a costly trait, like helping others, does not help survival, the creatures who have it die out. If it does help, well, then they survive long enough to be around when scientists study the issue.
The other evidence that it’s good for the group is that countries with the most solid safety nets, the OECD ex-US, are also the wealthiest and best-run. Far from impoverishing the people because they’re wasting money on non-producers, it somehow enriches them. The clearest example is perhaps Botswana, discussed in the previous chapter, which went from poor with no safety net to richer with a better safety net. In the relationship between wealth and social safety nets, the point is not which comes first. If care comes first, apparently that increases wealth. If wealth comes first, providing care certainly doesn’t reduce it and does make life richer.
I’d argue, together with major moral philosophers, that we also have a moral duty to each other. It’s not only that care is hardwired into us or that there’s utility in security for all. We’re social animals. Our very lives depend on the functioning of our group, which makes it more than churlish to refuse assistance when a member of that group is in need. It’s cheating, a form of stealing, to refuse to reciprocate for benefits already received.
The sticking point is the delimitation of the group we’re willing to care for. Everybody includes their families. Most people include their immediate circle of friends and neighbors to some extent. Large numbers of people extend it to all citizens of their countries, but even larger numbers don’t. And some feel that way about all of humanity.
As a matter of fairness, care needs to extend to the whole group. I’d argue that means all humanity, but we don’t have coordination on a planetary scale yet; the largest group currently able to distribute benefits and costs consistently is the nation. Anything less than care for the whole group requires a double standard: one rule for me and another rule for others. Everybody in a position to do so takes whatever help they need when facing disease or death. Everybody, at that moment, feels help is a human right. If it’s a human right for one, it’s a human right for all.
There’s also a practical reason why care should be a function of government. The larger the group over which the burden of care is distributed, the smaller the cost borne by any one individual. The proportion of GDP paid in taxes is actually lower in some countries with medical care for all citizens (e.g. Australia, UK) than the equivalent expense in, for instance, the US, where citizens pay more per capita for those services and yet have poorer outcomes. (A reference summarizing the previous link.) Government is a more efficient distributor of social insurance than, in effect, requiring each family or small town to be its own insurer. The case is analogous to mass transit: a village would be crushed by the expense of building a complete modern mass transit system for itself alone, but when everyone pays into it the cost per individual is small and the benefits are much greater.
Providing care works best as a coordinated, distributed, non-profit system, which is exactly the type of endeavor government is designed to undertake. (Unlike defense, however, government doesn’t have to have a monopoly on care.)
I’ll spend a moment on the concept of moral hazard (as I have in an earlier post) since it has some followers at least in the US. The idea is that if someone else is paying, the normal limits on overspending are lifted and much money will be wasted. A recent example occurred in the financial industry. Institutions peddled much riskier instruments than they would have on their own account because they assumed that taxpayers would carry major losses should they occur. Much money was wasted. So moral hazard is a real problem. It’s just not a real problem in the realm of social insurance.
Social insurance is for things people would rather avoid, even if someone else pays for them. Nobody gets old for fun. Very few people go to doctors by choice (unless it’s for elective cosmetic treatments, and those aren’t covered in any system). Medical visits are a chore at best, and one most of us avoid no matter who’s paying for it. Nobody says, “Gee, I think I’ll check into the hospital instead of going to the beach.” So the motivation to spend other people’s money is simply not there on the part of the patients. The doctors can be another story, but that problem is created largely by faulty reward systems. At this point, ways of fixing those are known if people actually want to end the problem rather than profit from it.
It’s also worth pointing out that the consumer model of medicine is as much of a fantasy as the freeloader. When people need treatment it’s not usually a planned and researched event. Sometimes it’s even a desperate event. Very few patients know which treatments are available for them or which are best for their condition. There is no way, in that case, to “shop” for the best option. It’s a complete misapplication of a marketplace model, which presupposes equal information among rational and independent actors. Patients are not in a position to be choosy and are in a dependent relationship to vastly more knowledgeable experts. Basing actions on fantasy is, to use the metaphor one more time, like jumping from a third floor window on the assumption one can fly. It does not end well.
Two classes of patients who do cost more than they should are hypochondriacs and malingerers. Doctors are quite good at spotting the latter, and the former are a microscopic expense compared to the costs of trying to stop them from using the system.
There is a simple reason for that, and it’s not just the cost of all the bureaucratic gatekeepers. The main reason is that people don’t always know when they’re ill or how serious it is. That’s why they go to the doctor: because they don’t know. So any attempt to make it difficult or costly to get medical attention results in a significant number of people who don’t get it early. Untreated medical problems, except when they lead to immediate death, are always more expensive to treat later. Thus, counterintuitive as it might seem, it is more expensive to discourage people from spending other people’s money on doctors.
Common sense dictates the opposite because it seems obvious that paying for medical care will cost more than not paying for it. So any care has to cost more than no care. And that is indeed true. But it misses a crucial point: it’s very hard to watch people die in the street. Refusing to spend any money on care for others is only cheaper if one is willing to follow it through all the way to the logical conclusion. Without care, some people will die in inconvenient places. If you can’t stand the sight and send the dying to hospital, somebody will wind up paying. The amount will far exceed what it would have cost to prevent the problem in the first place.
The common sense intuition that it’s cheaper not to pay for care depends on being willing to live in a world where people suffer and die around you. Such places require psychological coping processes, mainly the invention of reasons why the victims deserved their fate so that one can feel safer in a terrifying world. The process necessarily feeds on itself and further blunts understanding — whether of people, of situations, or of effective solutions — until there’s none left. The real moral hazard of social insurance is not spending money and losing some of it. The real hazard is withholding it and losing everything else.
Given that there’s a duty to care for others, how big is it? Does it have priority over all other needs? Would that serve any purpose?
The answer is, of course not. The duty extends to what can feasibly be provided without destroying other essential aspects of life, given local technology and funds. If funds are limited, and they always are, some types of care have to be given priority over others.
Since social insurance involves spending other people’s money, the first criterion should be using it where it brings the greatest downstream benefits. That one rule indicates a well-known series of actions that all return hundreds of times their initial cost in subsequent benefits, both to the economy and to citizens’ quality of life. These include maternal and neonatal care, clean water, safe toilet facilities, bednets in malarial regions, vaccinations, prevention and treatment of parasitic diseases, and prevention of disease-producing vitamin A, vitamin B, and protein deficiencies. Using public funds to avoid much larger expenditures later is a clear case of investment, in the literal meaning of that word.
The next tier of public health is provision of emergency services, then basic hospital care, and then full-blown medical care. Spending money on all of these saves money down the road, but the more initially expensive services may nonetheless be out of reach for the poorest countries, even with intelligent priorities. That is a clear signal for international aid, I would argue, and not only for moral reasons but also for the purely practical one that health and disease don’t stop at borders. Effective treatment of TB anywhere, for instance, is prevention of TB everywhere.
Palliative care for the terminally ill has no financial downstream benefits, but a system without double standards will always provide it, regardless of the nation’s poverty level. An absence of double standards dictates that available treatments to alleviate suffering must be provided.
So far, I’ve been discussing government involvement that people welcome, even when they don’t welcome paying for it. But when public health measures are compulsory, that’s much less popular. A compulsory component is, however, essential. The right to be free from harm comes very near the top in the hierarchy of rights, and a careless disease carrier can certainly spread harm.
It’s become acceptable to respect people’s rights to their own beliefs over the right to be free from harm, which shows confusion about the relative rank of the two. Ranking a right to belief, even when it’s not counterfactual, above the right to be secure in one’s person will end in all rights being meaningless … including the right to one’s own beliefs. Freedom isn’t possible if freedom from harm is not assured. That’s generally obvious to people when the threat is immediate, and the confusion arises only because so many people no longer feel that lethal communicable diseases are their problem. So it has to be understood that the right not to catch diseases from others is of the same order as the right not to be harmed by others generally. The powers-that-be have the right to enforce whatever measures are needed to safeguard the public health.
That said, although the mandate to safeguard the public health is an ethical issue, how it’s done is a scientific and medical one. Effectiveness must be the primary determinant of action. Even though the government has the right to compel adherence to public health measures like vaccination or quarantine, if compulsion is not effective, the very mandate to safeguard the public means that compulsion should not be applied.
The fact is that medical compulsion makes sense only under rare and unusual circumstances, generally when there’s imminent danger of infection for others from specific individuals who are uninterested in or incapable of taking the necessary steps to prevent transmission. That situation can arise with any potentially epidemic disease, such as tuberculosis, especially extensively drug-resistant (XDR) TB, or Ebola virus disease. As Bayer & Dupuis note in the first linked article on TB, there’s a range of effective measures from directly observed ingestion of medication all the way up to involuntary detention. The deprivation of rights must be proportional to the degree of threat.
Public health is best served when people are willing participants in the process of preventing disease, and the best way to enlist cooperation is to give them what they want. At the simplest level, that’s information, and at the most complex, it’s treatment for the disease and assistance with any tangential consequences. The importance of helping rather than forcing patients is evident in any public health document. For instance, the CDC sheet on tuberculosis treatment mentions coercion only in the context of increased treatment failure due to noncompliance. Also, as should be obvious, high voluntary treatment adherence leads to lower costs.
Common sense plays us false again by insisting that it must be more expensive to persuade people than to dispense with all the extras and just force them to shut up and take their medicine. The fallacy in that is clear with a moment’s thought. A person facing what amounts to imprisonment, or any other negative consequences, will hide their disease — and spread it — as long as possible. It’s far more expensive to treat an epidemic in a whole population than it is to give even the most expensive treatment to the few people who start it. A person who can expect every possible assistance will go to be tested as quickly as possible, which costs less by orders of magnitude than dealing with an epidemic. (There’s also, of course, the benefit that far fewer people take time off work, suffer, or die.)
Vaccination faces different obstacles, but ones it is equally important for concerted government action to overcome in the interests of public health. For vaccination to prevent epidemics of highly infectious diseases (as opposed to conferring individual immunity), a high proportion of the population must be immunized. The number varies with infectivity and mode of transmission, but it’s on the order of 95%. Then, if the disease infects a susceptible person, chances are it will find only immune people around that individual and be unable to spread.
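The order-of-95% figure follows from simple arithmetic on the basic reproduction number R0, the average number of people one case infects in a fully susceptible population. A minimal sketch, using illustrative textbook-range R0 values rather than figures from this chapter:

```python
# Herd-immunity threshold: the immune fraction needed so that, on average,
# each case infects fewer than one other person. R0 values are illustrative.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction that must be immune to block sustained spread: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("pertussis", 14.0), ("polio", 6.0)]:
    print(f"{disease}: R0 = {r0:>4} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
```

For the most infectious diseases, like measles, the threshold lands in the low-to-mid 90s, which is why even small drops in coverage matter so much.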
That sets up an interesting tension. If vaccinations are compulsory for the purpose of public health, fairness requires the rule to apply to everyone equally. As a matter of practical fact, however, it doesn’t matter if a few people avoid it. How communities handle this depends to some extent on how much it angers them that some people can be exceptions. The most important factor, however, is the practical concern of encouraging maximum cooperation. The sight of people being dragged off to be vaccinated does nothing to teach people that immunization is actually a good thing to do. I would argue that the practical considerations indicate approaching the issue of exceptions as tolerantly as possible, and employing compulsion only when the public health is endangered. In other words, education about the benefits is more important than enforcement against people who don’t get vaccinated, unless the latter group is actually endangering others.
Education about vaccination — or any other fact-based issue of public significance — has to be understood in its broadest sense. It’s the sum of inputs people get on the topic, whether from news reports, ads, stories, school, or, last and least, government-produced information materials. All of these avenues need to respect the truth.
Requiring respect for the truth may sound like interference with the right to free speech, but I strongly disagree. On the contrary, the necessary restrictions actually support free speech by improving the signal-to-noise ratio. My argument is in the chapter on Rights, but the short form is that people are not entitled to their own facts, and that facts differ from beliefs and opinions because they’re objectively verifiable to an acceptable standard of certainty. Respect for the facts and its corollary, a relative absence of misinformation, together with useful information in school that is later reinforced by wider social messages, clear the air enough to enable people to make reality-based decisions when the need arises. Those who persist in counterfactual beliefs become a small enough population that workarounds are possible.
Getting back to vaccination, specifically, the question was how to handle the small minority of people for whom immunity is not required on grounds of public health. Exemptions could be given to those who want them, up to the maximum that is safe for epidemiological purposes. If demand is larger than that, the exemptions could be distributed randomly within that pool. (And it would indicate that education efforts need to be stepped up.)
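That allocation rule is simple enough to sketch in a few lines. The cap, the names, and the fixed seed below are illustrative assumptions, not anything specified in the chapter:

```python
import random

def allocate_exemptions(requests, safe_cap, seed=0):
    """Grant every request if demand is under the epidemiologically safe cap;
    otherwise draw the cap's worth at random from the pool of requesters."""
    if len(requests) <= safe_cap:
        return list(requests)
    # Seeded for reproducibility in this sketch; a real lottery would use
    # a publicly auditable source of randomness.
    return random.Random(seed).sample(list(requests), safe_cap)

requests = [f"person-{i}" for i in range(120)]
granted = allocate_exemptions(requests, safe_cap=50)
print(len(granted))   # exactly the safe cap: 50
```

Oversubscription, here 120 requests against a cap of 50, is also the signal the chapter mentions that education efforts need to be stepped up.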
Other public health measures need to follow the same principle. Compliance can be compulsory when that is essential to prevent harm to others, but given the nature of people and diseases, the system will work better and more cheaply if most compliance is willing. That means the vast majority of efforts need to be channeled toward ensuring that people understand the benefits of public health measures, even when they involve personal inconvenience or hardship. Compulsion needs to be reserved for cases of criminal negligence and endangerment. It needs to be the last resort, not the first, and it needs to be limited to the very few actually abusing the system, not the larger number who are only afraid of it.
Although not usually understood as medical, many functions of government have direct effects on health. Agricultural subsidies, urban planning, and mass transit all come to mind. Urban planning may sound odd, but the layout of neighborhoods, the proximity of stores, the ease of walking to mass transit, the availability of car-free bike routes, and the presence of parks all have a large effect on how much walking or other exercise people get in the course of everyday life. That’s starting to look like an underappreciated factor (pdf) in maintaining public health. Direct government support of exercise facilities is another example of a public health measure not usually included in that category. Those facilities could be public playgrounds, swimming pools, playing fields, gyms, dance schools, or just about anything that facilitates movement rather than sitting.
A problem with the current implementation of government policies that affect public health is a narrow definition of the field, which lets the health relevance of policies be overlooked. An obvious example in the US is corn-related subsidies. They were started to win corn-state votes (not their official reason, of course), and had the effect of making high-calorie corn-based ingredients cheap, which has contributed to a rise in calories consumed, obesity, and associated diseases. Separate from the issue of whether buying corn-state votes with taxpayer funds is a good idea, it’s definitely a bad idea to make people sick in the process. Any government activity with a bearing on health needs to be examined for its public health implications and implemented accordingly.
Moving from matters too mundane to be considered medical to those too new for their implications to be generally appreciated, the increasing availability of genomic information raises some fairness issues. (Discussed here earlier.)
The vision is that as we learn more, we can take preventive steps and avoid the ills we’re heir to. In the ads for the first crop of genetic testing companies, the knowledge about disease susceptibility is used to apply treatment and lifestyle choices that avert the worst. That’s not a hard sell.
But there are many other aspects of testing that are less benign. Some are personal, and hence not a direct concern here. For instance, what about diseases for which there is no treatment? In a system with true individual control over one’s own data (discussed under privacy in the second chapter), concerns about information falling into unwanted hands should be a thing of the past. The decision whether or not to be tested is still a difficult one, but also a purely personal one. The government’s responsibility ends at making sure that all tests have the patient, not profit, as their focus. An adequate medical system would require access to useful follow-up counseling for all tests. Allowing testing without comprehension is allowing some companies to use people’s fears to part them from their money. It’s hard to see how it differs from less technical scams.
The most problematic implication of genetic testing is that improved information about risk undermines the premise behind private insurance. The general idea behind insurance is that bad things don’t happen to most people most of the time. By taking a little bit of money from everybody, there’s enough money to tide over a few specific people who have problems. The bigger the group, the better this works.
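The “bigger the group, the better” claim can be made concrete: expected cost per member is the same in any pool, but the year-to-year swing in that cost shrinks with the square root of pool size. The probability and cost figures below are made up for illustration:

```python
import math

p, cost = 0.01, 100_000   # illustrative: 1% annual chance of a $100,000 event
mean = p * cost           # expected cost per member: $1,000, whatever the pool size

# Standard deviation of one member's cost, from the Bernoulli variance p(1-p).
sigma = cost * math.sqrt(p * (1 - p))

# Averaged over n independent members, the swing falls as 1/sqrt(n).
for n in [10, 1_000, 1_000_000]:
    swing = sigma / math.sqrt(n)
    print(f"pool of {n:>9,}: expect ${mean:,.0f} +/- ${swing:,.0f} per member")
```

A pool of ten faces swings of thousands of dollars per member; a pool of a million faces swings of about ten dollars, which is what makes a predictable premium possible.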
However, private insurance companies necessarily insure only a subset of the population. If the risk pool is smaller than everybody, then the best thing a company can do to improve profits is to get rid of bad risks. Hence, the better the tools and the more accurate the risk assessment, the less private insurance will actually function as insurance, i.e. as a way of diluting risk. We can try to patch that with regulations and fixes, but the underlying gravity will always work in the same direction. Insurance companies will use testing to slough off bad risks. They have so much to gain from a more accurate assessment of risk, that I’d be willing to bet they’ll be among the earliest adopters of diagnostic genome scans. In places with private medical insurance, it won’t just be former cancer patients who are uninsurable.
The inescapable implication is that genetic knowledge works to the individual’s benefit only in a national or supranational health care system. Anything less, any ability to exclude some people from the pool will, with improved knowledge, end in so many exclusions that there is no pool and hence no real insurance. Thus, there’s yet another practical reason why a national medical system is not only a good idea, but a necessary one.
The most difficult question is testing for diseases which do have treatments. They force a choice about who controls treatment decisions. The obvious answer — the patient decides — is easy when all parties agree. But in many cases they don’t. Assuming a national medical system funded by taxpayers, there’s a mandate not to waste money. Then assume genetic testing that can accurately predict, say, heart disease risk. (We’re not there yet, but we’ll inevitably get there some day, barring a collapse of civilization.) Exercise is a demonstrated way to improve heart health. So, should everybody exercise to save taxpayers (i.e. everybody) money? Should exercise be compulsory only for those in the riskiest quintile? Or the riskiest half? How much exercise? Which exercise? Will those who refuse be given lesser benefits? Or what of drugs, such as statins, that reduce some cardiovascular risks? If people are forced to take them, what does that do to the right to control one’s own body? And that doesn’t even touch on who’s liable if there are eventual side effects. More broadly, do we have the right to tell some people to lead more restrictive, healthier lives because their genes aren’t as good? What level of genetic load has to be present before prevention becomes an obligation? What happens when the medical wisdom changes about what constitutes prevention?
The questions, all by themselves, show how impossible the choices are. Add to that how counterproductive compulsion is in medicine, and it becomes clear that the idea of vesting treatment control in anyone but the individual is absurd.
That does mean taxpayers have to foot the bill for the stupid behaviors of others, but there are two reasons why that’s a better choice than the alternative. The first is that everyone, for themselves, wants to be free to live life as they see fit. Fairness demands that others be treated as we ourselves want to be treated, so control needs to rest with the individual on that basis alone. The second is that the choice is not between wasting some money on the unwise or saving it for the more deserving. The choice is between wasting some money on the unwise or wasting vastly more on an unwieldy system of oversight. The ridiculous endpoint is a webcam in every refrigerator and cars that won’t start if the seat senses the driver has gained too much weight.
It’s important to remember the nature of the real choice because any system of public universal health care will naturally tend toward preventive measures. In a private system, the incentives promote the exclusion of riskier members and a focus on diseases because that’s where the profits are. (Discussed at more length here). In a public system, exclusion is not an option and prevention is cheaper than cure, so the focus is on, or will tend to, preventing disease in the first place. That’s a very good thing, except if it’s allowed to run amok and trample crucial rights to self-determination in the process. So, again, it’s important not to force preventive measures on people even though the system is — rightly — focused on prevention.
However, just because preventive measures for non-infectious diseases can’t be forced, that doesn’t mean prevention can’t be promoted. As I discussed under vaccination, fact-based information can certainly be presented. Education and social planning that facilitate healthy living have to be the tools to promote prevention.
Care of the elderly is another major area of social responsibility. From one perspective, it’s the disabilities of age that require help, which would make retirement a subset of assistance to the disabled generally. In other ways, however, retirement is supposed to be a deserved rest after many decades working. Those are two separate issues.
The main difference between them is that helping the disabled is a universal social obligation, but providing a reward for a life well spent really isn’t. That kind of reward also raises some fairness issues. If the reward is a right, and rights apply to everyone equally, then everyone should get that reward. But life span is not a known quantity. There’s no way to calculate a universally applicable number of years’ rest per number of years lived. A method that relies on the luck of the draw to get one’s rights may work after a fashion, but it can hardly be called fair.
Retirement based on the proceeds from investments is not the topic here. Obviously, anyone can invest money and try to acquire independent means, but that’s not an option for everyone. Considerable starting capital is needed, or the ability to save large sums of money over a period of decades. That’s especially true in a system where interest rates are limited by degree of risk and wealth creation. For instance, assuming a 3% rate, one would need around $800,000 to receive $25,000 per year, if that were the living wage. A person making $25,000 who had no starting capital and received 3% interest would have to save somewhat over $10,000 per year to have a large enough nest egg after some 40 years. Although that’s perhaps not physically impossible, it would preclude a family or any other interests. It is, in any case, not the sort of sacrifice that people would generally choose for themselves. On the other hand, if a person has somewhat over $200,000 to start with, then a rather high savings level of $1800 per year (for retirement alone) over 40 years will also yield $800,000. Not everyone has $200,000 to start with, and fewer still have no other pressing needs for the money before reaching retirement.
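The arithmetic above can be checked with a short compound-interest sketch. The 3% rate and the round dollar amounts are the chapter’s; the exact savings figures plugged in are my own ballpark assumptions:

```python
def future_value(start, annual_saving, rate, years):
    """Grow a lump sum at `rate`, adding `annual_saving` at each year's end."""
    balance = start
    for _ in range(years):
        balance = balance * (1 + rate) + annual_saving
    return balance

# Capital needed to draw $25,000/yr from 3% interest: 25,000 / 0.03 ~ $833,000.
print(f"target: ${25_000 / 0.03:,.0f}")

# Two routes to roughly $800,000 after 40 years at 3%:
print(f"no capital, ~$10,600/yr saved: ${future_value(0, 10_600, 0.03, 40):,.0f}")
print(f"$210,000 start, $1,800/yr:     ${future_value(210_000, 1_800, 0.03, 40):,.0f}")
```

The first route demands saving over 40% of a $25,000 income every year for four decades, which is the sacrifice the text notes would preclude a family or any other interests.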
Self-funded, investment-based retirement is available only to people with money, who can certainly try to follow that course. From a government perspective though, when rights are the concern, retirement based on individual investment is irrelevant because it can’t be universal.
Given that there’s no social obligation to pay a guaranteed annual income to able younger people, it’s hard to see why there should be one to able older people. However, a number of factors muddy the issue. Older workers are less desirable in work requiring speed or stamina. That can lead to real issues in keeping or finding work as people get older, since any job, physical or not, requires some stamina. At a certain point, and the point varies with the individual, it’s better for everyone if retirement is an option.
Recognition of the reduced employability of the elderly could be expressed in much looser “disability” standards as people get older. Given that people differ, it should be a gradual scale, with a reduction in standards beginning, say, at forty and reaching completely self-defined disability at five years less than average life expectancy. In other words, the older one gets, the more weight is given to one’s own assessment of ability to work and the less to medical input.
Retirement in that system would be very different from what it is today, except for the most elderly. Much younger people in chronic poor health could elect to retire early with medical approval. Much older people could keep working if they wanted to. Mandatory retirement would not be a factor, which is as it should be if individual self-determination is important.
Retirement also does not have to be an all or nothing system. Hours worked could be reduced gradually. People who no longer have the capability to work full time could reduce their hours, and pension payments could make up the difference to a living wage. Where possible, there should also be rules to enforce accommodation for workers who need it. That applies not only to the elderly, but also to the disabled and to parents who need alternating shifts.
The proportion of people drawing pensions in a flexible system oriented to individual health might increase or decrease compared to our current situation; without access to the necessary detailed information, it’s hard to tell. My guess is that it would decrease quite a bit. Unlike a 40-hour work week, a 24-hour week would be within the stamina of a much larger number of people. So the proportion of retirees supported by the working population might be smaller than under the current system.
I suspect that a big part of eventual resistance to retirement benefits based on reduced stamina would come from the feeling that after decades as wage slaves, people deserve some time of their own. In other words, it’s based on the retirement-as-reward model. However, I also suspect that the shorter work week would reduce the power of that model considerably. A 24-hour week leaves enough time for interests besides work and means that unsatisfying jobs occupy less of one’s life. That may mean less need to retire just to escape a bad job or to “have a life.”
Mandatory age-specific retirement does have one useful social function. It forces change. It prevents ossified codgers from taking up valuable space people require for other purposes. And, if done right, it’s also good for the codgers by shaking them out of ruts. I’m not aware of studies proving that the leavening effect of retirement is necessary for social health, but my guess is it’s good for us.
I suspect that would be even truer if and when we have longer life spans. Nobody could stay active, creative, interested, or even polite, in the same job for a hundred years. Something like the academic tradition of sabbaticals would be necessary for everyone. In the popular misconception, those are long vacations, but in reality they involve different duties rather than time off. In this case it would be a time to reorient and to look for work in a new field, or for a different job in one’s old field. With a 24-hour work week, there is enough time to take a few years for retraining ahead of the required transition. If the rule was to take a year off after every thirty, in a type of socially funded temporary “retirement,” and to return to a new situation, there’s no question it would have a leavening effect.
There is a question about how it could work in practice. People starting a new career after leaving another at the top would have lower salaries. There’s generally huge resistance to any change involving less money. Those at the top of the tree could easily game the system. For instance, the Russian limit on Presidents is two consecutive terms, so Putin came back as Prime Minister instead, with the help of his cronies, until he could return to the presidency. The U.S. military, to give one example, approaches both problems — people serving too long or gaming the system — by having the option of retirement after 20 years of service and no option to return to similar rank. However, I doubt very much that any society could afford fair pensions, i.e. ones that equalled a living wage, for everyone after 20 years of work. The leavening effect would have to come from changes to other jobs.
I see the financial aspect of the requirement to switch jobs as follows, although what actually works would have to be determined as the ideas were applied. People who haven’t had a major job change in 30 years (i.e. a change other than promotions or transfers) would prepare for it much as they do now with retirement. They’d retrain ahead of time, if they wanted, and they’d save some money over the years to cushion the transition. For average earners, interest payments wouldn’t equal a living wage, but they’d help ease a lower starting salary in a new job. Further, given flatter salary differences, a lower starting income would likely still fall within the range of middle class pay. Those for whom the difference is large would be the high earners, and it doesn’t seem unreasonable to expect them to save more and cushion their own transition. The process would be much the same as the way people plan now for lower incomes in retirement.
In summary, retirement under a system that tries to apply equally to all would look a bit different. Disability would be the primary determinant of age at retirement, but with rules for determining disability adapted to the realities of old age. Mandatory retirement at a specific age can’t be applied equally, but having some mechanism that requires job changes is probably necessary for social health. A year “sabbatical” every thirty years or so to facilitate and reinforce the shift to new work should be funded as a form of retirement in the broad sense.
The obligation to care for people with disabilities covers a wide spectrum. At one end it means nondiscrimination and the requirement to provide simple accommodations as needed. At the other end are those who require 24-hour nursing care. That spans a range of services, starting with simple information, and continuing through increasing levels of assistance as needed. From a government perspective the issue is coordinating very different functions so that they’re matched to variable individual needs without waste. The social and medical science of delivering care is, of course, far more complex, and is beyond the focus here.
Waste can arise because people overuse the system, and that problem gets popular attention. It can also arise from management inefficiencies that don’t properly match services to needs, which is a much more boring topic. The money wasted is, as usual, orders of magnitude greater in the boring zone. The good news, however, is that since much of the waste is caused by the government’s lack of responsiveness to the needs of the disabled, it would be amenable to solution in a system with transparency, effective feedback methods, and administrative accountability. Medical and social science, as well as the disabled themselves, can determine which aid is needed and when. The government’s job is to streamline distribution and delivery of services so that they’re matched to the individual’s current needs rather than the government’s administrative ones.
As with all other aspects of social care, the expense of the assistance that can be provided depends on a country’s wealth. Accommodation, home help to enable the disabled to live outside institutions, and any devices to assist independent living, are not expensive and return a great deal of value to the disabled themselves and to everyone else by retaining more productive members of society. Expensive medical treatments might be out of reach in some situations, but if government were doing everything possible to coordinate an easier life for the disabled in other respects it would be a big step forward from where we are now.
Facilitating child care is directly relevant to social survival, and it’s not hard to make the case that it’s society’s most important function. Logically, that should mean children (and their parents) get all the social support they need. In practice, the amount of support follows the usual rule of thumb in the absence of explicit rules: those with enough power get support, and those without, don’t.
Defense, the other social survival function, can provide useful insights into how to think about child care. Placing the burden of child care solely on the family and, more often than not, on the women in the family, is equivalent to making defense devolve to tiny groups. It’s equivalent to a quasi-gang warfare model that can’t begin to compete with more equitably distributed forms. When it comes to defense, it’s clear that effectiveness is best served by distributing the burden equally in everyone’s taxes. That is no less true of rearing the next generation, which is even less optional for social survival.
Just as having a police and defense force doesn’t mean that people can’t resolve disputes among themselves, likewise having social support where needed for children doesn’t mean that the state somehow takes over child care. It means what the word “support” always means: help is provided when and where parents and children can use it. It’s another task involving long term coordination among all citizens for a goal without immediate profit, precisely the type of task for which government is designed.
The state’s responsibility to children extends from supporting orphans and protecting children from abuse up to more general provisions for the growth and development of its youngest members.
Barring overwhelming natural disasters, there are no situations where it’s impossible for adults to organize care for children. There is no country in the world, no matter how poor, that doesn’t have sufficient resources to care for orphans. That’s a question of will and allocating resources. For instance, there is no country with orphans that has no standing army. There is no country with orphans without any wealthy people. People make decisions about what is important, and spend money on it. But that’s not the same as being unable to afford care for orphans.
Judging by people’s actions when directly confronted with suffering children, few people would disagree about the duty to care for them. But on a less immediate and more chronic level, distribution of the actual money doesn’t match either instinct or good intentions. As with everything else, meeting obligations depends on rules that require them to be met. Otherwise the powerless have no recourse, and there are few groups more powerless than children.
Children’s right to care parallels the adult right to a living, but it also has important additional aspects beyond the satisfaction of physical needs. Children have a right to normal development, which means a right to those things on which it depends. Furthermore, individuals vary, so provisions for nutrition, exercise, medicine, and education need to be adjusted for individual children. Specifically with respect to education, children have the right to enough, tailored to their talents and predilections, to make a median middle class living a likely option.
Provision of care is a big enough task that the discussion often stops there. I’ll address care itself in a moment, but first there’s a vital practical factor. Effective child advocates are essential to ensure that the right to good care is more than words. Children with parents presumably have backers to make sure they get what they need. For orphans, or those whose parents don’t fulfill their duties, there need to be child advocates, people with the power to insist on better treatment when there are inadequacies.
Advocacy needs to be the only task of the advocates. If they are also paid care providers, administrators, or have other responsibilities in the state’s care of children then there is an inevitable conflict of interest. If a child’s interests are being shortchanged, the people doing it are necessarily adults, possibly even the advocate him- or herself in their other role. Given the social weight of adults versus children, the child’s needs are the likeliest to be ignored if there’s any divergence of focus. In order for the advocates to truly represent the children under their oversight, they cannot have conflicting priorities.
Another essential element for good advocacy is a light enough case load that real understanding of a child’s situation and real advocacy is possible. Given a 24-hour work week, that might be some ten children per advocate.
The advocates are public servants and, as such, would be answerable to their clients just like any other public servants. Other adults, or even the child as she or he gets older and becomes capable of it, can call the child’s representative on shoddy work. However, since the clients are children, contemporaneous feedback is unlikely to ensure that the advocates always do their jobs properly. An added incentive should be that children can use the benefit of hindsight if the advocate has been irresponsible. In other words, children who are not adequately cared for by the state can take the responsible representatives to court, including the advocates who neglected them, and call for the appropriate punishment. Thus, being a child advocate would carry much responsibility, as it should, in spite of the light workload.
Given the potential for legal retribution, the expectations for what constitutes good care need to be stipulated clearly for all paid carers. If the state provides the types of care shown to have similar outcomes to families, and if the carers meet their obligations, that would be an adequate defense in eventual suits.
I don’t see these same constraints — outside oversight of and by advocates and the eventual possibility of legal retribution — as necessary for parents, whether birth or adoptive. In the case of criminal negligence or abuse, the usual laws would apply and parents could expect the punishment for criminal behavior. But there isn’t the need for the same level of feedback about subtler neglect because parents aren’t likely to decide that it’s five o’clock on a Friday, so they’ll forget about their kids until some more convenient time. Paid government functionaries, on the other hand, could be expected to often consider their own convenience ahead of the child’s unless given real motivation to do otherwise. The mixture of rewards and penalties suggested here is just that, a suggestion. Research might show that other inducements or penalties were best at motivating good care for children. However it’s accomplished, the point is to ensure good care.
There’s nothing new in the idea of child advocates. They’re part of the legal system now. But they’re woefully overworked and under-resourced, which limits their effectiveness. In a system with realistic case loads and built-in feedback, implementation should be more effective. It’s very important to get the advocacy component right, because even the best laws are useless if they’re not applied. Children are not in a position to insist on their rights.
Child advocates have another important function. A point I made earlier in Chapter 4 on Sex and Children is that one of the most important rights for children, practically speaking, is the right to leave a damaging family situation. Application of that right is likely to be problematic whether because children don’t leave situations when they should, or because they want to leave when they shouldn’t. Adult input is going to be necessary for the optimum application of children’s rights. The child advocates would be in the front lines of performing this function. Any adult in contact with the child could do it, but it would be part of the official duties of the advocates. They would assess the situation, call in second opinions as needed, and support the child’s request, or not, depending on the situation.
The definition of adequate care meets with a lot of argument. For some, only family placement is adequate, even when there are no families available. The consequence, at least as I’ve seen it in the U.S., is to cycle children in and out of foster homes like rental furniture. Somehow, that’s deemed better than institutional care.
The definition of adequate care by the state has to take into account the reality of what the state can buy with well-spent money. Nobody, not even a government, can buy love. So it is pointless to insist that adequate care must include a loving environment, no matter how desirable such an environment is. That simply can’t be legislated.
What can be legislated is a system of orphanages and assistance for families who want to adopt. The word “orphanage” tends to raise visions of Dickensian horrors crippling children for life. Orphanages don’t have to be that bad.
The studies I’ve seen showing the damaging long term neurological and behavioral effects of foster and institutional care don’t separate the effects of non-parental care from the effects of unpredictable caregivers, anxiety-inducing uncertainty about one’s future, neglect, bad care, and downright abuse. Bad factors are more common in non-parental care, and bad factors are, well, bad. It’s not surprising studies would show such care is not good for children.
An adequate standard of care would meet a child’s basic needs, and those are not totally different from an adult’s. Children need a sense of safety, of comfort in their situation. Stability, which families generally provide, is an important component, and that’s one reason family situations are supposed to be better than institutions. But stability, by itself, is only slightly preferable to chaos in the same way as prison is preferable to a war zone.
At least as important as stability — possibly the main reason stability feels comfortable — is the feeling of control it provides. Having no idea what will happen next, which is a very fundamental feeling of powerlessness, causes fear, frustration, and anger in everyone, at any age.
That is where birth families have a natural advantage. Since the child was born into that situation, it feels normal and isn’t questioned. So long as things continue as they were, the world seems to operate on consistent rules, one knows what to expect, one can behave in predictable ways to achieve predictable results, and there’s some sense of control over one’s situation. This is why even bad situations can seem preferable to a change. The change involves a switch to a new and unfamiliar set of conditions, the same behavior no longer leads to the expected consequences, and there’s a terrifying sense of loss of control and of predictability. That’s bad enough for an adult with experience and perspective. For a child with no context into which to place events, it’s literally the end of the world.
I haven’t found sociological studies that place the question of foster care in the context of a child’s sense of loss of control because it’s axiomatic that children are powerless. I’m saying that they shouldn’t be.
Given that children, like adults, need stability and a sense of control in their lives, there are implications for what it means to have child-centric laws. Children have rights, and rights are fundamentally rules that give one control over one’s own situation compatible with the same degree of control for others. There have to be some differences in implementation because of children’s lack of experience, but the fundamental idea that children have the right to control some fundamental aspects of their own lives must be respected. That means children should not be separated from people they care about, and that they should be able to separate from those who are causing serious problems for them.
With the very basic level of control of being able to leave damaging situations, children’s main need is for stability in a benign environment with someone they care about. They don’t actually need their birth parents. They only seem to because birth parents are usually the best at providing that environment. If they don’t, and somebody else does, children are far better off with the somebody else rather than dad or mom, no matter what the birth parents want. Child-centric laws would give more weight to the child’s needs in that situation than the parent’s. The current focus on keeping children with their birth families under any and all circumstances is misguided. The focus should be on keeping children with the people they, the children, care for the most.
Specific people may be beyond the state’s power to provide due to death, disaster or abandonment, but a stable benign environment can be achieved. One way is by facilitating adoption. The other is by providing a consistent, stable, and benign institutional environment.
If there are relatives or another family where the child wants to stay and where the adults are happy to care for the child, then that transition should be facilitated. The child should be able to live there without delays. The child advocate would have a few days, by law, to make an initial check of the new carers, and during that time the child could stay in the facilities for orphans. The longer official process of vetting the new family and making the adoption official would then, by law, have to take place within weeks, not years.
There also needs to be a fallback solution if no family situation is available. Likewise, there needs to be a place for children who have to leave their families and have no other adults who can care for them. In my very limited personal experience, systems that mimic family situations seem to work about as well as ordinary families do. I’ve seen that in the Tibetan Children’s Villages. They’re set up as units with a consistent caregiver for a small group of children (about five or six). By providing decent working conditions and good salaries, turnover among caregivers is reduced and reasonably consistent care is possible to achieve.
It’s not the cheapest way to provide care, but saving money is a lower priority than enabling children to grow into healthy adults. Longitudinal studies of outcomes may show that other arrangements work even better. My point is that there are situations with paid caregivers and consistent environments that seem to work well. The solutions that are shown to work, those with a similar level of success as families at allowing children to grow into stable and competent adults, are the standard of care the state can provide, and therefore should provide.
The problem of dysfunctional parents leads to the difficult question of revoking parental rights. In a system where children have rights and can initiate separation from dysfunctional families, borderline cases become easier to decide. If it’s not a good situation, and the child wants to leave, the child can leave. The situation does not have to reach the more appalling level seen when the impetus to remove the child comes from the outside, as it always does now.
If the child is not the one triggering a review of the parents’ rights, then the decision on whether to revoke becomes as difficult as it is now, if not more so.
The child’s needs, including what the child her- or himself actually wants, have to be the primary deciding factor. Even dubious situations, barring actual mental or physical abuse, have to be decided according to what the child genuinely wants. It may go without saying that discovering the child’s wishes requires competent counselors to spend time with the child. Also, since the child has rights, she or he could ask for another counselor if the first one didn’t work out. Only in cases of actual abuse could the child be taken out of a situation even without their request. The reason for that is probably obvious: abuse, whether of adults or children, tends to make the victim feel too helpless and depressed to leave. Physical condition of the home would not be a reason to place children elsewhere. It would be a reason to send in a professional to help with home care, but not one to separate families.
Care of infants
There’s a systemic problem when children’s input is part of the process of enforcing their rights: those who can’t articulate any input, such as infants, require special rules. I’ll discuss that group, which in some respects also includes disabled children with similar mental capacity, and then return to the care of children generally.
Infants have a number of special needs. They can’t stand up for themselves in any way, so they must have others who stand up for them. Their physical development is such that incorrect handling, even when it would be physically harmless for an older child, can lead to permanent disability. And they’re at such an early stage of development that bad care causes increasing downstream harm. So they need skill in their carers. The harm caused is permanent, so preventing it has to be the first priority, one that trumps parental rights.
That last may seem particularly harsh in the current view where children have no rights. However, as discussed in the second chapter, rights have to be balanced according to the total harm and benefit to all parties. The right to be free from interference ends at the point where harm begins. Harm to infants and children is a more serious matter than harm to adults — more serious because children are more powerless and because the downstream consequences are likelier to keep increasing — and the adult’s rights end where their concept of care or discipline harms a child. The child doesn’t have more rights than an adult in this respect, only the same rights. The extent to which the reduction in parental rights seems harsh is a measure only of how much children are truly viewed as chattel.
Prevention is always better than cure, but in the case of infants it’s also the only real alternative. There is no cure for a ruined future. The good news is that there are a number of proven measures that reduce the incidence of neglected or damaged children. Almost nobody sets out to have a child on purpose in order to abuse him or her, so the human desire to do the right thing is already there. It’s only necessary to make it easy to act on it. For that, people need knowledge, time, and help in case of extreme stress. All of these can be provided.
A semester’s worth of parenting class must be an absolute requirement in high school. Students who miss it could take a similar class in adult education. It should teach the basic practical skills, such as diapering, feeding, clothing, correct handling, and anger management. There should be brief and to the point introductions about what can be expected of infants cognitively and emotionally, and when. There should be programs that allow students who want more extensive hands-on experience to volunteer to work with children. Those who come to this required class with plenty of life experience on the subject — through having been the eldest in an extended family, for instance — could act as student-teaching assistants in the areas where they’re proficient. Nobody, however, should be excused or everybody would claim adequate experience. It’s an odd thing about people, how near-universal the drive is toward parenthood and yet how common the desire is to avoid actual parenting.
Since taking the class is a prerequisite to having a child, there need to be sanctions for those who insist on ignoring it. On average the likeliest consequence of skipping the class will be nothing. Offspring have been born to parents with no previous training for over five hundred million years. However, in a small minority of human parents, the lack of knowledge and practice will lead to children who become a greater social burden. That costs money, so it’s only fair to make those who cause the problem share in its consequences. Yearly fines should be levied on those who have children without passing the parenting class, until they do so. The assessment should be heavy enough, i.e. proportional to the ability to pay, that it forms a real inducement to just take the class. The amount, like other top priority fines, would be garnished from wages, government payments, or assets, and would apply to both biological parents. If one parent makes the effort, but the other doesn’t, the fines would still apply to the recalcitrant one even if they’re an absent parent.
Not everyone learns much in classes, even when they pass them, so there would no doubt be a residual group of parents who find themselves dealing badly with infants. For that case, there needs to be an added layer of help to protect infants. One program that has been shown to work is pairing a mentor with parent(s) who need one. Those programs come in different forms and go by different names, such as Shared Family Care in Scandinavia, Nurse-Family Partnership in the US, and similar programs (e.g. 1, 2, 3). Parents could ask for that help, which would obviously be the preferred situation, or any concerned person, including older children, could anonymously alert the local child advocates to look into a specific situation. Professionals who notice a problem would be under an obligation to report, as discussed below and in Chapter 4.
Time is a big factor in successful parenting, in many ways even bigger than knowledge. A fair society with an equitable distribution of money, work, and leisure that results in a parent-friendly work week, such as one of 24 hours, would presumably go a long way toward enabling people to actually care for their children. In two-parent or extended families, alternating shifts would make it easier for at least one of the parents to be there for the children at all times.
Caring for infants, however, is full time work in the literal meaning of the term. There is no time for anything else. So parental leave needs to be considered a given. The amount of time best allotted could still bear some study, but judging by the experience of countries with leave policies, somewhere between one and two years is appropriate. The time would be included in planning by employers, as they do now in a number of OECD countries, or the same way they accommodate reservists’ time in the military. Those without children who want to use the benefit could take leave to work with children or on projects directly for children.
Statistics from countries with good parental leave policies, with economic and medical systems that reduce fear of poverty and disease, and with some concept of women’s rights, show that infant abuse or neglect is a much smaller problem than in countries without those benefits. However, even in the best situations, there will still be individuals who find themselves stressed beyond their ability to cope. The system of care needs to include emergency assistance along the lines of suicide hot lines — call them desperation hot lines — where immediate and effective help can be forthcoming on demand.
Which brings me to the horrible question of what to do when all prevention fails and abuse or neglect of infants occurs. The early warning system of concerned neighbors, relatives, or friends needs to be encouraged with appropriate public health education campaigns, so that people know what the signs of actual abuse are and so that the social discomfort of “meddling” is known to be less important than protecting infants. All professionals in contact with the child, whether daycare workers, pediatricians, nurses, clergy, or others, would have a legal obligation to alert child advocates to potential problems. Then the advocates could look into it, and take steps to remove the infant to one of the children’s villages before permanent damage occurred. If the parent(s) wanted to retain custody, they would need to demonstrate a willingness to learn what the problem was, to correct it, and to retain a mentor until the infant was out of danger.
Revocation of parental rights is a very serious step which might be necessary after review by child advocates and other involved professionals. To ensure that the parent(s)’ side was represented, each of the two parents concerned could appoint one member of the eventual panel, and, as with other legal proceedings, decisions could be appealed. One difference is that quick resolution is even more essential when infants are involved than in other legal cases.
Finally, who decides on the fate of an infant so badly damaged by abuse that they’ve been severely disabled? This is not as uncommon a situation as one might think because it doesn’t necessarily require much malicious intent. An angry parent who shakes a baby can cause permanent and severe brain damage. Parents without a grasp of nutrition can cause malnourishment that leads to retardation. If the worst has happened, then should those same parents be the ones to decide on the child’s future? The answer is obviously not, at least not in any system where children aren’t chattel. The parents would have their parental rights revoked (besides being subject to legal proceedings for criminal abuse), and decisions about the child’s care would be given to a court-appointed child advocate. If the infant is so damaged they’ve lost higher brain functions and would actually suffer less if treatment were withdrawn, then a panel of child advocates should make the decision.
When a child can leave a family and move either to another one or to state care facilities, a gray area develops. Since it will inevitably take some time for the official situation to catch up to the real one, responsibilities for care decisions and financial support need to be made explicit.
There is a spectrum of parental rights and different degrees of separation. The situation is more analogous to divorce than to death, and needs similar official nuances to match it. In the earliest stages of separation, day-to-day care decisions would have to rest with the actual caregivers, but longer-term decisions could be put off until the situation clarified. If the relationship between child and parents deteriorated rather than improved, the child advocate(s) would find it necessary to transfer more decision-making to the new caregivers until in the end full rights would be transferred. None of it would be easy, and the specific decisions would always depend on the individual situations, but that doesn’t mean it’s impossible. These things can be worked out. They can be arranged between divorcing adults, and the situation with children who need a separation is not fundamentally different. The main point is that the child should have some input into and sense of control over their own fate.
In the case where a child has left a problematic home and the parent(s) are still living, the question of child support arises. In the very short term, the parents would be liable for reasonable payments for food, in the longer term for a realistic amount that covers all expenses. Once a child had left home, that amount would be paid to the government’s children’s bureau, which would then disburse it to the child’s actual caregivers. An arrangement via a responsible and powerful intermediary is necessary since the child is unable to insist on payment and since it isn’t right to burden the caregivers with financial or legal battles with the parents in addition to their care of the child. Since payment goes to a government agency, non-payment would have much the same consequences as non-payment of taxes. The amounts would be deducted from wages or other assets.
An issue that arises repeatedly whenever money is involved is that the money, rather than the child, becomes the focus. In current custody battles, it's not unknown for one parent to demand some level of custody mainly to save or get money. And yet some form of support payment needs to be made. It's hardly fair for parents to get off scot-free when they're the ones who created the problem that drove the child to another home. I'm not sure how that contradiction can best be resolved. It's easy enough, when caregivers are personal acquaintances, to see which of them are concerned about the children and which are mainly interested in the check. Possibly the difference would be evident to child advocates or other professionals, who could include it in the record. More important, children have rights in this system: they can contest poor treatment, and they have accountable advocates whose only job is to make sure they get good treatment. Possibly that would keep a brake on the more mercenary caregivers. If it doesn't, better methods have to be found. The point is to approach as closely as possible the goal of a stable, benign environment for the child.
State care of and for children runs into money. Child advocates, orphanages, and the bureaucracy to handle payments and disputes all need to be staffed, or built and maintained. The right of a child to a fair start in life is one of the obligations that must be met, like having a system of law or funding elections. It has to come near the top of the list of priorities, not the bottom. In terms of actual money, taking care of children without a home is not nearly as expensive as, say, maintaining a standing army. No country seems to have conceptual difficulties paying for the latter, so with the right priorities, none would have difficulty with the former either.
General social support for parents is a less acute subject than care of orphans, but it has an effect on almost everyone’s quality of life. Social support implies that when nothing more than some flexibility and accommodation is needed, there’s an obligation to provide it. Parents who need flexible hours or alternating shifts should be accommodated to the extent compatible with doing their jobs. Different employers might need to coordinate with each other in some cases. If the systems are in place to do it, that wouldn’t be difficult. Another example is breastfeeding in public. Breastfeeding can be done unobtrusively, and it is simple facilitation of basic parenting to avoid unnecessary restrictions on it.
When parents need time off to deal with child-related emergencies, rather more effort is involved for those who have to pick up the slack. Unlike parental leave for newborns, but like any family emergency, this is not a predictable block of time. Family leave for emergencies needs to be built into the work calendar for everyone, together with the recognition that those with children or elderly dependent relatives are going to need more time. As with parental leave, those without a personal need for it who would nonetheless like to take it can use the same amount of time to work directly with children or the elderly.
Parental and family leave are social obligations borne by everyone, so the funding should reflect that. Employers, including the self-employed, would be reimbursed out of taxes for the necessary wages, at a living-wage level. Employers could, of course, elect to add to that sum, but the taxpayers' responsibility doesn't extend to replacing high wages, only to providing the same adequate standard of support to all.
The need for child care in the sense of day care may well be alleviated by a parent-friendly 24-hour work week. However, there is still bound to be some need for it since children require constant watching, and since some parents are single. Day care service could run the gamut from government-provided crèches where parents can drop off infants or toddlers on an as-needed basis (a service available to some in Australia, for instance) to nothing more than a government role in providing coordination for parents to form their own child care groups.
(I know that in the US the immediate objection to the latter will be how to determine liability. The US has gone so liability-mad that ordinary life is becoming impossible. There always has to be someone to sue, and there always have to be bulletproof defenses against suits. I see nothing fair in that and don't see it as part of a fair society. Criminal negligence — an example in this context might be undertaking child care and then gabbing on a phone while the children choke on small objects — would be pursued like any other crime. Vetting carers ahead of time, and the knowledge that crimes will be met with criminal penalties, are the tools to prevent criminal behavior, just as similar constraints operate in other work. As with other crimes, damages are not part of the picture; punishment is. If harm requiring medical treatment occurred, that would be covered by general medical care.)
The government's responsibility to inspect and certify increases with the formality of the arrangement. Who and what is certified, and which aspects individuals need to vet for themselves, have to be explicitly stated. Costs to the taxpayers vary with the degree of government involvement. At the low end, the government acts as a coordinator of volunteers and has minimal responsibility, something even very poor countries can afford. At the high end are government-run day care and preschool centers designed to contribute to the child's healthy growth and education.
I’ve mentioned what facilities for children could look like. In the Money and Work chapter I also said that unemployable people should be able to live in the equivalent of “on base,” as in the military, except that in the world I’m envisioning that would be a social and public work corps, not an army corps.
There’s a common theme, which is that the government needs living facilities for a number of people with different needs. The obligation to care for the elderly, the disabled, children, the ill, and the mentally incapable, all mean that there need to be an array of state-funded facilities to provide that care. There’s also the aforementioned work corps, some of whom will need living quarters.
These facilities are probably best not viewed in isolation. Some of them might even be improved if compatible functions were combined in a mixed-use environment. For instance, child care, the elderly, the disabled, and the work corps can all benefit from contact among the groups, even if that contact is only for some hours of the day. It's known that children and the elderly both benefit from each other's society (in reasonable quantities). A "campus," as it were, that included the many different facilities might be the best way to deliver services efficiently with the greatest convenience to the users. Increasing diversity and choice is always a goal in a system that prizes individual rights.
I haven’t discussed transition methods much in this work since the idea is to point toward a goal rather than accomplish the more difficult task of getting there. Outright poverty would presumably not be an issue in a society with a fair distribution of resources. However, outright poverty is a big issue in almost every society now, and there are some simple measures that can be taken against it which are worth mentioning.
The simplest way to alleviate poverty is to give the poor money. I know that P. J. O'Rourke said the opposite, but his line was a witticism, resting on nothing more than his own insight and wit. Actual study and experience prove otherwise.
The concept of alleviating poverty by giving the poor money seems silly only because psychological coping mechanisms among the non-poor require blaming the poor for their own fate. Their poverty must be due to something they’re doing, with the corollary that therefore it’s not something that can happen to me. And what they must be doing is wasting money. Otherwise why would they lack it? And if they waste money, it’s crazy to give them more without very stringent controls.
The fallacy lies in the assumption that the poor cause their own condition. That is not always true. Some of the poor are miracles of money management, squeezing more out of a penny than more liberally endowed people can even conceive. Nor do they lack energy. It's just that, as de Kooning put it, being poor takes all one's time. The result is that without any spare time or money, the poor have no way to work themselves out of the trough they're in.
Now comes the part that's really hard to conceive. Evidence is accumulating from several national programs in Brazil, Mexico, and India that there's a large gender disparity in who wastes money and who doesn't. Women spend it on their children, on feeding them and on sending them to school. Men are less reliable. National programs, such as Brazil's, which have given money to poor women have been very successful. (Just to be clear, I doubt this is genetic. Women, as a disadvantaged class, don't see themselves as being worth more than others, whereas men see their own needs first. Whether women would still be good guardians of cash if they had a higher opinion of themselves remains to be seen. For now and the foreseeable future, poor women are good guardians.)
Also, as those sources and an accumulating number of others point out, reducing the middlemen who handle aid money is a critical component of success. One reason giving the poor money may not seem to work is that the stringent controls people like to impose require plenty of overseers. These middlemen, like people everywhere, tend to latch onto any money near them, so less reaches its final destination. This can be due to outright corruption, but the bigger losses are usually more subtle than that. Either way, the fewer intermediaries, the better.
The take-home message for a country trying to move toward justice and alleviate poverty is to provide an allowance to mothers with children, contingent, if that seems necessary, on keeping the children current with vaccinations and schooling.
Support for basic research relates to academics and is discussed in the next chapter, but there are other research-related issues with direct impact on care. Applied research, especially on locally important conditions, can improve care or reduce costs. Research programs also serve to retain creative members of the medical profession, precisely the people likeliest to come up with solutions to problems. That is no less true in poor countries, which tend to consider all research a luxury. Some of that feeling may be due to the very expensive programs in rich countries, but research does not always have to be costly. A few thousand dollars applied intelligently to local problems can return the investment hundreds of times over. As for expensive research, a country without the infrastructure could still make a conscious decision to provide some support. It could participate by supporting field work in partnership with other institutions, or by funding travel for medical scientists working on specific projects. I would argue that some support of research is always appropriate because of its stimulating effect on the rest of the system.
Another aspect of research is its context, which tends to promote some questions and therefore to preclude whole sets of answers to issues that are not addressed. Consider the current state of medical research as an example. Finding cures is much more spectacular than, for instance, preventing disease by laying down sewer pipes. Nobody ever got a Nobel Prize for sewer pipes. Yet they’ve saved more lives than any medical cure so far. So, despite the utility of prevention, there’s a bias toward cures. Add to that a profit motive, where far more money can be made from cures than prevention, and research questions regarding prevention won’t even be asked, let alone answered.
A recent development in medicine has opened another field in which profit motives hold back rather than promote progress. Genomics has made explicit something that has always been obvious to practicing medical personnel: people vary in their reactions to treatment. Sometimes, as with some recent cancer drugs, the variation is so great that the difference is between a cure and complete uselessness. But profits are made by selling to millions, not to a few dozen folk scattered across the globe. So, despite the fact that some intractable diseases require individualized treatments, pharmaceutical companies aren’t very interested in bringing the fruits of that academic research to market. They keep searching for one-size-fits-all blockbuster drugs instead of what works. In effect, profits are costing us cures.
The government can put its finger on the scale here. It can provide grants for the questions of greater benefit to society. It can provide lucrative prizes for the best work. (It’s amazing how fast status follows lucrative prizes in science.) It can endow a few strategic professorships. The cost of all these things is minute compared to the social and eventual financial benefits.
A function of government in the area of medical and social research should be to keep an eye on the context and to counteract tendencies to skew research toward glamorous or profitable questions. The structural problems now are that preventive and individual medicine are underserved, but in another time the biases might be precluding other questions. Government should act, as always, to counteract tendencies that don’t serve the public good.
The concept of care follows the same principles as all the other aspects of society: spread the benefits as widely as possible and the burdens as thinly as possible without shortchanging some at the expense of others. Do all this within evenly applied limits imposed by technology and wealth. It’s the same “one for all, and all for one” idea that’s supposed to guide the internal relations of human groups in general. When it doesn’t, it’s because people fear they’ll lose by it … or hope they, personally, will gain by depriving someone else. The experience of countries with good social care systems shows that in fact the opposite is true. Either everyone wins or everyone loses.