Interview With Economists Santiago Capraro, Carlo Panico & Luis Daniel Torres-Gonzàles On The Causes Of Inequality

09-26-2024 ~ Economic inequality is one of the most pressing issues of our times. Inequality has pernicious effects on individuals and society at large. It causes a wide range of health and social problems, from reduced life expectancy and lower social mobility to violence and mental illness. Economic inequality erodes societal cohesion and fuels support for authoritarian leaders. But what is driving inequality in the contemporary world? A recently published book, titled Inequality and Stagnation: A Monetary Interpretation, and co-authored by academic economists Santiago Capraro, Carlo Panico and Luis Daniel Torres-Gonzàles, attributes rising inequality to the outgrowth of the financial sector. In the interview that follows, Santiago Capraro, Carlo Panico and Luis Daniel Torres-Gonzàles make the case for an economic approach that, in their own view, offers the best explanatory framework for understanding the driving forces behind inequality.

C.J. Polychroniou: Income and wealth inequality has risen sharply since the 1980s in most advanced economies around the world and has been blamed for many of the social ills facing capitalist societies in the 21st century. Economic inequality is also particularly problematic in most emerging and developing economies–and there is little evidence to suggest that this is due to weaker redistributive pressures in the developing world than in advanced liberal democracies. Indeed, in your new book, titled Inequality and Stagnation: A Monetary Interpretation, you argue that the cause of inequality, along with the sluggish growth of recent decades, is the outgrowth of the financial sector. In your view, how did the changing character of the financial system following the collapse of the Bretton Woods system lead to rising inequality and sluggish growth?

Capraro, Panico & Torres-Gonzàles: Our book addresses theoretical, historical, and institutional issues, deriving from the writings of Keynes and Sraffa a Classical-Keynesian approach that focuses on the interactions between political arrangements, distributional variables, and the level of output and growth. This approach is used to argue that the outgrowth of the financial sector is the main cause of the low growth and rising inequality observed in recent decades.

The Classical-Keynesian approach denies that money is neutral in long-period analysis, i.e., it rejects the claim that monetary factors have no persistent influence on the levels of growth and distribution. It highlights that monetary factors and the institutional organization and conduct of economic policy play a key role in shaping the path of the economy, and it offers the following interpretation of recent events.

After the abandonment of the Bretton Woods Agreements, financial regulation shifted from an approach based on the discretionary powers of the authorities over the managers of financial firms to one based on fixed rules, such as capital requirements. Pressure from the financial sector on the political world favored this change, which led to the transformation of the specialized system, in which financial companies must operate in a single type of activity, into a universal one, which allows them to operate in multiple businesses, such as credit intermediation, capital market operations, and insurance.

The new approach to regulation has allowed for the introduction of a wide range of financial innovations and has made speculative activity predominant over the funding of production and international trade. As a result, the sector has grown at higher rates than the rest of the economy and has increased its degree of concentration and its ability to obtain legislation favorable to its interests.

The outgrowth of the financial sector has raised its share of national income and intensified distributional conflicts to the detriment of workers. Other effects have been rising instability and a series of crises, which have
– forced central banks to cut interest rates;
– led to a modification of the process of coordination between monetary and fiscal policies;
– restricted the use of fiscal policy;
– generated negative effects on effective demand and growth;
– increased job insecurity;
– reduced workers’ bargaining power.

At the same time, the alterations of the financial markets have modified the behavior of corporate firms, which have replaced the previously adopted strategy, known as retain and reinvest, with one aimed at short-period capital gains.

This course of action has increased managers’ incomes and produced negative effects on investment, which have further contributed to the decline of effective demand and growth.

This monetary interpretation differs from those offered elsewhere in the literature.

How did the outgrowth of the financial sector affect the working of the economy?

The book focuses on the distributional motivations behind the financial industry's pressures on the political world to describe how this industry has played a key role in shaping the recent behavior of the economy. The changes in legislation, obtained through the sector's large expenditures on lobbying, have led to its outgrowth and to a large number of modifications in the working of the economy and in the organization and conduct of economic policy.

Chapter 2 of our book presents information on these changes, starting with those caused by the financial regulation introduced after the Bretton Woods era. The new legislation has allowed the financial sector to grow at higher rates than the rest of the economy. As evidence of this expansion, we can recall that, from 1977 to 2007, the annual growth rate of international financial transactions at constant dollars was 18.33%, while international trade grew by 8.76% and world GDP by 3.12%.
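To get a sense of what those annual rates imply over the full 1977-2007 period, the short back-of-the-envelope sketch below compounds them over 30 years. It is an illustration added here, not a calculation from the book.

```python
# Compound the annual growth rates quoted above over 1977-2007 (30 years)
# to compare the cumulative expansion of finance, trade, and output.
rates = {
    "international financial transactions": 0.1833,
    "international trade": 0.0876,
    "world GDP": 0.0312,
}
years = 2007 - 1977  # 30 years

for name, rate in rates.items():
    factor = (1 + rate) ** years
    print(f"{name}: roughly {factor:.1f}-fold increase")

# Approximate output: financial transactions ~156-fold, trade ~12-fold, world GDP ~2.5-fold,
# one way to visualize the "outgrowth" of the financial sector relative to the real economy.
```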

A process of concentration of the sector has accompanied this expansion. In 2007 the gigantic international financial business was dominated by 17 mega-banks; now 12 dominate it. In addition, between 1984 and 2021, the number of financial companies insured by the FDIC decreased by 70%, from 14,261 to 4,236.

While the financial industry grew and concentrated, instability and the number of crises worldwide increased after the long period of stability of the Bretton Woods era. The following table, based on data presented by Laeven and Valencia (2020), reports the number of banking, debt and currency crises that have occurred since 1970.

1970-79: 35
1980-89: 164
1990-99: 211
2000-17: 130

The large number of crises in the 1990s induced national governments to treat financial stability as the primary objective of fiscal policy. Austerity began to dominate in many countries and the authorities changed the organization of policy. For the first time in history, monetary policy became the leading part of economic policy and the public sector became a big creditor of the central bank.

Instability has also manifested itself through exchange rate volatility. This has modified the conduct of monetary policy in less rich countries, imposing a large accumulation of international reserves and huge sterilization operations that have further promoted the use of austere fiscal policies (see Chapter 12 of the book).

In the richest countries financial instability has imposed the conduct of a monetary policy based on large liquidity issues and low interest rates. As argued in Chapter 11 of our book, the Federal Reserve has been forced, since the early 1990s, into a persistent and widespread reduction of interest rates, which has also lowered the rate of return on shares, causing other relevant effects on income distribution.

These results have been accompanied by a marked reduction in the annual growth rate of global GDP, from around 5% in the 1960s to around 2% in the 2010s. Job insecurity has increased, sharply reducing the ability of workers to appropriate productivity gains, as shown by the following figure.

Productivity-compensation gap for the US economy, 1950-2019

Low wage increases have reduced inflation. Thus, while instability grew, inflation vanished for a long period of time, inducing the monetary authorities to be more concerned about the former and to implement a policy of financial stability targeting, instead of the announced inflation targeting.

The effect on income distribution has been a fall in the wage share. At the same time, the remunerations of the managers of large corporations have risen sharply. Taking advantage both of workers’ difficulty in appropriating productivity gains and of the declining path of stock rates of return, managers have been able to attribute to themselves a large portion of firms’ value-added gains. As Piketty (2014: 278, 302–3, 334) points out, in recent years 65% of those who make up the top 1% group in the US are managers of large corporations, mainly those of the financial sector.

The new position of managers has generated a rising distribution of dividends that has negatively influenced the funding of investment, further contributing to the fall in effective demand and growth.

To what extent did the technological advances of the 1970s contribute to the reshaping of the financial industry?

Technological advances are always relevant in the restructuring of an industry. Nonetheless, one can argue that the new approach to financial regulation, spawned by the change in legislation after the Bretton Woods era, has been the main source of the reshaping of the financial industry and its outgrowth.

Chapter 9 of the book employs the Classical-Keynesian approach to examine the evolution of financial regulation in the United States, arguing that, without the introduction of these changes, legislation would have prevented the explosive growth of the financial sector. Based on the Classical-Keynesian approach, the chapter interprets the evolution of regulation as the result of the pressures of the financial industry on the political world. It presents statistical information showing that this industry has consistently spent more than other industries on lobbying activities.

After the Bretton Woods era, financial regulation changed from an approach based on the discretionary powers of the financial authorities over the managers of financial firms, which was introduced by Roosevelt after the crisis of 1929, to one based on pre-established rules, like capital requirements. The new regulation has permitted the introduction of different forms of financial innovation and has favored the outgrowth of the financial sector. Moreover, it has contributed to

– modifying the functioning of markets,

– altering the strategy of corporate enterprises,

– increasing financial instability and the number of crises,

– reordering the conduct and organization of economic policy,

– negatively influencing income distribution and growth.

Financial crises have become more common and more intense during the last decades. Is it simply because of deregulation?

Again employing a Classical-Keynesian approach, Chapter 10 of our book examines the causes of the increase in systemic risk and the number of crises. It considers the powers that legislation attributes to the authorities as the crucial element in the analysis of financial stability, stressing that the study of crises should focus on the formation of legislation and financial policy.

The chapter argues that the failures of the institutional organization of financial markets can destabilize the operators’ expectations that determine the degree of liquidity of assets and thus cause solvency problems and crises. In addition, it shows that the same failures of institutional organization can be observed in the periods that preceded the crises of 1929 and 2007. In the years prior to the two crises the same conflicts developed between the financial industry and the rest of society over the transformation of the specialized system into a universal one. During those years one can also observe

– the same explosive growth of the financial industry,

– the same process of concentration of this sector,

– the introduction of the same forms of financial innovation,

– the use of the same incentives for managers, executives and employees of the sector,

– the presence of the same deceptive behaviors in the financial world.

Thus, the Classical-Keynesian approach allows one to state – as Stiglitz (2003: 79) does – that the crises are characterized by irrational exuberance and speculative bubbles. Yet, unlike Stiglitz, the Classical-Keynesian approach leads one to make the crucial addition that the exuberance and the bubbles are caused by the faults of the legislation regarding the institutional organization of markets and the powers of the authorities.

The post-Bretton Woods Monetary System ushered in a new era of economic policies and brought into play different theories of income distribution. In Inequality and Stagnation, you propose a monetary interpretation based on the Classical-Keynesian model of inequality and stagnation. What are the advantages of this approach for understanding the role of the organization of financial markets and in explaining financial crises?

The advantages of using a Classical-Keynesian approach for interpreting the role of the financial sector in recent years can be perceived by recalling the main interpretations of the rising inequality proposed by the literature.

The dominant interpretations, which our book names “real”, accept the neutrality of money in long-period analysis. Some “real” interpretations acknowledge that monetary and financial factors can play a role. Yet the way they integrate these factors into the theoretical foundations of the discipline leads them to offer a false description of how these elements operate.

An important review of this literature states that the weakness of these interpretations is due to the lack of ‘a satisfactory theoretical framework for considering the joint and endogenous evolution of finance, growth and inequality’ (Demirgüç-Kunt and Levine, 2009: 289). Our book uses the Classical-Keynesian approach to provide the interpretation of the recent inequality and stagnation with a satisfactory theoretical framework.

The literature also presents a group of Post Keynesian interpretations emphasizing the role of monetary and financial factors. The Classical-Keynesian approach belongs to this group. It holds that the analysis of monetary and financial events must start from the distributional conflicts that shape political agreements and influence legislation and financial policies (see Palma, 2009). These elements define the technical aspects of the working of financial markets and the way the authorities can intervene to stabilize them.

By adopting this perspective, the Classical-Keynesian approach avoids assuming the existence of ‘ironclad tendencies’ in the working of the economy. It allows one to understand why processes of greater or lesser growth and inequality are observed over time, prompting inquiry into how the political setting can be modified and the current tendencies reversed.

The following summary of the main interpretations of recent inequality and stagnation can better clarify the advantages of adopting a Classical-Keynesian approach.

“Real” interpretations

The interpretation of our book differs from that derived from neoclassical theory, which accepts the tendency to full employment, the neutrality of money in long-period analysis, and the view that the level of distributive variables depends on the relative scarcity of productive factors. Mankiw (2013) uses this theory to argue that the recent rise in inequality is due to the increased demand for the talents of sports and music stars.

Alvaredo, Atkinson, Piketty and Saez (2013) criticize Mankiw by presenting statistical information showing that the group that has benefited most from the recent change in distribution is composed not of sports and music stars but of the managers of large corporations, particularly of financial companies.

Piketty’s (2014) interpretation argues that the greater inequality has been caused by an exogenous and inevitable reduction in the growth rate of economies.

Acemoğlu and Robinson (2015) criticize Piketty (2014), saying that his description of the dynamics of technology and growth fails to capture crucial elements of the functioning of the economy and to explain why, over time, processes of greater or lesser growth and equality are generated. Acemoğlu and Robinson recall some examples in which increases in inequality caused social reactions that changed political balances and legislation and favored a better distribution of income. However, when Acemoğlu and Robinson (2015) describe the aforementioned dynamics, they focus on the evolution of “real” factors such as technology, education, and labor market institutions, overlooking the evolution of monetary and financial institutions and the legislation that generates them. Acemoğlu and Robinson (2015) have inspired a wide body of literature, which has also overlooked the evolution of financial institutions (see De Loecker, Eeckhout, & Unger, 2020).

“Real” interpretations that recognize the role of some monetary factors

While accepting the view of Acemoğlu and Robinson, Rajan (2010) acknowledges the role of financial institutions but does not admit that the rise in inequality depends on the pressures of the financial industry to obtain legislation favoring its incomes. According to him, the rise is the result of increased instability and of the fact that it is not possible to prevent greedy operators and inept and corrupt public officials from harming the work of the spontaneous market forces that guarantee the efficient functioning of a competitive economy. The effects of incompetence, cheating and corruption – Rajan says – have been felt even in the American system, even though it enjoys well-shaped institutions. Rajan (2010) concludes that the presence of elements of inefficiency and corruption has increased with the recent integration of emerging countries into international markets.

Stiglitz (2012) too refers to monetary factors but, unlike Rajan, recognizes the role of lobbying activity in influencing the behavior of the authorities. Stiglitz (2012: 111-2 and 119-120) accepts the neoclassical view that in a competitive economy money is neutral in long-period analysis and spontaneous market forces produce efficient results that make government interventions unnecessary.

According to him, market imperfections generate the problems of inequality that economic policy must eliminate. Unfortunately, in recent decades governments have been more likely to favor the interests of rich and powerful groups than social justice.

Stiglitz’s interpretation presents two elements of weakness. First, it does not analyze the distributional conflicts that have influenced the behavior of the authorities. Second, accepting the neoclassical foundations, Stiglitz does not have a logical critique of this theory and must prove, through empirical analyses that are difficult to elaborate, that the effects of imperfections are more relevant than those of “real” factors. Mankiw (2013: 30) has highlighted the weakness of this position, observing that Stiglitz would have had to empirically demonstrate that the high incomes of the top 1% are the result of the operation of market imperfections and do not reflect the greater demand for the talents of the people who make up this group.

In the 1990s, the essays of the New Growth Theories also asserted that, when markets are not competitive, financial policies and innovation influence inequality and the growth of economies (see Greenwood and Jovanovic, 1990; King & Levine, 1993; Pagano, 1993). The review by Demirgüç-Kunt and Levine (2009) recognizes that this literature

‘underestimates the potentially enormous impact of financial policies on inequality… Financial regulation legislation deserves a much more prominent place in the study of inequality… Literature … lacks a satisfactory theoretical framework to consider the joint and endogenous evolution of finance, growth and inequality … There is good reason to believe that income distribution shapes public policy, including financial policy. Thus, understanding the channels through which income distribution shapes the functioning of financial systems and financial policies are extraordinarily valuable lines of research’ (Demirgüç-Kunt and Levine, 2009: 289-290).

Citing the Handbook of Income Distribution by Atkinson and Bourguignon (2000), Demirgüç-Kunt and Levine (2009) conclude that this literature has failed to develop these lines of research. The Classical-Keynesian approach used by our book attributes to these lines of research a central position.

“Monetary” interpretations

A large part of the Post Keynesian literature emphasizes the role of monetary and financial factors. It proposes a homogeneous view of the functioning of the economy, examining it from various perspectives but elaborating them through different methodological procedures.

Lavoie (2016: 60) states that ‘the drawbacks and weaknesses of modern capitalism are due not to price rigidities or market imperfections, but rather to the intrinsic dynamics of the market system’. He recalls Minsky’s financial fragility hypothesis to argue that ‘capitalism is inherently unstable [because] … in a world of fundamental uncertainty … speculative euphoria … is an inevitable outcome’ (Lavoie, 2016: 61).

Boyer (2000), on the other hand, focuses on the changes in the relationships between shareholders, managers and workers of large corporations, formalizing a finance-led growth model, which competes with the wage-led and profit-led models of Bhaduri and Marglin (1990).

A different line of research in Post Keynesian literature argues that analyses of monetary phenomena make theoretical sense if, instead of merely examining the technical aspects, they consider the distributional conflicts that shape political agreements and influence legislation and financial policies (see Palma, 2009).

This line of research can be found in the Classical-Keynesian approach, derived from the writings of Keynes and Sraffa, which starts from the degree of liquidity of assets and argues that it ends up being shaped by the institutional organization of markets and the ability of the authorities to control stability. As Crotty (2019: 239-57) points out, in the General Theory, Keynes (1936: 162) argued that, if political agreements establish legislation that generates a well-set institutional organization and regulation, expectations are directed towards stability and the economy lives through ‘normal times’. On the contrary, when the pressures of economic groups succeed in shaping legislation, the work on institutional organization must be considered ‘ill done’ (Keynes, 1936: 154) and the economy will live through ‘abnormal times’, during which the functioning of markets is close to that of a casino.

According to the Classical-Keynesian approach, distributional conflicts are key elements in the analysis of financial stability. The adoption of this approach makes it possible to clarify how political elements shape the ordinary functioning of a competitive economy and allows for an analytical critique of the logical coherence of neoclassical theory.

What about Thomas Piketty’s documentation of the long-term evolution of wealth and income distributions? What are the strengths and shortcomings of his approach to income and wealth distribution?

Piketty (2014) offers a great contribution to the documentation of the long-term evolution of wealth and income distributions. His empirical reconstruction allows a deep comprehension of these phenomena. His theoretical positions are, however, weak. Acemoğlu and Robinson (2015) rightly criticize his view that the greater inequality has been caused by an exogenous and inevitable reduction in the growth rate of economies. Moreover, his sparse and meager references to the role of monetary and financial factors highlight the defective way he integrates these factors into the theoretical foundations of the discipline.

The weakness of his theoretical positions is also exposed by Piketty’s (2014: 215-216) statement that the assumption of decreasing marginal productivities is natural to accept. He fails to appreciate that this assumption was at the center of the 1966 debate on capital theory published in the Quarterly Journal of Economics, which proved that it faces serious logical shortcomings when the analysis supposes that the economy produces more than one commodity.

Piketty (2014: 200, 215-216, 231-232) provides an account of that debate in terms of ‘postcolonial behavior’, ignoring that in his Summing Up Samuelson (1966: 583) recognized that, being derived from mathematical procedures, the shortcomings of the assumption of decreasing marginal productivities represent ‘facts’ that everybody can verify, not personal or ideological standpoints.

Part 2 of our book deals with the state of scientific knowledge on the theoretical foundations of the economic discipline, highlighting the consequences of the logical shortcomings of neoclassical theory. Then, Part 3 highlights that Keynes and Sraffa jointly worked to revolutionize the theoretical foundations of the discipline, proposing a monetary theory of production and distribution and identifying what must be done from a scientific perspective to achieve this result.

What sorts of reforms are needed to counter the problems generated by the dominance of finance in the 21st century?

The main problem that countries face nowadays is the imbalance in power relations that the dominance of finance has generated. The history of human societies teaches us that concentrations of power are the worst enemy of democracy. They influence political life, changing the distribution of income to favor their interests while impairing social and economic stability. Thus, each country has to strengthen, in the first place, the unity and the security of its national institutions.

Achieving positive results is not easy, particularly when the concentration of power enjoyed by the financial industry has reached the current levels. It requires long-term commitments and a broad consensus on the need to introduce indirect measures such as

– improving the education system,

– reforming the funding of parties and electoral campaigns,

– regulating the media,

– strengthening the institutions that guarantee the balance of powers and the democratic game.

The political strategy is difficult. Yet, it is important to consider it because the problems that the dominance of finance will continue to generate are not sustainable over time from an economic, social, and political point of view.

References
Acemoglu D., Robinson J.A., 2015, The rise and decline of general laws of capitalism, Journal of Economic Perspectives, 29 (1), Winter, 3-28.
Alvaredo F., Atkinson A.B., Piketty T., Saez E., 2013, The top 1 percent in international and historical perspective, Journal of Economic Perspectives, 27 (3), 3-20.
Atkinson A.B., Bourguignon F., eds., 2000, Handbook of Income Distribution, Amsterdam: Elsevier.
Bhaduri A., Marglin S., 1990, Unemployment and the real wage: the economic basis for contesting political ideologies, Cambridge Journal of Economics, 14 (4), 375-393.
Boyer R., 2000, Is a finance-led growth regime a viable alternative to Fordism? A preliminary analysis, Economy and Society, 29 (1), 111-145.
Crotty J., 2019, Keynes against Capitalism: His Economic Case for Liberal Socialism, Abington: Routledge.
De Loecker J., Eeckhout J., Unger G., 2020, The rise of market power and the macroeconomic implications, Quarterly Journal of Economics, 135 (2), 561-644.
Demirgüç-Kunt A., Levine R., 2009, Finance and inequality: theory and evidence, Annual Review of Financial Economics, 1 (1), 287-318.
Greenwood J., Jovanovic B., 1990, Financial development, growth, and the distribution of income, Journal of Political Economy, 98 (5), 1076-1107.
Keynes J.M., 1936, The General Theory of Employment, Interest and Money, in D. Moggridge (1973), The Collected Writings of J.M. Keynes, Vol. VII, London: Macmillan.
King R.G., Levine R., 1993, Finance and growth: Schumpeter might be right, Quarterly Journal of Economics, 108 (3), 717-737.
Laeven L., Valencia F., 2020, Systemic banking crises database II, IMF Economic Review, 68 (2), 307-361.
Lavoie M., 2016, Understanding the global financial crisis: contributions of post-Keynesian economics, Studies in Political Economy, 97 (1), 58-75.
Mankiw N.G., 2013, Defending the one percent, Journal of Economic Perspectives, 27 (3), Summer, 21-34.
Pagano M., 1993, Financial markets and growth: an overview, European Economic Review, 37 (2-3), 613-622.
Palma J.G., 2009, The revenge of the market on the rentiers: why neo-liberal reports of the end of history turned out to be premature, Cambridge Journal of Economics, Special Issue on the Global Financial Crisis, 33 (4), July, 829-866.
Pasinetti L.L., 1993, Structural Economic Dynamics: A Theory of Economic Consequences of Human Learning, Cambridge: Cambridge University Press.
Piketty T., 2014, Capital in the Twenty-first Century, Cambridge, MA: The Belknap Press of Harvard University Press.
Rajan R.G., 2010, Fault Lines: How Hidden Fractures still Threaten the World Economy, Princeton: Princeton University Press.
Samuelson P.A., 1966, A summing up, Quarterly Journal of Economics, 80 (4), November, 568-583.
Stiglitz J.E., 2003, The Roaring Nineties: A New History of the World’s most Prosperous Decade, New York: W.W. Norton & Company.
Stiglitz J.E., 2012, The Price of Inequality, New York: W.W. Norton & Company.

C.J. Polychroniou is a political economist/political scientist who has taught and worked in numerous universities and research centers in Europe and the United States. His latest books are The Precipice: Neoliberalism, the Pandemic and the Urgent Need for Social Change (A collection of interviews with Noam Chomsky; Haymarket Books, 2021), and Economics and the Left: Interviews with Progressive Economists (Verso, 2021).




The Left Wins Presidential Election In Sri Lanka

09-25-2024 ~ On September 22, 2024, the Sri Lankan election authority announced that Anura Kumara Dissanayake of the Janatha Vimukthi Peramuna (JVP)-led National People’s Power (NPP) alliance won the presidential election. Dissanayake, who has been the leader of the left-wing JVP since 2014, defeated 37 other candidates, including the incumbent president Ranil Wickremesinghe of the United National Party (UNP) and his closest challenger Sajith Premadasa of the Samagi Jana Balawegaya. The traditional parties that dominated Sri Lankan politics—such as the Sri Lanka Podujana Peramuna (SLPP) and the UNP—are now on the back foot. However, they dominate the Sri Lankan Parliament (the SLPP has 145 out of 225 seats, while the UNP has one seat). Dissanayake’s JVP only has three seats in the Parliament.

Dissanayake’s triumph, which makes him the country’s ninth president, is significant. It is the first time that a party from the country’s Marxist tradition has won a presidential election. Dissanayake, born in 1968 and known by his initials, AKD, comes from a working-class background in north-central Sri Lanka, far from the capital city of Colombo. His worldview has been shaped by his leadership of Sri Lanka’s student movement and by his role as a cadre in the JVP. In 2004, Dissanayake went to Parliament when the JVP entered an alliance with Chandrika Kumaratunga, the president of the country from 1994 to 2005 and the daughter of the first female prime minister in the world (Sirimavo Bandaranaike). Dissanayake became the Minister of Agriculture, Land, and Livestock in Kumaratunga’s cabinet, a position that allowed him to display his competence as an administrator and to engage the public in a debate around agrarian reform (an issue he will likely take up as president). An attempt at the presidency in 2019 ended unsuccessfully, but it did not stop either Dissanayake or the NPP.

Economic Turbulence
In 2022, Colombo—Sri Lanka’s capital city—was convulsed by the Aragalaya (protests) that culminated in a takeover of the presidential palace and the hasty departure of President Gotabaya Rajapaksa. What motivated these protests was the rapid decline of economic possibilities for the population, which faced shortages of essential goods, including food, fuel, and medicines. Sri Lanka defaulted on its foreign debt and went into bankruptcy. Rather than generate an outcome that would satisfy the protests, Wickremesinghe, with his neoliberal and pro-Western orientation, seized the presidency to complete Rajapaksa’s five-year term that began in 2019.

Wickremesinghe’s lame-duck presidency did not address any of the underlying issues of the protests. He took Sri Lanka to the International Monetary Fund (IMF) in 2023 to secure a $2.9 billion bailout (the 17th such IMF intervention in Sri Lanka since 1965), which came with the removal of subsidies for items such as electricity and a doubling of the value-added tax rate to 18 percent: the price of the debt was to be paid by the working class in Sri Lanka, not by the external lenders. Dissanayake has said that he would like to reverse this equation, renegotiate the terms of the deal, put more of the pain on external lenders, increase the income tax-free threshold, and exempt several essential goods (food and health care) from the increased taxation regime. If Dissanayake can do this, and if he earnestly intervenes to stifle institutional corruption, he will make a serious mark on Sri Lankan politics, which has suffered from the ugliness of the civil war and from the betrayals of the political elite.

A Marxist Party in the President’s House
The JVP or the People’s Liberation Front was founded in 1965 as a Marxist-Leninist revolutionary party. Led by Rohana Wijeweera (1943-1989), the party attempted two armed insurrections—in 1971 and again from 1987 to 1989—against what it perceived as an unjust, corrupt, and intractable system. Both uprisings were brutally suppressed, leading to thousands of deaths, including the assassination of Wijeweera. After 1989, the JVP renounced the armed struggle and entered the democratic political arena. The leader of the JVP before Dissanayake was Somawansa Amerasinghe (1943-2016), who rebuilt the party after its major leaders had been killed in the late 1980s. Dissanayake took forward the agenda of building a left-wing political party that advocated for socialist policies in the electoral and social arenas. The remarkable growth of the JVP is a result of the work of Dissanayake’s generation, who are 20 years younger than the founders and who have been able to anchor the ideology of the JVP in large sections of the Sri Lankan working class, peasantry, and poor. Questions remain about the party’s relationship with the Tamil minority population given the tendency of some of its leaders to slip into Sinhala nationalism (particularly when it came to how the state should deal with the insurgency led by the Liberation Tigers of Tamil Eelam). Dissanayake’s personal rise has come because of his integrity, which stands in stark contrast to the corruption and nepotism of the country’s elite, and because he has not wanted to define Sri Lankan politics around ethnic division.

Part of the refoundation of the JVP has been the rejection of left-wing sectarianism. The party worked to build the National People’s Power coalition of twenty-one left and center-left groups, whose shared agenda is to confront corruption and the IMF policy of debt and austerity for the mass of the Sri Lankan people. Despite the deep differences among some of the formations in the NPP, there has been a commitment to a common minimum program of politics and policy. That program is rooted in an economic model that prioritizes self-sufficiency, industrialization, and agrarian reform. The JVP, as the leading force in the NPP, has pushed for the nationalization of certain sectors (particularly utilities, such as energy provision) and the redistribution of wealth through progressive taxation and increased social expenditure. The message of economic sovereignty struck a chord amongst people who have long been divided along lines of ethnicity.

Whether Dissanayake will be able to deliver on this program of economic sovereignty remains to be seen. However, his victory has certainly encouraged a new generation to breathe again, to feel that their country can go beyond the tired IMF agenda and attempt to build a Sri Lankan project that could become a model for other countries in the Global South.

By Atul Chandra and Vijay Prashad

Author Bio: This article was produced by Globetrotter.

Atul Chandra works at Tricontinental Research Services (New Delhi).

Vijay Prashad is an Indian historian, editor, and journalist. He is a writing fellow and chief correspondent at Globetrotter. He is an editor of LeftWord Books and the director of Tricontinental: Institute for Social Research. He has written more than 20 books, including The Darker Nations and The Poorer Nations. His latest books are Struggle Makes Us Human: Learning from Movements for Socialism and (with Noam Chomsky) The Withdrawal: Iraq, Libya, Afghanistan, and the Fragility of U.S. Power.

Source: Globetrotter




Noam Chomsky: The Persistent Prompt Out Of Propaganda Bubbles

Noam Chomsky

09-23-2024 ~ In June 2024 I was listening to a recording of Noam Chomsky trying, with extremely good poise, to get a Times journalist to think outside the Western propaganda bubble.
In 2023 and 2024 I listened to him responding to polemical interviews like the one with the Times, as well as to more independent, alternative or amateur media. I was transported back to 1999-2000, when I listened to that voice for the first time, except that now I was listening to a mind in its mid-nineties. This was not a mind that had aged chronologically and so found solace in a time it froze at, unlike the hasty, fatalistic human beings who often surround us with their system-conserving spiritualisms; nor was it an unprovocative mind with which demagogues could find consensus.

When an interviewer asked polemically in 2023 whether he thought, after all these years, that he had been wrong on occasion, Noam peacefully replied “many times”. He then went on to say, for example, that he had been wrong to join the movement opposing the US-led war on Vietnam later than he should have. This, from a human being in his mid-nineties, is nothing short of sheer hope. It widens the spectrum of possibilities, whether or not one uses it. I know that after the massive stroke he had in 2023, Noam may not speak his mind again. But I heard that expressions rise in him when he listens to news of the ongoing wars on Palestine and violence against life. That is the most a brain can be in a lifetime, I suppose.

Personally, the new, profound and disruptive voice I first heard in a documentary at the end of the 90s has transformed into an insistent voice of hope. The video cassettes played in the old VCR machines of the day, with their often flickering images and analogue VHS output, are long gone. I sense that somewhere along the way I have also lost much of my ability to listen, though my audio-visual hours have statistically increased in the digital now.

Noam’s academic trajectories in the domain of linguistics, morphophonemics, generative grammar, syntactic structures or mathematical linguistics are of course archipelagos far away from me. I have been caught in the crossfire of the more informed agreements and disagreements about those worlds, but from my comfortable ignorance. On the other hand, Manufacturing Consent: Noam Chomsky and the Media, the 1992 documentary brought home by a friend as a video cassette recording, left a great imprint. I had never imagined the media through the sieves of elite groups, propaganda, or the unwillingness to portray certain events coupled with the added emphasis on others.

I vividly remember the shots of the chimpanzee named Nim Chimpsky, an evident pun on Noam Chomsky, in a Columbia University language project. Apart from the stage-setting part on the linguistic work (the development of his particular rationalist explanations, syntactic structures vis-a-vis semantics, human cognition as against behaviorism, and the schools he was associated with as a linguist), the documentary was mostly about how thought control happens in modern democracies.

The media, the agenda setting and the opinion making on which he let loose unprecedented disruptive thoughts made The New York Times describe him as arguably the most important intellectual alive. But of course, the NYT had more to add. It went on to describe him as “disturbingly divided in intelligence”. It lamented that despite being an intellectual he writes such terrible things about American foreign policy. It characterized his science as complicated and his political views as “simple minded” (and hence irrelevant). Chomsky, though, was evidently relieved that the New York Times of 1979 had these negatives to follow the opening exaltations. In fact, the irritations of the mainstream media are a good preface to what Chomsky was doing through his responsible interventions, right from the 60s to the day he suffered the stroke.

Noam introduced me to ideas on the political spectrum like libertarian socialism and anarcho-syndicalism. He showed me how he and others who came out of the civil rights and anti-war movements saw the system. There was the occasional tuning into alternative radio stations and the recordings of telecasts by David Barsamian and those at Z magazine. Later some of these conversations came out in print. What got me hooked on his thought was the analysis of technologies within democracies that envisage ignorant masses, who can only be meddlesome and hence need to be controlled for their own good.

Manufacturing Consent: The Political Economy of the Mass Media (1988), co-authored with Edward S. Herman, and Necessary Illusions: Thought Control in Democratic Societies, which followed in 1989, were in retrospect even more perceptive about the unipolar world order and the mainstream eulogies that celebrated liberal democracies after the 1990s. The former gave this undergraduate psychology-thesis writer a name with which to title his small dissertation on political opinion making in 2001. My writing innocently and confidently claimed to throw light on how cognitive consent for the prevailing political order, then acquiring a communal and neoliberal tenor across India, was crafted through the everyday deployment of themes, agendas, frames, and modes of interaction.

The haunting of Noam had already prompted longer trajectories, one of which was my post-graduation. For someone otherwise enthusiastic about ecologies, behaviours, animals and cognition, ‘International Relations and Politics’ was smuggled in alongside the persistent urge to explore the tracks he suggested. I followed up Necessary Illusions with Deterring Democracy (1991), which for the first time opened up frames of the Cold War, the global system, the post-Cold War order and imperialism. Understanding Power (2003) further widened the possibility of exploring how the power of the empire, designed in the shape of the United States of America, operated simultaneously outside (Vietnam) and inside (on the welfare system) sovereign states.

I remember being thrilled to find two other works on a stroll through a book fest on Kochi city’s Marine Drive. In those days the book strolls were all about gathering as many works by Chomsky as I could! Powers and Prospects: Reflections on Human Nature and the Social Order (1996) and Profit Over People: Neoliberalism and Global Order (1999) added more reasons to charge up for post-graduation. The intensity of the brutal suppression in East Timor by the Indonesian military, with a complicit and often supportive US regime ever since the mid-60s, that Powers and Prospects talked about was a major prompt to get beyond the given. Why did the liberal global media play down a genocide that claimed almost a million lives and practically eliminated one of the largest communist workers’ parties, the PKI, while playing up others? This was a great lesson in the profound idea of the propaganda bubbles we live in. I already had a counter-opinion to the celebrated liberal internationalist Woodrow Wilson, through the perspectives in Media Control (2002) on the Creel Commission’s role in turning a population into a war-mongering mass. More Indian editions and publications by alternative publishers and left-wing groups stacked the shelves; Class Warfare (1996), Rogue States (2000) and Propaganda and the Public Mind (2001) were some of them. I thought I was ready to be a critical voice in my Masters classes. I realised later that designs other than the mainstreams in IR were taking shape through those years.

The last work I got in print was perhaps a booklet by LeftWord called Government in the Future (2005), a reprint of an earlier lecture. Before digital downloads got the upper hand and print purchases went downhill for a while, Chomsky introduced me to others like Howard Zinn’s A People’s History of the United States (1999) and Walter Lippmann’s idea of ‘spectator democracy’, wherein the public is reduced to complacent herds under capitalism. I owe the greater share of my engaged interactions during post-graduation to Noam Chomsky. I thought his voice had faded away since then, though on the few occasions I came across the audio of the man, by then in his 80s, I stayed put. By then I was in my research phase, moving increasingly into ethnographies, urban ecologies and gradually into political ecologies.

Noam Chomsky – Photo: Mathew A. Varghese

It is amazing that of late, while designing courses in political ecology and the politics of climate, Noam Chomsky re-emerges as a prompt par excellence. His perspectives on the IPCC and on legislative regimes around climate, and his reflections on the consequences for organised life, were those of a fresh researcher in her or his prime. His critical scholarship on broad frames like climate economics and his observations on hegemonic actors like ExxonMobil were as up to date as they had been in the 1960s. I have listened to the extrapolations he made on the nexus between the Koch brothers and the GOP, or the hijacking of COP28 by oil conglomerates and companies like ADNOC (Abu Dhabi). In the last couple of years Noam spoke tirelessly and at length to disparate groups about alternative designs to capitalism. He elaborated on the green commitments that need to be made out of GDPs, as well as on the domination of finance capital and banking systems, even to belligerent interviewers.

There are inevitable bio-physical silences that all life enters, while alive or otherwise. But what matters is what went on before those silences. We live in times when strings of silences can be garbed as a loud continuum of breaking news, when facts and technologies of mediation proliferate, and when fixes and fatalisms precede any attempt to understand the leviathan. Perhaps the reason I always pause and listen every time I hear Noam speak is that I have felt he was never silent. He spoke for seven decades despite the behemoths and myths of the capitalist consensus. An interviewer started off in 2023 with the sarcastic statement that Noam was the greatest intellectual, on par with big names. Then he asked how he would like to be described, what he would write if he had to fill in a form, and whether he is a public intellectual. The nonagenarian, with a smile, told him not to read too much PR. He replied that he teaches continental science and philosophy like anyone else who works as a teacher in a university. He said he never took the latter tag seriously, and that the interviewer and he probably have more privilege to go public with their intellect than someone who might hold better opinions but no privilege! The MIT professor kept communicating with the public so often written off by the mainstream as “ignorant meddlesome outsiders”, and in the face of the newer avatars of The New York Times of the 70s, which forever failed in writing him off as “disturbingly disconnected” and “maddeningly simple minded”.

Mathew A Varghese
SIRP, Centre for Urban Studies, M G University




Forests Thrive When Indigenous People Have Legal Stewardship Of Their Land

09-18-2024 ~ The fate of intact forests is closely linked to that of Indigenous peoples.

Forests are essential for life on Earth. Because they produce oxygen and help regulate the balance of carbon dioxide and oxygen in the atmosphere, forests are known as the “lungs of the Earth.”

For millions of local and Indigenous people, forests are also homes, hunting grounds, and traditional cultural and ceremonial spaces. These communities have been caring for forests for countless generations because doing so ensured their survival and the preservation of their societies. Yet, despite scientific evidence showing that recognizing Indigenous land rights is crucial to stopping deforestation, governments and corporations often fail to do so.

Carbon Sinks
Trees and forests are among the world’s best carbon capture technologies. Excess carbon is stored in trees’ trunks, roots, and surrounding soil. On average, global forests annually absorb 7.6 billion metric tons of carbon dioxide, or about 1.5 times the emissions of the United States.

Deforestation removes these essential carbon sinks, increasing the amount of greenhouse gases in the atmosphere. According to the Environmental Defense Fund, tropical forest destruction contributes around 20 percent of annual anthropogenic carbon dioxide emissions.

Beyond functioning as carbon sinks, forests are essential to environmental health, providing invaluable ecosystem services to human and nonhuman animals. These services include preventing soil erosion, improving water quality, assisting watershed development, and creating a barrier against strong winds, heavy rain, and flooding.

Healthy forests also foster biodiversity. Although they cover only 31 percent of the globe, “they are home to more than 80 percent of all terrestrial species of animals, plants, and insects,” according to the United Nations Sustainable Development Goals.

Indigenous Forest Defenders
The fate of intact forests is closely linked to that of Indigenous peoples. Many forest-dwelling communities have managed their homelands for centuries based on customary laws rooted in spiritual beliefs and conservation principles. Former UN Special Rapporteur on the Rights of Indigenous Peoples Victoria Tauli-Corpuz argued, “World leaders have a powerful solution on the table to save forests and protect the planet: recognize and support the world’s Indigenous Peoples.”

Indigenous Peoples and local communities have been managing some of the last intact rainforests for generations, and they’ve been doing so successfully. About 36 percent of the world’s remaining intact forests are on land that is either managed or owned by Indigenous peoples, states a Mongabay article referring to a 2020 study published in Frontiers in Ecology and the Environment. “The rate of tree cover loss is less than half in community and Indigenous land than elsewhere,” said Tauli-Corpuz.

In a 2021 article in the journal Ambio, more than 20 researchers argued that “[b]iodiversity is declining more slowly in areas managed by [Indigenous peoples and local communities] than elsewhere.”

Several studies confirm that forests managed by Indigenous and local communities with secure land rights have lower deforestation rates, greater biodiversity, improved livelihoods, and reduced greenhouse gas emissions.

Nemonte Nenquimo, a leader in the Waorani community in Ecuador and founding member of the Ceibo Alliance, says, “As go our peoples, so goes the planet… The climate depends on the survival of our cultures and our territories.”

These Defenders Face Constant Threat
These communities, however, face constant threats from companies seeking to log and develop their lands. On the front lines of deforestation, they frequently suffer violence, intimidation, and criminalization when they defend their lands. The assassination of Honduran Indigenous leader Berta Cáceres in March 2016 highlights such dangers. Between 2012 and 2021, the total number of environmental defenders killed was at least 1,733. The largest number of deaths took place in Brazil, where a third of the 342 activists killed were Indigenous or Afro-descendant, according to a report by the nonprofit Global Witness.

The report further stated that in 2021 alone, 200 land defenders were murdered across the globe, with more than three-quarters of the attacks taking place in Latin America.

Indigenous resistance has successfully stopped pipelines, coal plants, and deforestation. From Standing Rock to the Amazon, these communities have been challenging corporate power. Supporting Indigenous and front-line communities is essential. By gaining legal rights to their land, they can protect and manage it, preserving their way of life and safeguarding biodiversity.

A Case Study: The Dayak Bahau Community’s Resistance to Deforestation
In Indonesia, the Dayak Bahau community of Long Isun on Borneo Island is fighting to protect some of the country’s last intact forests. However, two-thirds of these forests are at risk from industrial development.

Dayak, roughly translated as “interior people,” refers to about 200 riverine and hill-dwelling ethnic groups in Borneo. The Dayak Bahau people mainly live in the east of Borneo. During the late 19th century, a large group settled in Long Isun on the banks of the Meraseh River, a tributary of the Upper Mahakam River in East Kalimantan.

Long Isun’s territory covers more than 80,000 hectares of rich forest, larger than all five boroughs of New York City combined, and the Dayak Bahau have managed most of this area. They manage it through 11 forest functions and land-use categories, including settlement areas, production forests, hunting grounds, medicinal plant areas, and grave sites. They also maintain a forest reserve area, Tana Peraaq, protected to sustain future generations.

They sustainably grow crops like rice, cacao, and durian, rotating their farms so that the forest can regenerate. While modern forms of mechanized agriculture can lead to desertification, the Dayak Bahau use swidden agriculture (letting a field lie fallow for a time so it can regenerate), foraging, and other traditional farming techniques designed to conserve the forest and its biodiversity instead of eradicating them.

Land-use decisions are made through community processes led by Indigenous leaders or Hipui. The community’s connection to its land is also spiritual, as reflected in its continued practice of customary rituals passed down for generations to honor its deities and ancestors. Because each element of nature is considered imbued with a spirit, the Dayak people strive to be in harmony with the natural world.

There are many customary regulations and rituals around rice farming. For example, many Dayak Bahau villages celebrate Hudoq, where masked dancers pay homage to “Hunyang Tenangan,” a rice-keeping divinity, and ask him to protect their rice paddies and bring a bountiful harvest.

The community also customarily respects the Ulin tree, an ironwood tree native to Borneo. If a community needs to cut down an Ulin tree, a ritual must be performed as requested by the original ancestral parents. The Long Isun believe that their ancestors’ spirits flow through the food they consume and the land, rivers, and forests they depend on. In the words of spiritual leader Inui Yek, “Though we humans can give birth, the land cannot. If we chop down the forest, what hope is there for our grandchildren? Dayaks can’t be separated from the forest; our lives are spent in the forest. Without her, we lose our identity.”

Despite the Long Isun community’s sustainable practices, the Indonesian government has allocated their land for logging and palm oil plantations. From 2009 to 2019, more than 487,631 hectares of forests were destroyed in East Kalimantan. The Harita Group now controls the community’s land.

Harita Group timber concessions now occupy more than one-quarter of Long Isun’s territory. Borneo’s rainforests, home to many unique species, are rapidly disappearing, with only 50 percent of the forest remaining due to “decades of logging, land clearing, and agricultural conversion,” according to a March 2023 article on Earth.org.

Global brands (including Mondelēz and Procter & Gamble) that source palm oil from mills operated by Harita can help protect these forests by respecting Indigenous rights. The Long Isun community is demanding legal recognition of their land as a customary forest, which would grant them ownership and management rights. Without this recognition, their forests and way of life remain at risk.

Indigenous Land Stewardship Keeps Forests Standing
Having Indigenous communities serve as stewards of our forests is integral to combating the climate crisis. According to scientists, intact forests can reduce emissions by more than 30 percent by 2050, which is essential to keeping warming below the internationally agreed limit of 2 degrees Celsius and avoiding climate catastrophe.

“Climate change poses threats and dangers to the survival of Indigenous communities worldwide, even though Indigenous peoples contribute the least to greenhouse emissions,” the United Nations points out.

Highlighting how Indigenous knowledge and understanding of the natural world can help shape a more sustainable future and counter the threats posed by rising temperatures, the UN adds, “[I]ndigenous peoples interpret and react to the impacts of climate change in creative ways, drawing on traditional knowledge and other technologies to find solutions which may help society at large to cope with impending changes.”

By Fitri Arianti

Author Bio: Fitri Arianti is a senior forest campaigner at Rainforest Action Network (RAN). A Jakarta native raised in California with a background in development studies, Arianti serves as a cultural translator. She works with RAN’s grassroots partners in Indonesia to profile the social impacts of the palm oil industry, build joint strategies, and hold corporate offenders accountable. She is a contributor to the Observatory. Find her online @CuriousFitri.

Source: Independent Media Institute

Credit Line: This article was produced by Earth | Food | Life, a project of the Independent Media Institute.

 




Why Celebrities, Actors, Writers, And Artists Fear AI

Leslie Alan Horvitz
Photo: lesliehorvitz.com

09-15-2024 ~ Artificial intelligence can steal your likeness, mannerisms, voice, and creative work. Can anything be done about it?

With online access, you can easily tap into the powerful world of artificial intelligence (AI). With Google’s AI chatbot, Gemini, or Microsoft’s Copilot, people can supplement or replace traditional web searches. OpenAI’s ChatGPT, the generative AI that has become all the rage, can create a sci-fi novel, write innovative computer code, and even diagnose a patient’s condition, all in mere minutes in response to a human prompt.

Using a text-to-image program like DALL-E, a person can create an image of a unicorn walking along a busy city street. If they don’t like it, another prompt will tweak it for them or add another pictorial element.

But who owns this computer-generated content? Answering that question becomes tricky when the prompt includes the likeness or voice of someone other than the user. Regulators, legislators, and the courts are grappling with questions about the use and application of AI, but they have yet to catch up, particularly on the issue of copyright.

“There’s a video out there promoting some dental plan with an AI version of me,” the actor Tom Hanks lamented in October 2023. “I have nothing to do with it.” He isn’t the only one facing these issues. Actress Scarlett Johansson also found that her voice and likeness were used in a 22-second online ad on X (formerly known as Twitter).

Don’t be taken in by singer Taylor Swift “endorsing” and giving away free Le Creuset Dutch ovens to Swifties—her fans. While Swift has said that she likes Le Creuset cookware, she isn’t doing ads for the brand. This and many other AI-generated fake ads use celebrity likenesses and voices to scam people. These include a fake promotion of weight loss gummies by country singer Luke Combs, a fake video of journalist Gayle King touting weight loss products, and another fake video featuring the influencer Jimmy Donaldson (known to his followers as MrBeast).

A casual listener might have mistaken the song “Heart on My Sleeve” for a duet between the famous rap artist Drake and the equally famous singer The Weeknd. But the song, released in 2023 and credited to Ghostwriter, was never composed or sung by Drake or The Weeknd. There are many other instances of singers’ voices being generated with AI. For example, an AI-generated version of Johnny Cash singing a Taylor Swift song went viral online in 2023.

This raises questions about who the rightful owners of these products are, considering that they are in whole or in part produced by AI. And what rights do Tom Hanks, Scarlett Johansson, Taylor Swift, and Drake have over their likeness and voices that were used without their permission? Do they have any rights at all?

Fighting Back
Musicians and their publishers have several ways to fight back against such AI-generated content. A singer whose voice has been cloned could invoke the right of publicity (considered a facet of the right to privacy). Still, this right is recognized only in certain states, notably New York and California, where many major entertainment companies are located.

According to an article in the Verge, Drake and The Weeknd could sue Ghostwriter (if his identity were revealed) using the same law that the TV game show Wheel of Fortune’s longtime co-host, Vanna White, relied on to sue Samsung over its use of a metallic android lookalike of her in a 1992 advertisement.

The Copyright Act
The U.S. Copyright Office has adopted an official policy declaring that it will “register an original work of authorship, provided that the work was created by a human being.” Based on this, can AI content be considered the creation of a human being? In one sense it can, since a human supplies the prompt; yet the program generates content that no human being is directly responsible for, leaving the question largely unanswered. Congress needs to address this dilemma.

The Copyright Act affords copyright protection to “original works of authorship.” However, the Constitution, which led to the establishment of the Copyright Office and the Copyright Act, is silent on whether an author must be human.

The concept of transformation, though not explicitly stated in the Copyright Office’s criteria for whether a work infringes on another party’s rights, can be inferred from the Copyright Act. In terms of AI, it means that a story or an image generated by AI is so unique and distinctive, so transformative, that no objective observer could mistake the AI-generated content for the original work(s) it drew on.

So far, no one in authority has provided satisfactory answers about what regulatory frameworks are required to ensure AI’s “ethical” use. Government officials and agencies don’t appear to have kept up with technological advances. Kevin Roose, tech correspondent for the New York Times, suggested on the podcast Hard Fork that existing copyright law is ill-suited to AI. “[I]t feels bizarre… that when we talk about these AI models, we’re citing case law from 30, 40, 50 years ago,” said Roose. “[I]t… feels… like we don’t quite have the legal frameworks that we would need because what’s happening under the hood of these AI models is actually quite different from other kinds of technologies.”

But what is happening under the hood of these AI models? No one is sure about that either. What the software does with the data (text, images, music, and code) fed into the system is largely beyond human understanding or control.

Scraping the Web to Build LLMs
Two aspects of AI concern creatives working across various fields, from books to art to music. The first is the “training” of these AI models. For instance, large language models (LLMs) are “trained” when the software is exposed to staggering amounts of texts—books, essays, poems, blogs, etc. Some of this content is collected—or scraped—from the internet. The tech companies maintain that they rely on the doctrine of fair use while doing so.
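
To make “scraping” concrete, here is a minimal, purely illustrative Python sketch, not any company’s actual pipeline: it collects the paragraph text from a couple of hypothetical public web pages and saves it as a raw text corpus of the kind a language model might later be trained on. Real LLM data pipelines involve vastly larger crawls, plus deduplication, filtering, and tokenization.

# Illustrative sketch only; the URLs below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup  # assumes the beautifulsoup4 package is installed

urls = [
    "https://example.com/essay-one",
    "https://example.com/blog-post-two",
]

corpus = []
for url in urls:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Keep only the visible paragraph text, dropping markup and scripts.
    paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
    corpus.append("\n".join(paragraphs))

# Concatenate everything into one raw text file, the kind of material
# that would later be tokenized and fed to a model during training.
with open("scraped_corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(corpus))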

OpenAI, for instance, argues that the training process creates “a useful generative AI system” and contends that fair use is applicable because the content it uses is intended exclusively to train its programs and is not shared with the public. According to OpenAI, creating tools like its groundbreaking chatbot, ChatGPT, would be impossible without access to copyrighted material.

The AI company further states that it needs to use copyrighted materials to produce a relevant system: “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens,” according to a January 2024 Guardian article.

Getty Images, the image licensing service, has taken a dim view of this defense. It filed a lawsuit against Stability AI, the developer of Stable Diffusion, alleging that the company had copied its images without permission, violating Getty Images’ copyright and trademark rights.

In its suit, Getty stated: “Stability AI has copied at least 12 million copyrighted images from Getty Images’ websites… to train its Stable Diffusion model.” In Getty’s view, this is infringement, not fair use.

The second aspect of AI that worries artists and others is the prospect that AI’s production of content and other output in response to users’ prompts infringes on copyrighted work or an individual’s right to market and profit from their likeness and voice.

Also, in cases where users download content, who is charged with infringement? In the case of Napster, the now-defunct file-sharing service, it was the users who found themselves implicated and who bore legal penalties for downloading music illegally.

Will AI Make Writers and Artists Obsolete?
The Authors Guild and noted authors such as Paul Tremblay, Michael Chabon, and Sarah Silverman have filed multiple lawsuits against OpenAI and Meta (the parent company of Facebook), claiming that the “training process for AI programs infringed their copyrights in written and visual works,” according to a September 2023 report published by the Congressional Research Service. Meanwhile, e-books probably produced by AI (with little or no human authorial involvement) have begun to appear on Amazon.

AI researcher Melanie Mitchell discovered, to her dismay, that a book with the same title as hers—Artificial Intelligence: A Guide for Thinking Humans, published in 2019—was being marketed on Amazon but was only 45 pages long, poorly written (though it contained some of Mitchell’s original ideas), and authored by one “Shumaila Majid,” according to a January 2024 Wired article.

Artists, too, have responded with alarm to AI’s encroachment. Yet the practice of using original works by artists for training AI programs is widespread and ongoing. In December 2023, a database of artists whose works were used to train Midjourney, an AI image generator, was leaked online.

The database listed more than 16,000 artists, including many well-known ones like Keith Haring, Salvador Dalí, David Hockney, and Yayoi Kusama. Artists have protested in various ways, including posting the hashtag “No to AI art” on social media, adopting a tool that “poisons” image-generating software, and filing several lawsuits accusing AI companies of infringing on intellectual property rights.

“Generative AI is hurting artists everywhere by stealing not only from our pre-existing work to build its libraries without consent, but our jobs too, and it doesn’t even do it authentically or well,” artist Brooke Peachley said during an interview with Hyperallergic.

The use of AI was one of the major points of contention in the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) strike from July to November 2023. SAG-AFTRA represents about 160,000 performers. AI was also a sticking point in reaching a new deal for the Writers Guild of America (WGA), representing screenwriters.

For several months in 2023, the two unions’ strikes overlapped, all but shutting down movie, TV, and streaming productions.

“Human creators are the foundation of the creative industries, and we must ensure that they are respected and paid for their work,” SAG-AFTRA said in a March 2023 statement. “Governments should not create new copyright or other IP [intellectual property] exemptions that allow AI developers to exploit creative works, or professional voices and likenesses, without permission or compensation. Trustworthiness and transparency are essential to the success of AI.”

In its official statement, the WGA declared: “GAI [generative artificial intelligence] cannot be a ‘writer’ or ‘professional writer’ as defined in the MBA [Minimum Basic Agreement] because it is not a person, and therefore materials produced by GAI should not be considered literary material under any MBA.” The MBA is the collective bargaining agreement with the movie and TV studios.

When the WGA contract was negotiated and the strike ended in September 2023, the movie studios agreed that AI-generated content could not be treated as source material. This meant that a studio executive couldn’t have ChatGPT develop a story, hand it to writers to turn into a script, and then claim rights to the original story.

In the agreement, the WGA also “reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law,” according to a September 2023 article in the Verge.

Shortly after WGA settled, the actors worked out their own agreement and ended their walkout. SAG-AFTRA subsequently signed a deal allowing the digital replication of members’ voices for video games and other forms of entertainment if the companies first secured consent and guaranteed minimum payments.

Congress Dithers, States Act
To solve some of the challenges presented by the increasing use of AI, Congress could update copyright laws by clarifying whether AI-generated works are copyrightable, determining who should be considered the author of such works, and deciding whether or not the process of training generative AI programs constitutes fair use.

By mid-2024, Congress had made little significant progress in enacting legislation to regulate AI. According to the nonprofit Brennan Center for Justice, several bills introduced in the 118th Congress (2023-2024) focused on high-risk AI: they would require purveyors of these systems to assess the technology, impose transparency requirements, create a new regulatory authority to oversee AI (or assign that role to an existing agency), and offer consumers some protection through liability measures. Despite sharp polarization between Democrats and Republicans, there is bipartisan agreement that regulation of AI is needed.

In 2023, two leaders of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, Richard Blumenthal (D-CT) and Josh Hawley (R-MO), who are otherwise politically opposed, “released a blueprint for real, enforceable AI protections,” according to Time magazine. The document called for “the creation of an independent oversight agency that AI companies would have to register with” and “[proposed] that AI companies should bear legal liability ‘when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms,’” states the article.

Meanwhile, individual states are not waiting for Congress to act. In 2023, California and Illinois passed laws allowing people to sue AI companies that create images using their likenesses. Texas and Minnesota have made creating such images a crime punishable by fines and prison time.

The obstacles to enacting effective regulations are formidable despite general agreement that AI should be safe, effective, trustworthy, and non-discriminatory. AI legislation must also consider the environmental costs of training large models and address surveillance, privacy, national security, and misinformation issues. Then there is the question of which federal agency would be responsible for implementing the rules, which would involve “tough judgment calls and complex tradeoffs,” according to Daniel Ho, a professor who oversees an artificial intelligence lab at Stanford University and is a member of the White House’s National AI Advisory Committee. “That’s what makes it very hard,” Ho added, according to the Time article.

Journalists, especially those working for small-town and regional papers, don’t have the luxury of waiting for states, much less Congress, to implement effective regulations to protect their work. The same holds for reporters employed by local radio and TV stations. Their jobs are already at risk. Cost-cutting media moguls tend to look at AI as a convenient replacement for reporters: feed the software some facts (the score of a high school football game, the highlights of a city council or school board meeting), then prompt it to produce a publishable account, without a human reporter being involved.

AI as Co-Creator
The breathtaking pace of technological advances will likely lead to changes in artificial intelligence down the road that we can’t yet imagine. As a writer, I believe that despite all the problems (the AI-generated books on Amazon, for instance, which deceive customers into purchasing them rather than the originals), AI is less a threat than a potential tool. It can save a writer time, especially with research, but it is not destined to replace creative writers altogether.

A 2023 study by Murray Shanahan, professor of computing at Imperial College London, and cultural historian Catherine Clarke of the University of London supports this position.

“Large language models like ChatGPT can produce some pretty exciting material—but only through sustained engagement with a human, who is shaping sophisticated prompts and giving nuanced feedback,” said Clarke in a January 2024 Nautilus article. “Developing sophisticated and productive prompts relies on human expertise and craft.”

The authors see AI tools as “co-creators” for writers, “amplifying rather than replacing human creativity,” stated the article. The report further pointed out that mathematicians are still in business even after the introduction of calculators. Calculators simply made mathematicians’ lives easier. Similarly, using AI may change how we regard creativity.

By Leslie Alan Horvitz

Author Bio: Leslie Alan Horvitz is an author and journalist specializing in science and a contributor to the Observatory. His nonfiction books include Eureka: Scientific Breakthroughs That Changed the World, Understanding Depression with Dr. Raymond DePaulo of Johns Hopkins University, and The Essential Book of Weather Lore. His articles have been published by Travel and Leisure, Scholastic, Washington Times, and Insight on the News, among others. Horvitz has served on the board of Art Omi and is a member of PEN America. He is based in New York City. Find him online at lesliehorvitz.com.

Source: Independent Media Institute

Credit Line: This article was produced by Earth | Food | Life, a project of the Independent Media Institute.

 




Even The National Intelligence Director Admits Government Secrecy Is A Problem

Lauren Harper ~ Daniel Ellsberg Chair on Government Secrecy

09-12-2024 ~ Up to 90 percent of info is overclassified by the US. Whistleblowers alone can’t fix this systemic crisis of secrecy.

Deception, lies and secrecy — including lies to cover secrecy — characterize authoritarian regimes. However, the politics of lying and official secrecy are no less common in democratic governments. For example, thanks to whistleblower Daniel Ellsberg releasing the Pentagon Papers, the public learned the truth about the Vietnam War: U.S. military officials were systematically lying to Congress and the public while, at the same time, U.S. forces were committing unspeakable crimes against the Vietnamese people. But that’s not an isolated example. The U.S. government also lied about the wars in Iraq and Afghanistan. If it weren’t for independent journalism and courageous whistleblowers, we might never have known about the torture at Abu Ghraib and the U.S. government’s spying on its own people and on private citizens across the globe.

And with the 23rd anniversary of 9/11 upon us, we should also be reminded that there are still questions to be answered about Saudi Arabia’s role behind the attacks.

In the exclusive interview for Truthout that follows, Lauren Harper, the first Daniel Ellsberg Chair on Government Secrecy at the Freedom of the Press Foundation, talks about government secrecy and the role of journalism and whistleblowers in defending democracy.

C. J. Polychroniou: I’d like to start by asking you to elaborate, in broad strokes, on the problem of government secrecy, especially national security secrecy, and the extent to which it erodes the democratic process.

Lauren Harper: Information is improperly classified between 75 percent and 90 percent of the time. This prevents information sharing — sometimes vital information — between agencies, with the public, and with Congress. It’s also expensive, costing taxpayers at least $18 billion a year.

Director of National Intelligence Avril Haines has reiterated that our approach to classifying information “is so flawed that it harms national security and diminishes public trust in government.” This trust is eroded when, for example, the CIA refuses to acknowledge the existence of a drone program that is widely reported on, including in The New York Times, on the basis that the program is properly classified. It also happens when a Freedom of Information Act (FOIA) request reveals that the U.S. Marshals Service abused classification markings to obscure the nature of its cell phone surveillance program.

Congress knows excessive secrecy is a problem. There have been three bipartisan commissions since the 1950s tasked with studying it, with the Moynihan Commission on Government Secrecy in the mid-1990s being the most important. The Moynihan Commission report underscored one of the key points about government secrecy that is often under-appreciated: it is a form of government regulation. I would frame that a little differently and say secrecy is a control mechanism, and one that prevents the public from basic self-governance.

This raises serious questions about why neither Congress nor successive presidential administrations have been able to rein in excessive secrecy, whether through legislation or executive order.

I’d also add that national security secrecy is compounded by other bureaucratic challenges. Examples include agencies’ records management programs, which may allow agencies to destroy records that should be public; and technical acquisition processes, which may not take long-term records preservation or eventual public access into account.

Can any case be made in defense of government secrecy in democracies?

Yes, I think that there are real secrets that require protection, but with two important caveats. The first is that nothing should be secret forever, and the second is that there are instances where information might be properly classified but still warrants declassification or publication because it is in the public interest.

To your question: Information pertaining to current weapons of mass destruction (WMD) systems is a good example of information that should usually be secret. That said, I do not think there is a place for forever secrets in healthy democracies. At a certain point, everything should be processed for declassification. For example, this rationale about WMD should not be used to keep historical records on nuclear policy secret.

A large part of the overclassification problem is that most classification decisions are subjective, and the government’s insistence on keeping too many secrets erodes its ability to maintain the necessary ones. Embracing the principle and practice of temporary secrecy would help this.

The number of documents marked as “Classified” or “Secret” has been increasing dramatically since 9/11. Moreover, journalists seem reluctant to publish classified information even though the Supreme Court in 1971 ruled that the government cannot restrain the press from publishing classified documents under the First Amendment. Is it because of the decline of independent media that we see few journalists go public with classified scoops?

You raise an interesting point, which is that we have no idea how many documents are classified — whether it’s at the confidential, secret, or top-secret level. The last time these numbers were published was fiscal year 2017, but the agency that reported these figures, the Information Security Oversight Office, decided to stop collecting the data because the figures it received from agencies were of such poor quality that the numbers were essentially meaningless. Currently, federal agencies can’t account for how many secrets they generate and maintain, and nobody is forcing them to do so.

In terms of issues faced by the press, independent or otherwise, I think there are at least four significant hurdles. The first major obstacle is that the government has grown more adept at surveilling its employees and monitoring their communications, their devices, etc. The second hurdle is the threat whistleblowers face of prosecution under the Espionage Act for sharing classified information with the press. And after the Julian Assange case, journalists justifiably fear they’ll be prosecuted as well. The third, related hurdle is the failure to pass the PRESS Act, which would shield journalists from federal court orders to disclose their sources and from federal government surveillance of their communications. The final barrier is the deference shown to government claims that documents are properly classified in the first place. As I said above, most classification decisions are subjective, and an interagency panel that reviews agency classification decisions historically overturns them 75 percent of the time. Yet we collectively seem to take the government’s claim that information is classified at face value, and that needs to change. Journalists need to question the validity of classification decisions more; so does Congress, and so do the judges that rule in these kinds of cases.

Reporting on excessive secrecy also needs to be an ongoing beat. Think of it this way: People in the intelligence community and elsewhere work tirelessly their entire careers to keep information secret. Occasional reporting on specific examples of excessive secrecy is not enough to challenge that systemic tide.

In a system like ours, where powerful vested interests have a dominant presence in every realm of public policy and government officials withhold information in order to deceive the public, are whistleblowers democracy’s last defense?

Whistleblowers and advocates for whistleblower protections are key lines of defense, but they face serious challenges. For example, the Department of Justice spied on congressional aides in an attempt to identify agency whistleblowers. That has to have a serious chilling effect on government whistleblowers who are considering working with Capitol Hill — and on members of Congress who would consider leaking to the press. (It’s also worth mentioning that while there are established whistleblower protections in the executive branch, there is no corollary for the legislative branch.)

Whistleblowers are important, but their protections are not as robust as they should be, and these individuals should not face — or be expected to carry — the burden of fixing a system-wide crisis.

We need more tools at our disposal. A key one is continuing to fight for the Freedom of Information Act to work the way it should, and that requires mandating that agencies actually embrace automation. We also need language — either in statute or executive order — that clearly defines what “damage to national security” means when agencies are making classification decisions.

Another potential tool to help reduce government secrecy is exploring the use of artificial intelligence (AI) to declassify large swathes of older documents. I’m not at the point where I am an evangelist for the use of AI in declassification and FOIA decisions, because we run the risk of AI being trained on poor-quality human decisions. So while it’s worth exploring, AI is an area in which the government needs to work with civil society to make sure the technology doesn’t just exponentially increase bad declassification decisions.

In your opinion, why did it take so long to open up the government’s secret files on the potential link of the Saudi government to the 9/11 plot? And why is it that the government has only released a copy of a document on the case that has been heavily redacted? Do we have here yet another case of government secrecy over the 9/11 terrorist attacks?

For the same reason the government usually resists disclosing uncomfortable information: it wants to avoid public scrutiny and to avoid damaging a relationship with a foreign government whose alliance, the U.S. government still maintains, is critical to achieving its foreign policy goals.

And yes, we have secrecy surrounding 9/11 — just take a look at the 9/11 Commission Report and how many footnotes in it mention documents that are still classified. More broadly, we still have entrenched government secrecy about the post-9/11 world the U.S. created. For example, The New Yorker just published photos of the 2005 massacre of 24 civilians by Marines in Haditha, Iraq, which spawned one of the largest war crimes investigations in U.S. history. The New Yorker sued for the photos, which were taken by Marines in the aftermath of the massacre, to try to understand why murder charges against the Marines were dropped. The FOIA lawsuit for their release took four years, but others had filed FOIA requests for records about Haditha and those photos nearly 20 years ago, and the government never released them. Most alarming? The commandant of the Marine Corps said in 2014 that he was proud that the photos had never been released, and that he’d learned — presumably about the dangers of release — from the Abu Ghraib prison photos.

We still know very little about the CIA’s torture program. Jose Rodriguez, who ran the CIA’s torture program and whom the former CIA head, Gina Haspel, reported to, famously said in 2005 that “the heat from destroying” the video evidence of waterboarding Guantánamo prisoner Abu Zubaydah “is nothing compared to what it would be if the tapes ever got into public domain.” Moreover, the Senate Intelligence Committee’s full report on the CIA’s torture program is still secret, and the CIA never faced any meaningful repercussions for spying on Senate staff trying to investigate.

What are we doing wrong when: 1) government officials think we are better off destroying or burying evidence of our actions, and 2) there is no meaningful ramification for agencies and officials for engaging in bad behavior?

Source: https://truthout.org/articles/even-the-national-intelligence-director-admits-government-secrecy-is-a-problem/

This article is licensed under Creative Commons (CC BY-NC-ND 4.0), and you are free to share and republish under the terms of the license. See further guidelines here.

C.J. Polychroniou is a political scientist/political economist, author, and journalist who has taught and worked in numerous universities and research centers in Europe and the United States. Currently, his main research interests are in U.S. politics and the political economy of the United States, European economic integration, globalization, climate change and environmental economics, and the deconstruction of neoliberalism’s politico-economic project. He is a regular contributor to Truthout as well as a member of Truthout’s Public Intellectual Project. He has published scores of books and over 1,000 articles which have appeared in a variety of journals, magazines, newspapers and popular news websites. Many of his publications have been translated into a multitude of different languages, including Arabic, Chinese, Croatian, Dutch, French, German, Greek, Italian, Japanese, Portuguese, Russian, Spanish and Turkish. His latest books are Optimism Over Despair: Noam Chomsky On Capitalism, Empire, and Social Change (2017); Climate Crisis and the Global Green New Deal: The Political Economy of Saving the Planet (with Noam Chomsky and Robert Pollin as primary authors, 2020); The Precipice: Neoliberalism, the Pandemic, and the Urgent Need for Radical Change (an anthology of interviews with Noam Chomsky, 2021); and Economics and the Left: Interviews with Progressive Economists (2021).